ACO Faculty

ACO faculty are drawn from three groups: the Algorithms & Complexity Group (Computer Science), the Discrete Mathematics Group (Math), and the Operations Research Group (Tepper).

Computer Science: Algorithms & Complexity Group

  • Nina Balcan

    Main research interests: machine learning, computational aspects of economics and game theory, and algorithms

  • Guy Blelloch

    Main research interests include parallel algorithms and languages

  • Mor Harchol-Balter

    I work on the design and performance analysis of computer systems. Much of this work is theoretical, involving proving new theorems in stochastic processes and queueing theory and also finding ways to solve really hard Markov chains. I look for students who are very strong in math and probabilistic thinking.

    I am particularly interested in resource allocation/scheduling policies for distributed computer systems. This includes inventing and analyzing new load sharing policies, routing policies, cycle stealing policies, power management policies and multi-class/multi-server prioritization. I am also interested in heavy-tailed workloads and their effect on scheduling.

    Some techniques that my group and I have invented include:
    (i) Recursive Dimensionality Reduction (RDR), a technique that allows one to reduce a Markov chain that grows unboundedly in many dimensions to a Markov chain that grows unboundedly in only one dimension, by using the idea of busy period transitions;
    (ii) Recursive Renewal Reward (RRR), a technique that allows one to obtain closed-form solutions for many one-dimensional infinite repeating Markov chains that come up in computer systems;
    (iii) The All-Can-Win Theorem, a demonstration that scheduling policies which are biased towards favoring small jobs can also be preferable for large jobs.
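
    As a toy illustration of the flavor of these scheduling questions (illustrative only, not the techniques above; the arrival rate, exponential job sizes, and job count are arbitrary choices), the following sketch simulates a single-server queue under FCFS and under preemptive shortest-remaining-processing-time (SRPT), showing how a policy biased toward small jobs lowers the mean response time:

        import random

        random.seed(1)

        def mean_response_time(policy, lam=0.7, mu=1.0, n_jobs=50_000):
            """Simulate a single-server queue (Poisson(lam) arrivals, Exp(mu) job
            sizes) under FCFS or preemptive SRPT; return the mean response time."""
            t, jobs = 0.0, []
            for _ in range(n_jobs):
                t += random.expovariate(lam)
                jobs.append((t, random.expovariate(mu)))        # (arrival time, size)

            now, i = 0.0, 0          # simulation clock, index of the next arrival
            active = []              # [arrival time, remaining size] of jobs in system
            total, done = 0.0, 0
            while done < n_jobs:
                if not active:                                  # server idle: jump ahead
                    now = jobs[i][0]
                    active.append(list(jobs[i]))
                    i += 1
                    continue
                if policy == "FCFS":
                    cur = min(active, key=lambda j: j[0])       # earliest arrival
                else:                                           # SRPT
                    cur = min(active, key=lambda j: j[1])       # least remaining work
                next_arrival = jobs[i][0] if i < n_jobs else float("inf")
                run = min(cur[1], next_arrival - now)           # run to completion or next arrival
                now += run
                cur[1] -= run
                if cur[1] <= 1e-12:                             # job completed
                    total += now - cur[0]
                    active.remove(cur)
                    done += 1
                if i < n_jobs and now >= jobs[i][0] - 1e-12:    # admit the new arrival
                    active.append(list(jobs[i]))
                    i += 1
            return total / n_jobs

        for policy in ("FCFS", "SRPT"):
            print(policy, "mean response time:", round(mean_response_time(policy), 2))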

  • Ryan O'Donnell

    My research interests are:

  • Steven Rudich

    My main research areas are in complexity theory, cryptography, combinatorics, and probability. I am also interested in inference, learning theory, and randomized algorithms. I love counter-intuitive puzzles.

    Here is a list of questions I find interesting (and which I am unable to answer):

    1. If a problem has no polynomial-time algorithm, does this mean that no QUANTUM computer can solve it in polynomial-time?
    2. Is there a graph of size O(n) which contains every n-node forest as a vertex-induced subgraph?
    3. [Due to Avrim Blum] Suppose I am using one of two methods to pick poly(n) n-bit strings: a) select them uniformly at random; b) fix some boolean function f on log(log(n)) bits, fix log(log(n)) "relevant" bit positions chosen at random from 1 to n-1, and pick each n-bit string by randomly choosing the first n-1 bits and appending f of the relevant bits. Is there an efficient method to look over the list I produce and guess which method I am using? (A small sampler illustrating this setup appears after this list.)
    4. Can the security of secret-key agreement be provably reduced to the existence of a one-way function?
    5. Is there an efficient secret sharing scheme for all monotone access structures?
    6. P=NP? P=BPP? P=PSPACE? Or more modestly, is there a function in NP which can't be computed by linear-sized, log-depth circuits?
    7. Is there a reasonable combinatorial property which would make any function which possessed it hard to compute?
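
    As a concrete handle on question 3, here is a small sampler (purely illustrative: "poly(n)" is arbitrarily taken to be n^2, the hidden function is stored as an explicit truth table, and n = 64 is just a demo value) that produces a list of strings under either of Blum's two methods; the open question is whether an efficient algorithm can tell, from the list alone, which method was used:

        import itertools, math, random

        def sample_strings(n, use_hidden_function, num_strings=None):
            """Produce poly(n) n-bit strings under method (a) or method (b)."""
            num_strings = num_strings or n * n                  # "poly(n)", arbitrarily n^2
            k = max(1, round(math.log2(math.log2(n))))          # roughly log(log(n)) relevant bits
            relevant = random.sample(range(n - 1), k)           # hidden relevant positions
            f = {bits: random.getrandbits(1)                    # hidden boolean function f
                 for bits in itertools.product((0, 1), repeat=k)}
            strings = []
            for _ in range(num_strings):
                prefix = [random.getrandbits(1) for _ in range(n - 1)]
                if use_hidden_function:                         # method (b): last bit = f(relevant bits)
                    last_bit = f[tuple(prefix[i] for i in relevant)]
                else:                                           # method (a): fully uniform string
                    last_bit = random.getrandbits(1)
                strings.append(prefix + [last_bit])
            return strings

        samples = sample_strings(n=64, use_hidden_function=True)
        print(len(samples), "strings of length", len(samples[0]))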

  • Tuomas Sandholm

    My main research interests are in market design, game theory, optimization (integer programming, search, stochastic optimization, and convex optimization), mechanism design, electronic commerce, artificial intelligence, multiagent systems, auctions and exchanges, automated negotiation and contracting, equilibrium finding, algorithms for solving games, markets for advertising, kidney exchange, voting, coalition formation, preference elicitation, normative models of bounded rationality, resource-bounded reasoning, computational economics, privacy, machine learning, multiagent learning, and networks.

    Many of our systems have been commercially fielded.

    The following are just some examples of my current projects:

    Solving Games, Poker.
    We design algorithms for abstracting games and algorithms for solving for the equilibrium of a game, i.e., the best way to play it. For example, we have developed a lossless abstraction algorithm that enabled us to exactly solve poker games over 10,000 times larger than the largest solved previously. Our algorithms have also yielded some of the best poker playing programs for Texas Hold'em. We are also developing opponent modeling and exploitation techniques. Our work also involves complexity analysis and developing new game-theoretic solution concepts.
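
    At its core, "solving" a two-player zero-sum game means computing a minimax (equilibrium) strategy, which reduces to linear programming. The sketch below is a generic textbook formulation, not the group's abstraction or equilibrium-finding algorithms, and it uses rock-paper-scissors rather than a poker abstraction; it assumes numpy and scipy are available:

        import numpy as np
        from scipy.optimize import linprog

        # Row player's payoffs for rock-paper-scissors (a stand-in for a game matrix).
        A = np.array([[ 0, -1,  1],
                      [ 1,  0, -1],
                      [-1,  1,  0]], dtype=float)
        m, n = A.shape

        # Variables: x_1..x_m (row player's mixed strategy) and v (the game value).
        # Maximize v subject to A^T x >= v (componentwise), sum(x) = 1, x >= 0.
        c = np.zeros(m + 1)
        c[-1] = -1.0                                       # linprog minimizes, so use -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v - (A^T x)_j <= 0 for each column j
        b_ub = np.zeros(n)
        A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)   # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * m + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        print("equilibrium strategy:", np.round(res.x[:m], 3))   # [0.333 0.333 0.333]
        print("game value:", round(res.x[-1], 3))                # 0.0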

    Expressive Auctions and Exchanges.
    We are developing the theory and methodology of how one should design markets when the participants have rich preferences over outcomes. One of my startups has fielded 800 of the most combinatorial markets ever developed, with a trading volume of over $60 billion.
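
    For a sense of the winner-determination problem at the heart of such markets, here is a toy sketch (the bids and items are made up, and fielded markets use rich bidding languages and large-scale integer programming rather than enumeration): choose a set of package bids with pairwise-disjoint bundles that maximizes revenue.

        from itertools import combinations

        # Toy package bids: (bundle of items, offered price).
        bids = [({"A"}, 6), ({"B"}, 5), ({"A", "B"}, 12), ({"B", "C"}, 8), ({"C"}, 4)]

        def best_allocation(bids):
            """Brute-force winner determination: accept a set of bids whose bundles
            are pairwise disjoint and whose total price is maximized."""
            best_value, best_set = 0, []
            for r in range(1, len(bids) + 1):
                for subset in combinations(bids, r):
                    items = [item for bundle, _ in subset for item in bundle]
                    if len(items) == len(set(items)):            # bundles do not overlap
                        value = sum(price for _, price in subset)
                        if value > best_value:
                            best_value, best_set = value, list(subset)
            return best_value, best_set

        value, winners = best_allocation(bids)
        print("revenue:", value)          # 16: the {A, B} bid at 12 plus the {C} bid at 4
        print("winning bids:", winners)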

    Advertising Markets and Electricity Markets.
    We are developing techniques for expressive advertising markets, such as TV and online advertising, and have founded a new startup company in this space. We may also apply these approaches to electricity markets.

    Kidney Exchange.
    We have developed the market designs and algorithms that run the nationwide kidney exchange at UNOS. Our current research focuses on developing better models and algorithms for the dynamic problem.
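
    A stripped-down version of the static clearing problem behind such an exchange (a toy sketch with made-up compatibility data, not the deployed UNOS models, which also handle chains, priorities, and much larger dynamic instances): patient-donor pairs form a directed compatibility graph, and one selects vertex-disjoint short cycles, here of length 2 or 3, so that as many pairs as possible receive a transplant.

        from itertools import combinations

        # arcs[u]: pairs whose patient is compatible with pair u's donor (toy data).
        arcs = {0: {1}, 1: {0, 2}, 2: {3}, 3: {1, 4}, 4: {2}}
        pairs = list(arcs)

        cycles = []
        for u, v in combinations(pairs, 2):                    # 2-cycles (direct swaps)
            if v in arcs[u] and u in arcs[v]:
                cycles.append((u, v))
        for u, v, w in combinations(pairs, 3):                 # 3-cycles, both orientations
            for a, b, c in ((u, v, w), (u, w, v)):
                if b in arcs[a] and c in arcs[b] and a in arcs[c]:
                    cycles.append((a, b, c))

        best, best_size = [], 0
        for r in range(1, len(cycles) + 1):                    # brute force: tiny instance only
            for chosen in combinations(cycles, r):
                used = [p for cycle in chosen for p in cycle]
                if len(used) == len(set(used)) and len(used) > best_size:
                    best, best_size = list(chosen), len(used)

        print("chosen cycles:", best, "->", best_size, "transplants")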

    Search and Integer Programming.
    For example, how should tree search algorithms for integer programming branch? We are also developing techniques for automated and recursive decompositions.

    Revenue Maximization and Automated Mechanism Design.
    We are pioneering the idea of using optimization algorithms to automatically design the rules of a game (such as an auction) so that the designer's objective is maximized even though the participants play based on self-interest. An example application is revenue-maximizing combinatorial auctions. Another example is channel abstraction (bundling, automated item definition), such as in TV and Internet display advertising.

    Mechanisms for Computationally Limited Agents.
    If agents are computationally limited, how should they allocate their deliberation? What should good mechanisms (e.g., auctions) look like for such agents? It turns out that, for computationally limited agents, one can sometimes design mechanisms that lead to greater systemwide good than is possible with computationally unlimited agents!

    You can check out my papers via my home page.

  • Daniel Sleator

    I have worked in a variety of different areas of computer science, including amortized analysis of algorithms, self-adjusting data structures, competitive algorithms, natural language parsing, computer game playing, synthesis of musical sounds, and persistent data structures.

    Natural Language:
    I (jointly with coauthor Davy Temperley) wrote a parser for English. The system (which we call a link grammar) is unlike phrase structure parsing or context-free parsing. The scheme is elegant and simple, and our grammar captures a very wide variety of complex phenomena in English. We (John Lafferty and I) plan to use this as a basis for a new statistical model of language. This work on language is described in two technical reports: CMU-CS-91-196, CMU-CS-92-181.

    Competitive Algorithms:
    Consider the idealized problem of deciding whether to rent or buy skis. You're about to go skiing. The cost of renting skis is $20, the cost of buying them is $400. Clearly if you knew that you were going to go skiing more than twenty times, then you could save money by immediately buying skis. If you knew that you would go skiing fewer than twenty times, then it would be prudent to always rent skis. However, suppose that you cannot predict the future at all, that is, you never know until after one ski trip ends if you will ever go skiing again. What strategy would you use for deciding whether to rent or buy skis? Your goal is to minimize the ratio of the cost that you incur to the cost you would incur if you could predict the future. (Hint: you can come within a factor of two; a short code sketch of one such strategy appears below.)
    Since the simple principle behind this example turns out to be very useful we have given it a name. A competitive algorithm is an on-line algorithm (it must process a sequence of requests, and it must process each request in the sequence immediately, without knowing what the future requests will be), whose performance is within a small constant factor of the performance of the optimal off-line algorithm for any sequence of requests.
    My collaborators and I have discovered a surprising variety of practical problems for which there exist very efficient competitive algorithms. We have also developed a partial theory of competitive algorithms. However there remain many interesting open problems, from discovering competitive algorithms for specific problems, to answering general questions about when such algorithms exist.
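
    Here is a minimal sketch of one strategy that achieves the factor of two mentioned in the hint above: rent until the rental fees total the purchase price, then buy. The $20 and $400 figures are the ones from the example.

        RENT, BUY = 20, 400

        def online_cost(num_trips):
            """Rent for the first BUY // RENT trips, then buy on the next trip."""
            threshold = BUY // RENT              # 20 trips
            if num_trips <= threshold:
                return num_trips * RENT          # never bought skis
            return threshold * RENT + BUY        # rented 20 times, then bought

        def offline_cost(num_trips):
            """What you would pay if you knew num_trips in advance."""
            return min(num_trips * RENT, BUY)

        for trips in (1, 5, 20, 21, 50, 1000):
            ratio = online_cost(trips) / offline_cost(trips)
            print(f"{trips:5d} trips: online ${online_cost(trips)}, "
                  f"offline ${offline_cost(trips)}, ratio {ratio:.2f}")   # ratio never exceeds 2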

    Data Structures:
    Data structure problems are typically formulated in terms of what types of operations on the data are required, and how fast these operations should take place. A worst-case analysis of the performance of a data structure is a bound on the performance of any operation. An amortized analysis of a data structure bounds the performance of the structure on a sequence of operations, rather than a single operation. It turns out that by only requiring amortized efficiency (rather than worst-case), a variety of new and elegant solutions to old data structure design problems become possible. My collaborators and I have devised a number of such solutions (splay trees, skew heaps, Fibonacci heaps, self-adjusting lists, persistent data structures, etc.), and I continue to have a strong interest in data structures and amortized analysis.
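
    Perhaps the simplest example of this amortized point of view is the self-adjusting (move-to-front) list mentioned above: a single access can cost as much as the length of the list, but a long access sequence that keeps returning to the same few items is cheap on average. A toy sketch, illustrative only:

        class MoveToFrontList:
            """A self-adjusting list: each accessed item is moved to the front."""
            def __init__(self, items):
                self.items = list(items)

            def access(self, x):
                """Find x by linear scan (cost = position + 1), then move it to the front."""
                i = self.items.index(x)
                self.items.insert(0, self.items.pop(i))
                return i + 1

        lst = MoveToFrontList(range(100))
        costs = [lst.access(x) for x in [99, 98] * 10]
        print("per-access costs:", costs[:6], "...")            # the first two accesses are expensive
        print("total:", sum(costs), " average:", sum(costs) / len(costs))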

    Interactive Network Games:
    I wrote and maintain the internet chess server. This very popular service allows people from all over the world to play chess, observe others playing, and communicate with each other in various ways. A rating is computed for each player as a function of his or her performance. You can try it yourself with "telnet ics.uoknor.edu 5000".

Math: Discrete Mathematics Group

  • Thomas A. Bohman

    My research is in extremal and probabilistic combinatorics; in particular, recent work has been on additive number theory, the Shannon capacities of graphs, the combinatorics of cellular automata, random graphs, and randomized algorithms. Shannon capacity is a prime example of the close link between extremal combinatorics and information theory: this graph invariant arises as a measure of the zero-error performance of an associated noisy communication channel, but it leads to very natural (and notoriously difficult) questions in extremal combinatorics. The aim of the work on cellular automata is not to study models with particular applications to other scientific fields (there are many such applicable models, but very few of them have succumbed to rigorous mathematical treatment), but rather to study those models that appear most amenable to mathematical analysis, with the goal of developing methods that can be applied to other, more intricate models.
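
    A concrete instance of why Shannon capacity is subtle (a standard textbook example, not research code): for the 5-cycle, two-letter codewords already beat the square of the one-letter optimum, as the short check below verifies, so the capacity of C5 exceeds 2.

        from itertools import combinations

        def c5_adjacent(u, v):
            return (u - v) % 5 in (1, 4)               # neighbors on the 5-cycle

        def strong_adjacent(a, b):
            """Adjacency in the strong product C5 x C5: distinct vertices whose
            coordinates are equal or adjacent in each factor."""
            (u1, u2), (v1, v2) = a, b
            return a != b and ((u1 == v1 or c5_adjacent(u1, v1))
                               and (u2 == v2 or c5_adjacent(u2, v2)))

        # alpha(C5) = 2: every 3 vertices of the 5-cycle contain an adjacent pair.
        assert all(any(c5_adjacent(u, v) for u, v in combinations(s, 2))
                   for s in combinations(range(5), 3))

        # Five pairwise non-adjacent vertices of the strong product C5 x C5.
        S = [(i, (2 * i) % 5) for i in range(5)]
        assert all(not strong_adjacent(a, b) for a, b in combinations(S, 2))
        print("alpha(C5) = 2, but alpha(C5 x C5) >= 5, so the Shannon capacity of C5 exceeds 2")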

  • Boris Bukh

    There is no ugly mathematics, only mathematics that I do not understand. Some mathematics I not only do not understand, but I do not understand why I do not understand it. For some, I do not even understand why nobody understands it. That latter kind is the mathematics I like to work on. Thus far, this has led me to work on a variety of problems that mix combinatorics, geometry, and number theory. The concrete problems change often, so to learn what I am working on now, knock on my office door!

  • Alan Frieze

    Research interests lie in combinatorics, discrete optimization and theoretical computer science. Current focus is on probabilistic aspects, in particular, the study of the asymptotic properties of random graphs and the average case analysis of algorithms. Recent work has been on a randomized algorithm for approximating the volume of convex bodies, the Hamilton cycle problem in various models of a random graph and the use of martingale inequalities in the study of random graphs. I have also been working on exact algorithms for NP-hard problems with a good average case performance. I also have a strong interest in complexity theory.
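
    As a quick empirical illustration of the kind of asymptotic statement involved (a toy experiment, not Frieze's methods; it assumes networkx is installed, and n and the trial count are arbitrary choices): the random graph G(n, p) goes from mostly disconnected to mostly connected as p passes ln(n)/n, and the transition sharpens as n grows.

        import math
        import networkx as nx

        n, trials = 500, 200
        for c in (0.8, 1.0, 1.2):
            p = c * math.log(n) / n
            frac = sum(nx.is_connected(nx.gnp_random_graph(n, p, seed=t))
                       for t in range(trials)) / trials
            print(f"p = {c} * ln(n)/n: fraction connected over {trials} samples = {frac:.2f}")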

  • Florian Frick

  • Po-Shen Loh

    I am interested in problems that lie at the vibrant intersection of Combinatorics, Probability, and Theoretical Computer Science. These include questions in Ramsey Theory (and more generally Extremal Combinatorics), where the Probabilistic Method has found many surprising applications in otherwise deterministic settings. My work also considers discrete systems that are inherently probabilistic, from static structures such as random graphs to dynamic processes such as randomized algorithms and combinatorial games. As these fields are closely related, there is much opportunity for developments from one side to find applications in another.
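
    A one-formula example of the Probabilistic Method in Ramsey Theory (the classic Erdős lower bound, shown here as a small calculation rather than anything from current research): if a uniformly random two-coloring of the edges of K_n has fewer than one monochromatic k-clique in expectation, then some coloring has none, so R(k,k) > n.

        from math import comb

        def expected_mono_cliques(n, k):
            """Expected number of monochromatic k-cliques in a uniformly random
            red/blue coloring of the edges of the complete graph K_n."""
            return comb(n, k) * 2 ** (1 - comb(k, 2))

        k, n = 10, 10
        while expected_mono_cliques(n + 1, k) < 1:   # push n as far as the union bound allows
            n += 1
        print(f"expectation at n={n}: {expected_mono_cliques(n, k):.3f}  =>  R({k},{k}) > {n}")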

  • Wesley Pegden

  • Prasad Tetali

    I am broadly interested in Discrete Math, Probability, and Theory of Computing: Markov chains, Isoperimetry and Functional Analysis, Combinatorics, Computational Number Theory, and Algorithms.

  • Konstantin Tikhomirov

    I am interested in Discrete Probability, Combinatorics, Convex Geometry, and Applications to Data Analysis.

  • Michael Young

    My primary area of research is in Discrete Mathematics, primarily Graph Theory and Combinatorics. I also have an interest in equity and inclusion in mathematics, in particular the impacts that race and mathematics culture and learning have on one another.

Tepper: Operations Research Group

  • Fatma Kılıç-Karzan

    My research involves the development of new theory and algorithms for solving problems in large-scale convex optimization as well as structured nonconvex optimization. It is mainly motivated by applications in decision making under uncertainty, machine learning, and high-dimensional statistical inference.

  • Benjamin Moseley

    My research interests are broadly in theoretical computer science and operations research. I work on the design, analysis and evaluation of algorithms. I am currently working on scheduling theory, distributed computing, and the foundations of machine learning.

  • Javier Peña

    My research interests are concerned with the mathematical and computational aspects of optimization algorithms, especially interior-point methods for convex optimization. Currently, I am investigating properties of condition numbers for linear programming. Like traditional condition numbers for linear equations, these numbers aim to capture how changes in the data of the linear program affect properties of the solutions. The study of condition numbers lies in the intersection of optimization and numerical analysis; it combines ideas from interior-point methods, convex analysis, perturbation theory, and complexity theory.
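
    For the "traditional condition numbers for linear equations" referred to above, here is a tiny numerical illustration (generic, not Peña's LP condition measures): in an ill-conditioned system, a change in the fourth decimal place of the data moves the solution by an amount of order one.

        import numpy as np

        A = np.array([[1.0, 1.0],
                      [1.0, 1.0001]])
        b = np.array([2.0, 2.0001])
        b_perturbed = b + np.array([0.0, 0.0001])     # a tiny change in the data

        print("condition number:", np.linalg.cond(A))                  # roughly 4e4
        print("solution:", np.linalg.solve(A, b))                      # approximately [1, 1]
        print("perturbed solution:", np.linalg.solve(A, b_perturbed))  # approximately [0, 2]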

  • R. Ravi

    Research interests include approximation algorithms, combinatorial optimization, and computational biology.

  • Michael Trick

    Research is in combinatorial and network optimization. Recent work has been on computational aspects of integer and network optimization along with data structures for efficient implementation. Current research has concentrated on hard integer programs with underlying network structures. Also interested in the application of algorithm design and complexity theory in the social and biological sciences.

  • Willem-Jan Van Hoeve

    Main research interests. Methodologies: operations research, optimization, constraint programming, hybrid solution methods, and decision diagrams for optimization. Applications: vehicle routing, scheduling, and network design.

