The seminar is in Wean 4601 on Mondays from
12:30pm to 1:30pm.

October 15, 2001: Geoff Atkinson.

October 29, 2001: Nick Wormald.

November 12, 2001: Amitabh Sinha.

November 19, 2001: Cliff Smyth.

November 26, 2001: Konstantin Andreev.

December 3, 2001: Artur Czumaj.

December 10, 2001: Shang-Hua Teng.

February 11, 2002: Jason Fulman.

February 18, 2002: Thomas A. Bohman.

February 25, 2002: Kent Hoj Andersen.

March 4, 2002: Venkatesh Natarajan.

March 18, 2002: Michael Perregaard.

March 25, 2002: Alan Frieze.

April 15, 2002: Lisa K. Fleischer.

April 22, 2002: Jochen Konemann.

May 6, 2002: Earl Glen Whitehead, Jr.

**Title:** Several Results on the b-independence of random graphs.

**Abstract:**
A set of vertices from a graph is independent if, for any two vertices in
the set, there are no edges between those vertices. This idea has been
widely studied in graph theory, and several results are known about the
size of the largest independent set in a random graph (Bollobas & Erdos,
Matula, Frieze, Frieze & Luczak). A b-independent set is a set of
vertices such that for any two vertices in the set, there are no paths of
length b or less between those vertices. I will prove two results on the
b-independence number (the size of a largest b-independent set) of a random graph. First, I will consider
the case where b is a constant in the random graph G(n,p), where each edge is
independently included in the graph with probability p. Second, I will
consider the case b=2 in a random regular graph.
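
To make the definition concrete, here is a small Python sketch (an illustration, not material from the talk) that tests b-independence by bounded-depth BFS; the adjacency-list graph in the example is a hypothetical toy instance:

```python
from collections import deque

def is_b_independent(adj, S, b):
    """Check whether vertex set S is b-independent in the graph given by
    adjacency lists `adj`: no two vertices of S are joined by a path of
    length b or less."""
    S = set(S)
    for s in S:
        # BFS from s, exploring only up to depth b
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == b:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        # any other vertex of S within distance b violates b-independence
        if any(v in dist for v in S if v != s):
            return False
    return True

# Example: a path 0-1-2-3; {0, 3} is 2-independent but not 3-independent.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_b_independent(adj, {0, 3}, 2))  # True  (shortest path has length 3 > 2)
print(is_b_independent(adj, {0, 3}, 3))  # False
```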

**Title:** Models of random regular graphs.

**Abstract:**
This is a survey of results on properties of random regular graphs,
together with an exposition of some of the main methods of obtaining these
results. A major feature in this area is the small subgraph
conditioning method. This establishes a relationship between random
regular graphs with uniform distribution, and other non-uniform models of
random regular graphs in which the probability of a graph occurring is
weighted according to some random variable. Information can be obtained in
this way on the probability of existence of various types of spanning
subgraphs, such as Hamilton cycles and decompositions into perfect
matchings, as well as other parameters such as the size of a smallest dominating set.
Some results about interesting non-uniform models appear as spin-offs from
the same method.

**Title:** Approximating k-cuts via network strength.

**Abstract:**
Given an undirected edge-weighted connected graph, the k-cut problem
is to partition the vertex set into k non-empty connected
components so as to
minimize the total weight of edges whose end points are in different
components. The problem is NP-hard.
In the first half of this talk, we will briefly look at some earlier
work on this problem (Goldschmidt & Hochbaum, 1988; Karger & Stein,
1996; Saran & Vazirani, 1995; Naor & Rabani, 2001). In the second
half, we will examine a recent approach which uses polynomial
time algorithms for network strength (Cunningham, 1985) to approximate
an optimal k-cut. This is joint work with R. Ravi.
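
To make the objective concrete (this illustrates only the problem definition, not the approximation algorithms discussed in the talk), here is a brute-force Python sketch on a hypothetical toy weighted graph:

```python
from itertools import product

def kcut_weight(edges, parts):
    """Total weight of edges whose endpoints lie in different blocks of
    `parts` (a dict mapping vertex -> block label)."""
    return sum(w for u, v, w in edges if parts[u] != parts[v])

def brute_force_k_cut(vertices, edges, k):
    """Exhaustively search all assignments of vertices to k blocks and return
    the cheapest one with every block non-empty.  Exponential; a toy
    illustration of the objective only."""
    best = (float("inf"), None)
    for labels in product(range(k), repeat=len(vertices)):
        if len(set(labels)) < k:          # some block is empty
            continue
        parts = dict(zip(vertices, labels))
        best = min(best, (kcut_weight(edges, parts), parts), key=lambda t: t[0])
    return best

# Example: a weighted 4-cycle; the minimum 2-cut separates {0, 3} from {1, 2}.
V = [0, 1, 2, 3]
E = [(0, 1, 1), (1, 2, 5), (2, 3, 1), (3, 0, 5)]
print(brute_force_k_cut(V, E, 2))   # weight 2
```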

**Title:** Reimer's inequality and Rudich's conjecture.

**Abstract:**
We prove a 1984 conjecture of Rudich, which arose from his work on reducibility
among cryptographic primitives. Roughly the conjecture states that if a family
of subcubes of the discrete cube is a near partition, then there is a
significant dependence among the family. The subcubes are viewed as events
where a point is selected from the cube according to a product distribution. To
prove this we use a correlation inequality that is in some sense "dual" to
Reimer's inequality (a.k.a. the van den Berg-Kesten conjecture). We also use
this inequality to prove an upper bound on the approximate decision tree
complexity of a boolean function that is quadratic in its approximate
certificate complexity, answering a 1989 question of Tardos.
This is joint work with Jeff Kahn and Mike Saks.

**Title:** Error correcting codes using expander graphs.

**Abstract:**
Expander graphs have been the subject of much study in combinatorics and
computer science. Recently they were used in a series of constructions of
asymptotically good error-correcting codes. This is a survey of results on
error-correcting codes that use expander graphs. First we will look at
expander codes, the linear-time decodable codes introduced by D. Spielman
and M. Sipser. Then we will look at recent work by M. Mitzenmacher, M. Luby
and D. Spielman on a hybrid between expander codes and low-density
parity-check codes, which improves the error-correction rate
and moves closer to the Shannon capacity bound.
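
For intuition, the following Python sketch (a toy, not the actual Sipser-Spielman construction or analysis) implements the basic sequential bit-flipping idea behind expander-code decoding; the Hamming-style checks in the example are a hypothetical stand-in for a genuine expander-based parity-check matrix:

```python
def flip_decode(checks, word, max_rounds=100):
    """Toy sequential bit-flipping decoder: repeatedly flip a bit that lies in
    more unsatisfied than satisfied parity checks.  `checks` is a list of
    index sets (one per parity check); `word` is a list of 0/1 bits.
    Performance guarantees require the checks to come from a good expander."""
    word = list(word)
    for _ in range(max_rounds):
        unsat = [c for c in checks if sum(word[i] for i in c) % 2 == 1]
        if not unsat:
            return word                      # all checks satisfied
        flipped = False
        for i in range(len(word)):
            bad = sum(1 for c in unsat if i in c)
            good = sum(1 for c in checks if i in c) - bad
            if bad > good:                   # flipping bit i helps locally
                word[i] ^= 1
                flipped = True
                break
        if not flipped:
            break                            # stuck: decoding failed
    return word

# Example: three parity checks, one bit flipped in a valid codeword.
checks = [{0, 1, 2, 4}, {1, 2, 3, 5}, {0, 2, 3, 6}]
codeword = [1, 0, 1, 0, 0, 1, 0]            # satisfies all three checks
received = codeword[:]; received[4] ^= 1
print(flip_decode(checks, received) == codeword)   # True
```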

**Title:** Selfish Traffic Allocation for Server Farms.

**Abstract:**
We study the price of selfish routing in non-cooperative networks,
like the Internet. We investigate the notion of the worst-case
coordination ratio in networks as recently introduced by Koutsoupias
and Papadimitriou. The analysis of the coordination ratio seeks the
price of uncoordinated individual utility-maximizing decisions ("the
price of anarchy"). Koutsoupias and Papadimitriou proposed that
model to study routing problems in which a set of several agents want
to send traffic from the sources to destinations along parallel links
with linear cost functions.

In this talk, we will first consider linear cost functions, and then we will discuss general, monotone cost functions as well as families of cost functions motivated by queueing theory. Our special focus lies on cost functions that describe the behavior of Web servers that can open only a limited number of TCP connections.

We analyze the worst-case coordination ratio in the simplest system, consisting of m parallel links with linear cost functions, and show that it is Theta( log(m) / logloglog(m) ). Our bound is asymptotically tight, and it resolves an open question posed recently by Koutsoupias and Papadimitriou. We also obtain a tight bound in the special case of identical links (i.e., all of the same speed), showing that it is (1+o(1)) log(m) / loglog(m).
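
As a toy illustration of the coordination ratio (restricted to pure Nash equilibria on identical links, a much simpler setting than the mixed-strategy, speed-dependent model of the talk), one can compute it by brute force on small instances:

```python
from itertools import product

def loads(assign, weights, m):
    L = [0.0] * m
    for i, j in enumerate(assign):
        L[j] += weights[i]
    return L

def is_pure_nash(assign, weights, m):
    """No job can lower its own cost (the load of its link) by moving."""
    L = loads(assign, weights, m)
    for i, j in enumerate(assign):
        if any(L[k] + weights[i] < L[j] for k in range(m) if k != j):
            return False
    return True

def coordination_ratio(weights, m):
    """Worst pure-Nash makespan over optimal makespan for m identical links,
    by exhaustive search (a toy illustration of the measure only)."""
    assignments = list(product(range(m), repeat=len(weights)))
    opt = min(max(loads(a, weights, m)) for a in assignments)
    worst_nash = max(max(loads(a, weights, m))
                     for a in assignments if is_pure_nash(a, weights, m))
    return worst_nash / opt

# Two identical links, jobs of weight 1, 1, 2, 2: the optimum balances both
# loads at 3, but pairing the two weight-2 jobs is also a pure Nash with
# makespan 4, so the ratio is 4/3.
print(coordination_ratio([1, 1, 2, 2], 2))   # 1.333...
```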

Then, we consider arbitrary cost functions. Our results can be summarized as follows.

* We give an exact characterization of those cost functions that have bounded/unbounded coordination ratio. For example, we show that polynomial functions have bounded ratio whereas exponential functions have unbounded ratio.

* We present the first models and results that take into account that Internet traffic is not homogeneous and session lengths are not exponentially distributed. In particular, we assume that the lengths of TCP sessions follow general probability distributions and different data streams may use different distributions.

* We show that monotone queueing theoretical cost functions behave extremely badly under game theoretic measures. In fact, the increase in cost due to selfish behavior is unbounded. Moreover, under bicriteria measures selfish behavior can lead to a bandwidth degradation by a factor of m, where m is the total number of servers (or parallel links).

* Finally, we consider server systems that reject packets in case of overload. Again we obtain an unbounded coordination ratio on the fraction of rejected requests. On the positive side, however, we show that the fraction of rejections can be bounded above by a small term that is polynomial in the number of servers and even exponential in the number of TCP connections that can be opened simultaneously.

Joint work with Piotr Krysta and Berthold Voecking (MPI Saarbruecken,
Germany)

**Title:** Why the Simplex Algorithm Usually takes Polynomial Time.

**Abstract:**
Theorists have been challenged by the existence of
remarkable algorithms that are known by scientists and
engineers to work well in practice, but whose theoretical
analyses are negative or inconclusive. The root of the problem
is that algorithms are usually analyzed in one of two ways: by
worst-case or average-case analysis. The former can improperly
suggest that an algorithm will perform poorly, while the latter
can be overly optimistic because the random inputs it considers
may bear little resemblance to those encountered in practice.

We propose an analysis that we call smoothed analysis, which can help explain the success of many algorithms that neither worst-case nor average-case analysis can. In smoothed analysis, we measure the performance of an algorithm under slight random perturbations of arbitrary inputs. In particular, we consider Gaussian perturbations of inputs to algorithms that take real and complex inputs, and we measure the running time of algorithms in terms of the input size and the variance of the perturbations.

We show that the simplex algorithm has polynomial smoothed complexity. The simplex algorithm is the classic example of an algorithm that performs well in practice but takes exponential time in the worst case. In particular, we prove that for every matrix $A$ with entries of absolute value at most $1$, every vector $c$, and every vector $b$ whose entries are $1$ or $-1$, the simplex algorithm using the shadow-vertex pivot rule takes expected time polynomial in $1/\sigma$ and the sizes of $A$ and $c$ to solve: minimize $c^T x$ subject to $(A + \sigma G)x \le b$, where $\sigma G$ is a matrix of Gaussian random variables of mean 0 and variance $\sigma^2$.
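
The minimal Python sketch below shows only the smoothed-analysis setup itself (perturbing an LP by Gaussian noise of standard deviation sigma), not the shadow-vertex simplex method; the random data and the use of scipy's generic LP solver are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Perturb the constraint matrix and solve:
#     minimize c'x  subject to  (A + sigma*G) x <= b
rng = np.random.default_rng(0)
n, d, sigma = 20, 5, 0.1
A = rng.uniform(-1, 1, size=(n, d))          # entries of absolute value at most 1
b = rng.choice([-1.0, 1.0], size=n)          # right-hand side entries are +/-1
c = rng.standard_normal(d)

G = rng.standard_normal((n, d))              # Gaussian perturbation matrix
res = linprog(c, A_ub=A + sigma * G, b_ub=b, bounds=(None, None), method="highs")
print(res.status, res.fun)   # status 0 = solved; 2/3 = infeasible/unbounded
```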

This is joint work with Dan Spielman (MIT).

**Title:** GL(n,q) and increasing subsequences in nonuniform random
permutations.

**Abstract:**
Motivated by random matrices over finite fields, we study random
integer partitions. Exploiting connections with identities from q-series,
we derive bounds on the length of the longest increasing subsequence in
nonuniform random permutations. This is different from the traditional
methods (which we shall survey) for studying the longest increasing
subsequence.
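
For reference, the quantity being bounded is the ordinary longest increasing subsequence; the short Python sketch below computes it by patience sorting, an elementary method entirely unrelated to the q-series techniques of the talk:

```python
from bisect import bisect_left

def lis_length(perm):
    """Length of the longest increasing subsequence, in O(n log n) time."""
    tails = []                       # tails[k] = smallest possible tail of an
    for x in perm:                   # increasing subsequence of length k+1
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 2, 7, 5, 8, 6]))   # 4  (e.g. 1, 2, 5, 6)
```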

**Title:** Linear versus Hereditary Discrepancy.

**Abstract:**
The concept of hypergraph discrepancy provides a unified combinatorial
framework for handling classical discrepancy problems from geometry and
number theory. Since the discrepancy of a hypergraph can be small for
somewhat arbitrary reasons, variants of hypergraph discrepancy have been
introduced in the hopes of getting a better measure of the `intrinsic
balance' of a hypergraph. Two such variants are linear discrepancy and
hereditary discrepancy. Lovasz, Spencer and Vesztergombi proved that the
linear discrepancy of a hypergraph H is bounded above by the hereditary
discrepancy of H, and conjectured a sharper bound that involves the number
of vertices in H.
In this talk we give a short proof of this conjecture for hypergraphs of
hereditary discrepancy 1. For hypergraphs of higher hereditary discrepancy
we give some partial results and propose a sharpening of the conjecture.
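
As a reminder of the basic notion (an illustration only; the talk concerns the linear and hereditary variants), the discrepancy of a small hypergraph can be computed by brute force:

```python
from itertools import product

def discrepancy(n, edges):
    """disc(H) = min over +/-1 colourings of the vertices of the maximum
    imbalance |sum_{v in e} chi(v)| over the edges e.  Brute force, so only
    usable for tiny hypergraphs; hereditary discrepancy is the maximum of
    this quantity over all restrictions to vertex subsets."""
    return min(max(abs(sum(chi[v] for v in e)) for e in edges)
               for chi in product([-1, 1], repeat=n))

# Vertices {0,1,2} with all three pairs as edges: some pair always gets the
# same colour, so the discrepancy is 2.
print(discrepancy(3, [{0, 1}, {1, 2}, {0, 2}]))   # 2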

This is joint work with Ron Holzman.

**Title:** Split Closure and Intersection Cuts.

**Abstract:**
In the seventies, Balas introduced intersection cuts for a
Mixed Integer Program (MILP), and showed that these cuts can be obtained
by a closed form formula from a basis of the standard linear programming
relaxation. In the early nineties, Cook, Kannan and Schrijver introduced
the split closure of an MILP, and showed that the split closure is a
polyhedron. In this paper, we show that the split closure can be obtained
using only intersection cuts. We give two different proofs of this result,
one geometric and one algebraic. Furthermore, the result is used to
provide a new proof of the fact that the split closure is a polyhedron.
Finally, we extend the result to more general two-term disjunctions.
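
For readers unfamiliar with the terminology, the split disjunctions involved have the standard form below, and the split closure is the intersection, over all such disjunctions, of the convex hulls of the two resulting pieces of the LP relaxation $P$ (this is the usual textbook formulation, stated here for orientation rather than taken from the paper):

$$
\pi^T x \le \pi_0 \quad\text{or}\quad \pi^T x \ge \pi_0 + 1,
\qquad (\pi,\pi_0)\in\mathbb{Z}^n\times\mathbb{Z},\ \ \pi_j = 0 \text{ for continuous } x_j,
$$

$$
\mathrm{SC}(P)\;=\;\bigcap_{(\pi,\pi_0)}\operatorname{conv}\Bigl(\bigl(P\cap\{\pi^T x\le\pi_0\}\bigr)\cup\bigl(P\cap\{\pi^T x\ge\pi_0+1\}\bigr)\Bigr).
$$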

This is joint work with Gerard Cournuejols and Yanjun Li

- Kent Andersen, Gerard Cournuejols, Yanjun Li: Split Closure and Intersection Cuts. (Postscript)

**Title:** Edge Colorings for a Class of Planar Multigraphs.

**Abstract:**
In this talk, we present a simple lower bound for the edge-chromatic
number of any multigraph, which is conjectured to be tight for any planar
graph. This talk focuses on the work of Odile Marcotte, who showed this
bound to be tight for all graphs excluding K_{3,3} minors and K_5 - e
minors.
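
The simple lower bound in question is presumably the standard matching bound (stated here for orientation; the talk itself should be consulted for the exact form): every colour class meets a subgraph $H$ in a matching of at most $\lfloor |V(H)|/2\rfloor$ edges, so

$$
\chi'(G)\;\ge\;\max\Bigl\{\Delta(G),\;\max_{H\subseteq G,\ |V(H)|\ge 2}\Bigl\lceil \tfrac{|E(H)|}{\lfloor |V(H)|/2\rfloor}\Bigr\rceil\Bigr\},
$$

where the inner maximum can exceed $\Delta(G)$ only when $|V(H)|$ is odd.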

**Title:** Generating Disjunctive Cuts for Mixed Integer Programs

**Abstract:**
Disjunctive cuts form a very broad class of cuts for mixed integer
programming. In general, any cut that can be derived from a disjunctive
argument can be considered a disjunctive cut. Here we consider
specifically cuts that are valid inequalities for some simple
disjunctive relaxation of the mixed integer program. Such a relaxation
can, for example, be obtained by relaxing the integrality condition on all but a
single variable. The lift-and-project procedure developed in the early
nineties is a systematic way to generate an optimal (in a specific
sense) disjunctive cut for a given disjunctive relaxation. It involves
solving a higher-dimensional cut generating linear program (CGLP) and
has been developed for the simplest possible disjunctions: those
requiring that a single variable be either zero or one.
In our work we consider the problem of efficiently generating
disjunctive cuts for any given disjunction. That is, once we are
presented with a disjunctive relaxation of a mixed integer program, how
can we efficiently generate one or more cuts that cut off an optimal
solution to the LP relaxation? This problem naturally falls into two
cases: two-term disjunctions, such as those the original lift-and-project
procedure was designed for, and more general multiple-term
disjunctions.
For two-term disjunctions we show how one can effectively reduce
the CGLP, but the main result is a precise correspondence
between the lift-and-project cuts obtained from the CGLP and simple
disjunctive cuts from rows of the LP relaxation simplex tableau. The
implication is that lift-and-project cuts from the high dimensional CGLP
can be obtained directly from the LP relaxation. Furthermore, if
integrality of all variables is considered, then this becomes a
correspondence between strengthened lift-and-project cuts and Gomory's
mixed integer cuts. Using this correspondence we present a procedure to
efficiently generate an optimal mixed integer Gomory cut (optimal in the
sense of the CGLP) through pivots in the simplex tableau of the LP
relaxation.
In the case of multiple-term disjunctions we present procedures that
provide an optimal solution to the high dimensional CGLP, by solving the
cut problem in the original space without recourse to the many auxiliary
variables present in the CGLP. Finally, we propose a procedure that
generates a set of disjunctive cuts for a given disjunction.
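
For orientation, one standard way to write the validity conditions behind the two-term CGLP (following the usual lift-and-project setup of Balas, Ceria and Cornuejols; the exact normalization and notation in the talk may differ) is the following: for $P=\{x: Ax\ge b\}$ and the disjunction $x_j\le 0$ or $x_j\ge 1$, an inequality $\alpha^T x\ge\beta$ is a disjunctive cut whenever there exist multipliers $u,v\ge 0$ and scalars $u_0,v_0\ge 0$ with

$$
\alpha = A^T u - u_0 e_j,\quad \beta \le b^T u
\qquad\text{and}\qquad
\alpha = A^T v + v_0 e_j,\quad \beta \le b^T v + v_0,
$$

so that $\alpha^T x\ge\beta$ is valid for both $P\cap\{x_j\le 0\}$ and $P\cap\{x_j\ge 1\}$; the CGLP then maximizes the violation $\beta-\alpha^T\bar x$ at the current LP optimum $\bar x$ over these constraints plus a normalization.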

- This talk is Michael Perregaard's dissertation proposal. The full text can be downloaded here.

**Title:** On graph irregularity strength.

**Abstract:**
An assignment of positive integer weights to the edges of a
simple graph $G$ is called irregular if the weighted degrees
of the vertices are all different. The irregularity strength,
$s(G)$, is the maximal weight, minimized over all irregular
assignments, and is set to $\infty$ if no such assignment is possible.
In this paper we show that $s(G)\leq c_1 n/\delta$ for
graphs with maximum degree $\Delta\le n^{1/2}$, and
$s(G)\leq c_2 (\log n) n/\delta$ for graphs with $\Delta > n^{1/2}$, where
$\delta$ denotes the minimum degree and $c_1$ and $c_2$ are explicit constants.
These bounds substantially improve previously known results.
To prove the results, we use a combination of
deterministic and probabilistic techniques.
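
As a small illustration of the definition (not of the proof techniques), the following Python sketch checks whether a given weighting is irregular, on a hypothetical toy graph:

```python
def is_irregular(n, weighted_edges):
    """Check whether an edge weighting of a simple graph on vertices 0..n-1
    is irregular, i.e. all weighted degrees are distinct."""
    deg = [0] * n
    for u, v, w in weighted_edges:
        deg[u] += w
        deg[v] += w
    return len(set(deg)) == n

# A path on 3 vertices: weights 1 and 2 give weighted degrees 1, 3, 2, which
# are all distinct, so s(P_3) <= 2 (unit weights give degrees 1, 2, 1, so
# s(P_3) > 1).
print(is_irregular(3, [(0, 1, 1), (1, 2, 2)]))   # True
```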

**Title:** The Quickest Multicommodity Flow Problem
(or Techniques for Flows Over Time).

**Abstract:**
Standard network flows capture many aspects of routing problems
but miss one crucial element: flows occur over time. Ford and
Fulkerson recognized this omission when they introduced flows
over time. In this model, there is a transit time, or delay,
associated with each arc to address the fact that flow does
not travel instantaneously across the arc. Now, given
a time bound, it is possible to consider the flow-over-time
equivalents of standard network flow problems. This allows
for more accurate modelling of problems in
telecommunications, transportation, financial planning,
manufacturing, and logistics.
Ford and Fulkerson showed how to solve the maximum flow over time
problem by reducing it to a minimum cost flow problem.
However, the minimum cost flow over time problem is already
NP-hard. And, the obvious linear program formulation
of the multicommodity flow over time problem depends linearly
on the time bound, so is in general not of polynomial size.
Thus, problems in flows over time are inherently more difficult
than the standard flow problems.
We will introduce flows over time, review some of the
classical results, and describe a newly devised FPAS
for both the multicommodity flow over time problem and
the same problem with costs; this is joint work with
Martin Skutella. Time permitting, we may
touch upon further work.
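
To see why the naive formulation grows with the time bound, here is a Python sketch (an illustration, not from the talk) that builds the classical time-expanded network, whose size is linear in T:

```python
def time_expanded(arcs, T):
    """Build the time-expanded network for a flow over time with horizon T:
    one copy (v, t) of each node per time step, and an arc from (u, t) to
    (v, t + tau) for every original arc (u, v) with transit time tau and
    capacity cap.  (Holdover arcs (v, t) -> (v, t+1) would also be added in
    practice.)  Only meant to show why the formulation grows with T."""
    expanded = []
    for (u, v, tau, cap) in arcs:
        for t in range(T - tau + 1):
            expanded.append(((u, t), (v, t + tau), cap))
    return expanded

arcs = [("s", "a", 1, 2), ("a", "t", 2, 2), ("s", "t", 4, 1)]
for T in (5, 10, 20):
    print(T, len(time_expanded(arcs, T)))   # number of arcs grows linearly in T
```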

**Title:** Non-clairvoyant Scheduling for Minimizing Mean Slowdown.

**Abstract:**
We consider the problem of scheduling jobs online non-clairvoyantly, that
is, when job sizes are not known. Our focus is on minimizing mean {\em
slowdown}, defined as the ratio of response time to the size of the job.
We use {\em resource augmentation} in terms of allowing a faster processor
to the online algorithm to make up for its lack of knowledge of job sizes.
We provide several upper and lower bounds for the problem.
For $\epsilon > 0$, our main result is an $O((1+\epsilon)\log_{1+\epsilon}
B)$-speed $O(\log^2_{1+\epsilon} B)$-competitive algorithm
for minimizing mean slowdown non-clairvoyantly, when $B$ is the ratio
between the largest and smallest job sizes. We also prove that any $k$-speed
algorithm incurs a slowdown of at least $\Omega(\log B/k)$, thus providing
evidence that our upper bound is non-trivial. We show that in the static
case (that is, when all jobs have the same arrival time), the Round Robin
algorithm is able to match this lower bound. A further set of lower bounds
motivates the need for bounded job sizes as well as resource augmentation for
any algorithm to minimize slowdown non-clairvoyantly.
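
As a toy illustration of the static case (all arrivals at time 0), the sketch below computes the mean slowdown of idealized Round Robin, modelled as processor sharing; this is only meant to make the objective concrete, not to reproduce the talk's online analysis:

```python
def mean_slowdown_round_robin(sizes):
    """Mean slowdown of idealized Round Robin (processor sharing) when all
    jobs arrive at time 0: every remaining job receives an equal share of
    the processor."""
    sizes = sorted(sizes)
    n = len(sizes)
    t, prev, slowdowns = 0.0, 0.0, []
    for i, p in enumerate(sizes):
        t += (n - i) * (p - prev)     # time at which the i-th smallest job ends
        prev = p
        slowdowns.append(t / p)       # slowdown = response time / size
    return sum(slowdowns) / n

# Jobs of size 1, 2, 4 finish at times 3, 5, 7 under processor sharing,
# so the mean slowdown is (3/1 + 5/2 + 7/4) / 3.
print(mean_slowdown_round_robin([1, 2, 4]))
```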

This is joint work with N. Bansal, K. Dhamdhere, and A. Sinha.

**Title:** Chromatic polynomials and Tutte polynomials of homeomorphism
classes of graphs.

**Abstract:**
First we will discuss a polynomial which contains, as special
cases, the chromatic polynomial of all members of a given homeomorphism
class of graphs. Then we will discuss a related polynomial which
contains, as special cases, the Tutte polynomial of all members of a
given homeomorphism class of graphs. As a historical note, we will
discuss the earlier work of John Nagle, Department of Physics, CMU.
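
To give a flavour of such a polynomial (an illustrative example, not taken from the papers): all cycles $C_n$, $n\ge 3$, form a single homeomorphism class, and one formula in $n$ and the number of colours $k$ captures the chromatic polynomial of every member of the class:

$$
P(C_n, k) \;=\; (k-1)^n + (-1)^n (k-1), \qquad n \ge 3.
$$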

This is joint work with Ronald C. Read, University of Waterloo. Professor Read and the speaker have published the following papers on this topic: (1) Discrete Math. 204 (1999), 337-356; (2) Discrete Math. 243 (2002), 267-272.

- Prof. Whitehead's presentation can be downloaded here.