Last year's seminar talks are summarized here.
There has recently been some success in designing polynomial-time approximation schemes for dense optimisation problems such as MAX-CUT. Intuitively, a problem instance is dense if its optimal value is within a constant factor of the maximum possible for the problem class. We survey some of these results.
Circular decomposable metrics (CDMs) are sums of cut metrics that satisfy a circularity condition. A number of combinatorial optimization problems, including the traveling salesman problem, are easily solved if the underlying cost matrix represents a CDM. We give a linear time algorithm for recognizing CDMs and show that they coincide with another class of metrics: the Kalmanson metrics.
Assume 2 ordered chains of elements are given and the goal is to merge them by means of pairwise comparisons. This procedure is used as a step in many sorting algorithms. Let M(m,n) denote the minimal number of pairwise comparisons which will suffice to merge 2 chains of lengths m and n in the worst case. Upper bounds for M(m,n) correspond to good merging algorithms, while lower bounds set limits for any merging algorithm. In the first part of the talk we give an outline of the study of the function M(m,n).
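As a baseline, the standard sequential merge uses at most m+n-1 comparisons, which gives the easy upper bound M(m,n) <= m+n-1. A minimal sketch (the function and variable names are ours, not from the talk):

```python
def merge_count(a, b):
    """Merge two sorted chains by pairwise comparisons; return the
    merged chain and the number of comparisons actually performed."""
    merged, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # one chain is exhausted: the rest of the other needs no comparisons
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged, comparisons
```

The bound m+n-1 is tight for this particular algorithm, but M(m,n) itself can be much smaller, e.g. when one chain is much shorter than the other (binary insertion uses roughly m·log(n/m) comparisons).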
In the second part of the talk we shall consider the parallel merging problem, where non-overlapping comparisons may be grouped into stages and performed simultaneously. In this case we are interested in P(m,n), the minimal number of stages which will suffice to merge 2 chains of lengths m and n in the worst case. A new lower bound for P(m,n) will be proved. Combining it with the upper bound obtained by K.~E.~Batcher (1968), we obtain the following final result: P(m,n) = ceil(log2(m+n)) for all m, n.
We will describe a new algorithm to find a ``regular'' partition of a graph (which I will define). Using this, we are able to find approximate solutions to the Max Cut, Multiway Cut, and in fact any MAX-SNP problem in dense graphs in CONSTANT time (by a Monte Carlo algorithm).
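The flavor of constant-time Monte Carlo estimation on dense graphs can be illustrated by a toy version of the sampling idea: the number of edges crossing a fixed bipartition can be estimated to additive error eps·n^2 from a constant number of sampled vertex pairs, which is a good relative approximation precisely when the graph has Theta(n^2) edges. This sketch is not the algorithm from the talk; all names are ours:

```python
import random

def estimate_cut(adj, side, samples=2000, seed=0):
    """Estimate the number of edges crossing the bipartition given by
    side[] (0/1 labels) by sampling ordered vertex pairs uniformly.
    adj is an n x n 0/1 adjacency matrix."""
    rng = random.Random(seed)
    n = len(adj)
    hits = 0
    for _ in range(samples):
        u, v = rng.randrange(n), rng.randrange(n)
        if adj[u][v] and side[u] != side[v]:
            hits += 1
    # each crossing edge is counted as two ordered pairs among n^2
    return hits * n * n / (2 * samples)
```

By a standard Chernoff bound, O(1/eps^2) samples give additive error eps·n^2 with high probability, independent of n.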
Suppose that we have a graph, with a source s and a sink t, and with capacities on the edges. In the classical network flow problem we want to pack s-t paths, so that the number of times an edge is used is at most its capacity. If we must not pack the paths too closely then we may want to pack caterpillars, where a caterpillar consists of the edges in an s-t path together with the edges incident with internal nodes of the path. We shall discuss this and related problems, for general and for planar graphs and directed graphs.
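For reference, the classical path-packing problem the talk starts from is ordinary maximum flow, solvable for instance by Edmonds-Karp shortest augmenting paths. This is a standard sketch of that classical problem, not of the caterpillar-packing results; the names are ours:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest s-t path in the
    residual graph. cap is a dict-of-dicts of arc capacities."""
    # copy capacities and add zero-capacity reverse arcs
    g = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            g.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in g[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total            # no augmenting path: flow is maximum
        # recover the path, push the bottleneck amount of flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(g[u][v] for u, v in path)
        for u, v in path:
            g[u][v] -= aug
            g[v][u] += aug
        total += aug
```

By the max-flow min-cut theorem (for integral capacities), the value returned equals the maximum number of s-t paths that can be packed subject to the capacities, counted with multiplicity.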
In this talk, I will present results about data structures for use in N-body simulation. I will concentrate on the spatial decomposition tree used in particle-cluster force evaluation algorithms such as the Barnes-Hut algorithm. A k-d tree is asymptotically inferior to a spatially balanced tree. The worst case complexity of the force evaluation algorithm using a k-d tree is Theta(n log³n log L) compared with Theta(n log L) for an oct-tree. (L is the separation ratio of the set of points.) I will also discuss methods for improving the constant factor of the algorithm, and present several methods which improve over the standard oct-tree decomposition. The results are all directly applicable to practical implementations of N-body algorithms.
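In two dimensions, the spatially balanced decomposition (the quadtree analogue of the oct-tree) splits every cell at its midpoint regardless of where the points fall, in contrast to a k-d tree's median splits. A minimal construction sketch with the per-cell mass totals that particle-cluster codes aggregate (all names are ours; it assumes distinct points):

```python
class Node:
    """A square cell of a point-region quadtree (midpoint splits)."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.mx = self.my = 0.0      # mass-weighted coordinate sums
        self.children = None         # four subcells once split
        self.point = None            # stored point while an occupied leaf

def insert(node, x, y, m=1.0):
    node.mass += m
    node.mx += m * x
    node.my += m * y
    if node.children is None:
        if node.point is None:       # empty leaf: store the point here
            node.point = (x, y, m)
            return
        # occupied leaf: split at the midpoint, push the old point down
        h = node.half / 2
        node.children = [Node(node.cx + dx * h, node.cy + dy * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]
        px, py, pm = node.point
        node.point = None
        _descend(node, px, py, pm)
    _descend(node, x, y, m)

def _descend(node, x, y, m):
    i = (2 if x >= node.cx else 0) + (1 if y >= node.cy else 0)
    insert(node.children[i], x, y, m)
```

Each internal node then carries the total mass and center of mass (mx/mass, my/mass) of its cell, which is exactly what a Barnes-Hut-style far-field approximation reads off instead of visiting the individual particles.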
We propose new geometric models and algorithms as tools for solving problems in machine learning and information retrieval. Our success so far is two-fold: an efficient algorithm for learning linear threshold functions (perceptrons) which is tolerant to classification noise, and an explanation of the success of the information retrieval technique known as Latent Semantic Indexing, along with a provable speed-up of the same. Our proposal is to use similar techniques to understand closely related open problems, such as learning the intersection of half-spaces.
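For orientation, the classic noise-free perceptron update that noise-tolerant variants build on can be sketched as follows (our own names; the noise-tolerant modification from the proposal is not shown):

```python
def perceptron(samples, epochs=100):
    """Classic perceptron: on each misclassified example, add y*x to the
    weight vector. samples is a list of (x, y) with x a tuple of floats
    and y in {-1, +1}; the threshold function learned is sign(w . x)."""
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:            # converged on separable data
            break
    return w
```

On linearly separable data the number of updates is bounded by (R/gamma)^2, where R bounds the example norms and gamma is the margin; with classification noise this guarantee breaks down, which is what motivates the noise-tolerant algorithm.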
Thesis Committee: Avrim Blum, Alan Frieze, Ravi Kannan, László Lovász.
In this talk I will address some problems arising in computational molecular biology on which I have worked in the past few years. After an introductory review of some basic concepts of biology, I will address the following problems:
We outline proofs of bounds on the solutions of the 2-matching and TSP problems. One reason these bounds are important is that they shed light on the longstanding conjecture that the minimum-cost Hamilton cycle is within 4/3 of the cost of the optimum subtour solution when the costs satisfy the triangle inequality.
This is joint work with Sylvia Boyd.
In this dissertation we study the optimization problem:
(P) Minimize cx such that x in K and Ax = b
where K is a closed convex cone, A is a matrix, and b and c are vectors of appropriate dimension. Our problem is called a cone programming problem, or a cone-program. In fact, every convex programming problem can be formulated as a cone-program; thus (P) models essentially all efficiently (i.e. in polynomial time) solvable optimization problems.
The most fundamental class of cone-programs is linear programming, where K is the nonnegative orthant. Another important class is semidefinite programming (SDP), which has gained a lot of attention recently. Some other structured convex programs, to which the results of this work apply, are:
- Convex Quadratic Programming.
- Matrix Approximation Problems.
- The optimization of spectral functions: functions defined on symmetric matrices that depend only on the eigenvalues of the matrix.
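To make (P) concrete, the two canonical instances above can be written out explicitly (a standard sketch, using the usual trace inner product for the SDP case):

```latex
% Linear programming: K = \mathbb{R}^n_+ (the nonnegative orthant)
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{s.t.} \quad Ax = b, \;\; x \ge 0.

% Semidefinite programming: K = \mathcal{S}^n_+ (positive semidefinite matrices)
\min_{X \in \mathcal{S}^n} \; \langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \;\; (i = 1, \dots, m),
\;\; X \succeq 0,
\qquad \text{where } \langle C, X \rangle = \operatorname{tr}(C^{\top} X).
```

Both are instances of (P): only the cone K changes, which is what lets a single facial and simplex-type theory cover them uniformly.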
We develop a self-contained theory to describe
- The facial structure, in particular basic solutions of cone (and convex) programs.
- Feasible directions.
- Simplex-type methods.
Our results are generalizations of the corresponding fundamental results for linear programming, and they appeal directly to geometric intuition. Nevertheless, with a few rare exceptions, they have not been given in the mathematical programming literature. As a corollary, we obtain results explaining the inherent nonsmoothness (i.e. nondifferentiability) of several structured convex programming problems.
We present a new approach, based on inference logic, to performing sensitivity analysis for mixed integer programming problems.
This is joint work with Prof. John Hooker.