ACO seminar (1998-99)

The organizer this semester is Bjarni Halldórsson. If you have questions or suggestions about the seminar, or want to be a speaker, please contact the organizer. The seminar meets in the Physical Plant Building most Thursdays from 4:30pm to 5:30pm.


TITLE: Fast Combinatorial Algorithms for Multicommodity Flow Problems
SPEAKER: Jochen Koenemann
WHEN: Wednesday Oct. 21st, 3:30 - 4:30
WHERE: Wean Hall 4601

In a multicommodity flow problem, we are given a graph with non-negative edge capacities c_e and k source/sink pairs of vertices (s_i,t_i). Each pair is associated with a commodity: for pair i, we wish to ship commodity i from source i to sink i. The individual commodity flows share the common edge capacities, i.e. the total amount of flow going through edge e must not exceed c_e. We have developed a new, simple and fast combinatorial approximation algorithm for several different optimization versions of this problem. In this talk, we will present the method for the special case of the maximum concurrent flow problem. Here, in addition to the previous definition, each source/sink pair comes with a non-negative demand d_i. Our goal is to maximize the throughput \lambda, i.e. the percentage of each demand d_i that can be routed simultaneously. We use single-commodity min-cost flow computations to find a solution which obeys the capacity constraints and has throughput \lambda provably close to the optimum.
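For concreteness, the maximum concurrent flow problem sketched above can be written as a path-based linear program (a standard textbook formulation, not necessarily the one used in the talk); here $\mathcal{P}_i$ denotes the set of $s_i$-$t_i$ paths and $f_P$ the flow on path $P$:

```latex
\begin{align*}
\max\ & \lambda \\
\text{s.t.}\ & \sum_{P \in \mathcal{P}_i} f_P \;\geq\; \lambda\, d_i
    && \text{for all } i = 1,\dots,k, \\
& \sum_{i=1}^{k} \sum_{P \in \mathcal{P}_i :\, e \in P} f_P \;\leq\; c_e
    && \text{for all edges } e, \\
& f_P \geq 0, \quad \lambda \geq 0.
\end{align*}
```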

TITLE: Extremal sets, codes, and computing
SPEAKER: Miklós Ruszinkó
WHEN: Thursday Jan 21st 4.30-5.30
WHERE: Physical Plant Building

Let T(r,n) denote the maximum number of subsets of an n-set satisfying the condition that no set is covered by the union of r others. Improving results of P. Erdős, P. Frankl and Z. Füredi [1], we will give a new upper bound for the maximum size of such a family [2]. This question arose in information theory in the early sixties; however, the gap between the upper and lower bounds is still exponentially large. The connection of this question to information theory and to distributed graph coloring will also be discussed. If time permits, the geometric version [3] of this question will also be considered.

TITLE: The $0-1$ Knapsack Problem and some Isoperimetric Inequalities
SPEAKER: Rachel Rue
WHEN: Thursday Jan 28th 4.30-5.30
WHERE: Physical Plant Building

The $0-1$ knapsack inequality is $\sum_{i=1}^n a_i x_i \leq b$, $x_i\in\{0,1\}$. Suppose we want to generate points in the boolean cube satisfying the inequality, nearly uniformly at random. It is not known whether this can be done in time polynomial in $n$; the best result so far gives a $2^{\sqrt{n}}$ bound. This talk explores a strategy for solving the problem by giving an isoperimetric inequality for the knapsack space. We will present isoperimetric inequalities known for much simpler spaces, and prove a new combinatorial lemma which can be used to prove variations on these inequalities. The lemma: Let $H$ be an $n$-dimensional boolean hypercube with edge set $E$, and with a real weight $w(x)$ assigned to each point $x\in H$. Let $\mu$ be the average of the weights. Then $$\sum_{(x,y)\in E}|w(x)-w(y)|\geq\sum_{x\in H}|w(x)-\mu|.$$
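The lemma is easy to sanity-check numerically. The sketch below (my own illustration, not from the talk) enumerates a small hypercube with random weights and verifies that the edge-difference sum dominates the deviation-from-mean sum:

```python
import itertools
import random

def lemma_holds(n, w):
    """Check the lemma from the abstract: the sum over cube edges of
    |w(x) - w(y)| is at least the sum over cube points of |w(x) - mu|."""
    pts = list(itertools.product((0, 1), repeat=n))
    mu = sum(w[x] for x in pts) / len(pts)
    edge_sum = 0.0
    for x in pts:
        for i in range(n):
            if x[i] == 0:
                # flipping a 0-coordinate to 1 visits each edge exactly once
                y = x[:i] + (1,) + x[i + 1:]
                edge_sum += abs(w[x] - w[y])
    point_sum = sum(abs(w[x] - mu) for x in pts)
    return edge_sum >= point_sum - 1e-9  # tolerance for float rounding

random.seed(0)
w = {x: random.random() for x in itertools.product((0, 1), repeat=4)}
print(lemma_holds(4, w))  # → True
```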

TITLE: Bounds on the Chvatal rank of polytopes in the 0/1 cube
SPEAKER: Michael Perregaard
WHEN: Thursday Feb 4th 4.30-5.30
WHERE: Physical Plant Building

Given a polytope P, the Chvatal-Gomory procedure iteratively computes the integer hull P_I of P. The Chvatal rank of P is the minimal number of iterations needed to obtain P_I. In this talk I consider only polytopes in the 0/1 cube, which are of particular interest in combinatorial optimization. I will show that the Chvatal rank of a polytope P in the 0/1 cube is at most 3n^2 log n and prove an upper bound of n for the case where P_I is empty. The talk is based on work by A. Bockmayr, F. Eisenbrand, M. Hartmann and A. Schulz.
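As a tiny illustration of one Chvatal-Gomory iteration (my example, not from the talk): for the polytope $P = \{x \in [0,1]^2 : x_1 + x_2 \leq 3/2\}$, the inequality $x_1 + x_2 \leq 3/2$ has integer coefficients, so rounding its right-hand side down gives the cut

```latex
x_1 + x_2 \;\leq\; \left\lfloor 3/2 \right\rfloor \;=\; 1,
```

which every 0/1 point of $P$ satisfies. Here a single iteration already yields $P_I$, so this $P$ has Chvatal rank 1.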

TITLE: Graph reconstruction from k-edge deleted sub-graphs
SPEAKER: Geoffrey Atkinson
WHEN: Thursday Feb 11th 4.30-5.30
WHERE: Physical Plant Building

Given a graph on n vertices and m edges, and all of the sub-graphs obtained by deleting one vertex, when is it possible to uniquely reconstruct the original graph from the sub-graphs? What if instead you have all the sub-graphs obtained by deleting one edge of the original? I will give a brief summary of some results and open problems on these two questions and their variations, before focusing on a result of Godsil, Krasikov and Roditty which deals with reconstructing graphs from their k-edge deleted sub-graphs: G is k-edge reconstructible if 2^(m-k) > n! or if 2m > (n choose 2) + k.
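The quoted sufficient condition is simple arithmetic to evaluate. A small sketch (illustration only; the function name is mine) checks whether a graph with n vertices and m edges meets either bound for a given k:

```python
from math import comb, factorial

def k_edge_reconstructible(n, m, k):
    """Sufficient condition of Godsil, Krasikov and Roditty,
    as quoted in the abstract: 2^(m-k) > n!  or  2m > C(n,2) + k."""
    return 2 ** (m - k) > factorial(n) or 2 * m > comb(n, 2) + k

print(k_edge_reconstructible(10, 40, 1))  # → True
print(k_edge_reconstructible(10, 20, 5))  # → False
```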

TITLE: Using Interior Point Methods for Robust Optimization
SPEAKER: Bjarni Halldórsson
WHEN: Thursday Feb 18th 4.30-5.30
WHERE: Physical Plant Building

We will give a brief introduction to Interior Point Methods and Semidefinite Programming and explore some recent results in Robust Optimization. We then present a problem of particular interest, some results towards solving it, and some of the remaining difficulties.

TITLE: Generalized Network Problems
SPEAKER: David Hartvigsen
WHEN: Thursday Feb. 25th 4.30-5.30
WHERE: Physical Plant Building 300

In this talk we consider two classical network optimization problems: the max flow problem and the min cost spanning tree problem. We develop generalizations of each problem by allowing the addition of linear constraints. We then present a new algorithm for the generalized problems that exploits the structure of the underlying network problems.

TITLE: Constant Approximation Algorithm for the k-median Problem
SPEAKER: Ojas Parekh
WHEN: Thursday Mar. 4th 4.30-5.30
WHERE: Physical Plant Building 300

This talk will present the first constant factor approximation algorithm, due to Tardos and Shmoys [1], for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e. those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close to one another with respect to some measure. In the metric k-median problem we are given n points in a metric space, and we must select k of these points as cluster centers, assigning each remaining point j to the center closest to it. For each point j we incur a cost proportional to the distance between j and the center to which it is assigned. The objective is to select the k centers so as to minimize the sum of these assignment costs. The talk presents a 7-approximation algorithm, where the best previously known result was an O(log(k)log(log(k)))-approximation algorithm.
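The objective described above can be stated compactly (standard notation, not taken from the talk): given a point set $V$ with metric $d$, choose a set of centers $S \subseteq V$ with $|S| = k$ to

```latex
\min_{S \subseteq V,\ |S| = k} \ \sum_{j \in V} \ \min_{i \in S} \, d(i, j).
```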

[1] E. Tardos and D. Shmoys. An O(1)-approximation algorithm for the k-median problem. 1998.

TITLE: Crossing Numbers and Hard Erdős Problems in Discrete Geometry
SPEAKER: Jochen Koenemann
WHEN: Thursday Mar. 18th 4.30-5.30
WHERE: Physical Plant Building 300

A drawing of a graph over a surface represents edges by curves such that edges do not pass through vertices and no three edges meet in a common internal point. The crossing number of a graph G over a surface is the minimum number of edge crossings over all drawings. We will prove a theorem of Leighton, showing a lower bound on the crossing number of a graph G in terms of the number of its vertices and edges. Afterwards, we will see that this theorem yields short proofs for a number of bounds in discrete plane geometry which were considered hard before: the number of incidences among points and lines, the maximum number of unit distances among n points, and the minimum number of distinct distances among n points.
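The lower bound in question is the well-known crossing lemma; in its standard form (the constant varies across sources), for a graph $G$ with $n$ vertices and $m \geq 4n$ edges drawn in the plane,

```latex
\operatorname{cr}(G) \;\geq\; \frac{m^3}{64\, n^2}.
```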

TITLE: Dynamically Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets
SPEAKER: Paul Komarek
WHEN: Thursday April 1st 4.30-5.30
WHERE: Physical Plant Building 300

Statistical inference from large datasets has become common practice in industry and academia. For instance, a grocery store may wish to track customers' buying habits, or an astronomer may wish to analyze terabytes of data gathered by computerized telescopes. As the size of the datasets has grown, automated analysis of the data by machine learning algorithms has become increasingly attractive. However, the large size of the datasets also threatens the efficient implementation of these algorithms. Thus there is a need for dataset representations which are space-efficient and support fast, random access by machine learning algorithms. This presentation will briefly discuss some current data mining practices and mention some popular dataset representations. The majority of the talk will focus on a representation called AD-Trees, developed by Andrew Moore and Mary Soon Lee and recently extended by Andrew Moore and Paul Komarek. This extension is a dynamic version of AD-Trees which affords enormous flexibility and space savings over the original structure, with an acceptable empirical performance trade-off. This is a work in progress.

TITLE: Optimal Signal Sets for Non-Gaussian Detectors
SPEAKER: Anthony Jose Kearsley
WHEN: Thursday April 15th 4.30-5.30
WHERE: Physical Plant Building 300

Identifying a maximally-separated set of signals is important in the design of modems. The notion of optimality is dependent on the model chosen to describe noise in the measurements; while some analytic results can be derived under the assumption of Gaussian noise, no such techniques are known for choosing signal sets in the non-Gaussian case. To obtain numerical solutions for non-Gaussian detectors, minimax problems are transformed into inequality constrained nonlinear programming problems, resulting in a novel formulation yielding problems with relatively few variables and many inequality constraints. Using a special Sequential Quadratic Programming algorithm, optimal signal sets are obtained for a variety of noise distributions.

TITLE: Condition numbers for convex programming
SPEAKER: Javier Peña
WHEN: Thursday April 22nd 4.30-5.30
WHERE: Physical Plant Building

The condition number of a square matrix plays a crucial role in the theory of linear equations. On the one hand, it provides a measure of sensitivity of the solution to data perturbations. On the other hand, it serves as a complexity measure appropriate from the numerical analysis viewpoint. The notion of condition number can be extended to the more general context of convex programming. In this talk we show how the general condition number preserves fundamental features of traditional condition numbers. In addition, we discuss the relevance of condition numbers to properties of convex programs, especially in connection with interior-point methods.
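For reference, the classical condition number referred to in the first sentence is, for a nonsingular square matrix $A$ and a matrix norm $\|\cdot\|$,

```latex
\kappa(A) \;=\; \|A\| \, \|A^{-1}\|,
```

and the relative error in the solution of $Ax = b$ under data perturbations is bounded in terms of $\kappa(A)$.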

TITLE: Quadratic convergence in interior-point methods
SPEAKER: Reha Tütüncü
WHEN: Thursday April 29th 4.30-5.30
WHERE: Physical Plant Building

One of the most desirable properties of an optimization algorithm is its ability to converge fast to a solution once the iterates are sufficiently close to such a point. For unconstrained or equality constrained convex optimization the quadratic convergence of Newton-based methods is a well-known example of this behavior. For {\em inequality} constrained problems, path-following variants of interior-point methods can achieve superlinear convergence by restricting their iterates to a neighborhood of the so-called central path of the feasible region. We will survey some of these results. Further, we will develop and analyze a new interior-point algorithm that achieves polynomial and quadratic convergence to an optimal solution, even in the presence of degeneracy. This is the first interior-point algorithm with quadratic convergence that does not impose any neighborhood restrictions on the iterates. The algorithm uses a potential-reduction approach and damped Newton search directions on multiplicative variants of standard potential functions.
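Quadratic convergence, as used above, means that once the iterates $x_k$ are close enough to a solution $x^*$, there is a constant $C$ such that

```latex
\| x_{k+1} - x^* \| \;\leq\; C \, \| x_k - x^* \|^2 ,
```

so the number of correct digits roughly doubles at every step.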
