
A Blogscursion into the World of Expanders

Friday, October 14, 2005

Summary of first few lectures

We defined an expander family as a family of undirected, regular graphs on n vertices (for increasing n) such that the degree d is constant and
  • the edge expansion (hG = min_{|S| ≤ n/2} |E(S,S')|/|S|, where S' denotes the complement of S) is at least a constant, OR
  • the spectral gap (d-λ1) is at least a constant (here d = λ0 ≥ λ1 ≥ ... ≥ λn-1 are the eigenvalues of the adjacency matrix).

(The equivalence follows from (d-λ1)/2 ≤ hG ≤ sqrt[2d(d-λ1)], proven later.)
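As a sanity check on these definitions, both quantities can be computed for a small concrete graph. Below is a sketch in Python (using numpy); the Petersen graph is just a convenient example, not a graph from the lectures:

```python
from itertools import combinations
import numpy as np

# Adjacency matrix of the Petersen graph: outer 5-cycle (vertices 0..4),
# inner pentagram (5..9), and spokes i -- i+5.  It is 3-regular.
n, d = 10, 3
A = np.zeros((n, n), dtype=int)
for i in range(5):
    for u, v in [(i, (i + 1) % 5),            # outer cycle
                 (5 + i, 5 + (i + 2) % 5),    # inner pentagram
                 (i, i + 5)]:                 # spoke
        A[u, v] = A[v, u] = 1

# Spectral gap d - lambda1 (eigvalsh returns eigenvalues in ascending order).
eigs = np.linalg.eigvalsh(A)
lambda1 = eigs[-2]                 # second-largest adjacency eigenvalue
gap = d - lambda1

# Edge expansion hG by brute force over all sets S with |S| <= n/2.
def cut_size(S):
    S = set(S)
    return sum(A[u, v] for u in S for v in range(n) if v not in S)

h = min(cut_size(S) / len(S)
        for k in range(1, n // 2 + 1)
        for S in combinations(range(n), k))

# The Cheeger-type inequalities quoted above (with a float tolerance).
assert gap / 2 <= h + 1e-9 <= (2 * d * gap) ** 0.5
print(h, round(gap, 6))   # → 1.0 2.0
```

Here the brute-force minimum is attained by the outer 5-cycle (5 spoke edges leaving a 5-vertex set), and both inequalities hold with the lower one tight.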

A slightly stronger definition of spectral expansion requires d-λ to be at least a constant, where λ=max(|λ1|,|λn-1|). Then we have the following for an expander G (writing α for λ/d).


  • Optimal Expanders: If we allowed "fractional edges," a clique (with self-loops) with all edge weights d/n would be the best degree-d expander (with λ=0). With "integral edges," however, λ ≥ 2 sqrt[d-1] - o(1) (the Alon–Boppana bound). So-called Ramanujan Graphs achieve λ ≤ 2 sqrt[d-1]. So one can think of α as being O(1/sqrt d).
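The Petersen graph, used here only as a small worked example, happens to meet the Ramanujan bound; a quick numerical check (numpy):

```python
import numpy as np

# Petersen graph: 3-regular on 10 vertices (outer cycle, inner pentagram, spokes).
n, d = 10, 3
A = np.zeros((n, n))
for i in range(5):
    for u, v in [(i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, i + 5)]:
        A[u, v] = A[v, u] = 1

eigs = np.linalg.eigvalsh(A)            # ascending: -2 (x4), 1 (x5), 3
lam = max(abs(eigs[0]), abs(eigs[-2]))  # lambda = max(|λ1|, |λn-1|)
print(round(lam, 6), round(2 * (d - 1) ** 0.5, 3))   # → 2.0 2.828
```

Since λ = 2 ≤ 2 sqrt[2] ≈ 2.83, the Petersen graph is Ramanujan (though of course a single fixed graph is not yet a family).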

  • Expander Mixing Lemma: For any cut (S,T) in G, the number of edges across the cut is in the range (d/n)|S| |T| ± dα sqrt[|S| |T|]. (Note that the smaller the α, the closer this is to the value for the "d/n-clique.")

    An application of this lemma is a simple "deterministic amplification" of BPP algorithms: the algorithm is run not just on one random tape, but on all of its d "neighbors" too -- neighbors in an arbitrary expander with the space of all random tapes as the vertex set -- driving the error probability down to O(α^2), starting from, say, 1/4.
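The lemma itself can be checked exhaustively on a small graph. A sketch in Python (numpy), again with the Petersen graph (λ = 2, so α = 2/3) standing in for the expander, checking every cut (S, V∖S):

```python
from itertools import combinations
import numpy as np

# Petersen graph: 3-regular, n = 10, lambda = 2, alpha = 2/3.
n, d = 10, 3
A = np.zeros((n, n), dtype=int)
for i in range(5):
    for u, v in [(i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, i + 5)]:
        A[u, v] = A[v, u] = 1
lam = 2.0            # max(|λ1|, |λn-1|) for the Petersen graph
alpha = lam / d

# Check |E(S,T)| = (d/n)|S||T| ± d·alpha·sqrt(|S||T|) for every cut (S, V\S).
worst = 0.0
for k in range(1, n):
    for S in combinations(range(n), k):
        T = [v for v in range(n) if v not in S]
        e = sum(A[u, v] for u in S for v in T)      # edges across the cut
        dev = abs(e - d * len(S) * len(T) / n)
        bound = d * alpha * (len(S) * len(T)) ** 0.5
        assert dev <= bound + 1e-9
        worst = max(worst, dev / bound)
print(round(worst, 4))   # largest fraction of the bound used by any cut
```

All 2^10 cuts satisfy the lemma; the printed ratio shows how much slack the bound leaves on this particular graph.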

  • Rapid Mixing: Starting from a distribution p on the n vertices, a t-step random walk on G results in a new distribution M^t p (where M = A/d is the random-walk matrix) exponentially close to the uniform distribution u. More precisely, the L2-norm of the difference M^t p - u is at most α^t (and hence the L1-norm, which is twice the statistical distance, is at most sqrt[n] α^t).
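Both norm bounds can be watched decaying numerically. A sketch (numpy) on the Petersen graph (α = 2/3), starting from the worst case of all mass on one vertex:

```python
import numpy as np

# Petersen graph random-walk matrix M = A/d; alpha = 2/3 for this graph.
n, d = 10, 3
A = np.zeros((n, n))
for i in range(5):
    for a, b in [(i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, i + 5)]:
        A[a, b] = A[b, a] = 1
M = A / d
alpha = 2.0 / d

uni = np.full(n, 1.0 / n)          # uniform distribution u
p = np.zeros(n); p[0] = 1.0        # all mass on vertex 0
for t in range(1, 11):
    p = M @ p                      # one step of the walk: p -> Mp
    l2 = np.linalg.norm(p - uni)
    assert l2 <= alpha ** t + 1e-12                         # L2 bound
    assert np.abs(p - uni).sum() <= n ** 0.5 * alpha ** t + 1e-12  # L1 bound
print(round(l2, 6))   # L2 distance from uniform after 10 steps
```

After 10 steps the distance is already below (2/3)^10 ≈ 0.017, as the bound promises.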

  • Sampling by Random Walks: The probability that a t-step random walk is confined to a set B of size βn is between (β-2α)^t and (β+α)^t. (Note that the smaller the α, the closer this is to the β^t corresponding to the random walk on the clique.)
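The confinement probability can be computed exactly by repeatedly restricting the walk's distribution to B. The sketch below (numpy) uses the complete graph K_10 (where λ = 1, so α = 1/9) and one particular convention -- the walk starts at a uniformly random vertex and its t subsequent positions must all land in B -- so it is only a numerical illustration of the two bounds, not a proof of the general statement:

```python
import numpy as np

# Complete graph K_10 (no self-loops): d = 9, lambda = 1, alpha = 1/9.
n = 10
d = n - 1
A = np.ones((n, n)) - np.eye(n)
M = A / d
alpha = 1.0 / d

B = np.arange(5)                  # |B| = beta*n with beta = 1/2
beta = len(B) / n
mask = np.zeros(n); mask[B] = 1.0

p = np.full(n, 1.0 / n)           # uniform start
for t in range(1, 9):
    p = (M @ p) * mask            # take a step, keep only the mass inside B
    confined = p.sum()            # Pr[the first t steps all land in B]
    assert (beta - 2 * alpha) ** t - 1e-12 <= confined <= (beta + alpha) ** t + 1e-12
print(round(confined, 8))
```

For K_10 with this B the exact probability is (1/2)·(4/9)^(t-1), which indeed sits between (5/18)^t and (11/18)^t for every t.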

    The upper bound lets one amplify a BPP algorithm to reduce the error probability exponentially in t using only t log d more random bits than the original BPP algorithm (if its error probability is a sufficiently small constant to begin with).
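The bit accounting can be made concrete: a start vertex costs log n bits and each step costs log d bits. The sketch below uses a hypothetical 4-regular circulant graph (steps ±1, ±2 on a cycle) purely for this accounting -- no expansion is being claimed for it:

```python
import random

# Stand-in 4-regular graph: circulant on n = 2**m vertices with steps {±1, ±2}.
# Chosen only so that log n = m and log d = 2 are whole numbers of bits.
m, t = 16, 10
n, steps = 2 ** m, [1, -1, 2, -2]            # degree d = 4

def walk_samples(seed_bits):
    """Map m + 2t seed bits to the t+1 vertices of a t-step walk."""
    assert len(seed_bits) == m + 2 * t
    v = int("".join(map(str, seed_bits[:m])), 2)   # m bits: start vertex
    out = [v]
    for i in range(t):
        b = seed_bits[m + 2 * i: m + 2 * i + 2]    # 2 bits: which neighbor
        v = (v + steps[2 * b[0] + b[1]]) % n
        out.append(v)
    return out

random.seed(0)
seed = [random.randrange(2) for _ in range(m + 2 * t)]
samples = walk_samples(seed)
print(len(seed), len(samples))   # → 36 11
```

So 36 random bits yield 11 (correlated) sample points, versus 11 · 16 = 176 bits for independent samples; the confinement bound is what makes the correlated samples good enough.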

    The two bounds together can be used to deterministically sample a graph so that the relative size of the max-clique is preserved. This lets one improve the approximation factor of an algorithm for approximating the size of a largest clique: first apply a graph exponentiation, which preserves the relative log-size of the max-clique, and then sample it down as above to a polynomial-sized graph on which the original approximation algorithm is run. (Such an improvement in the approximation factor can be used to amplify hardness-of-approximation results.)
