
A Blogscursion into the World of Expanders

Friday, September 02, 2005

Eigenlectures

In the last two lectures we covered some very basic linear algebra we'll use to study expanders. We chose our topics mostly following Lecture 2 from the Linial-Wigderson course. Restricting to undirected graphs makes the linear algebra involved very simple (and beautiful, I must say) for us. Some of the things we saw are:

  • A graph G defines a linear transform, namely multiplication by its adjacency matrix A. This transform naturally corresponds to how a probability distribution on the nodes of the graph gets transformed in a single step of the (natural) random walk defined on G. (But now that we have a linear transform in our hands, we need not restrict ourselves to applying it to probability distributions. Any real vector is fair game.) A tiny numerical illustration appears in the first sketch after this list.

  • We'll consider undirected graphs, so the adjacency matrix A is symmetric (and of course real). It can then be shown [1] that A = P^T D P, where D is a diagonal matrix and P is an orthogonal matrix (all real). In plain English that means: in some orthonormal basis (given by the rows of P), A is a transformation which simply scales each dimension independently. This change of basis is extremely useful (as we saw in the proof of the "Expander Mixing Lemma").

  • Rows of P are "eigenvectors" of A, with the corresponding "eigenvalues" being the diagonal entries of D [2]. Most of what we said in class about real symmetric matrices becomes fairly intuitive once we look at D instead of A. One of those intuitive things is the characterization of the spectrum using Rayleigh quotients. Since a Rayleigh quotient is defined geometrically (using angles, lengths, etc., with no reference to any particular basis), and the same is true for eigenvalues, it is all right to think of D instead of A. (The second sketch after this list plays with Rayleigh quotients numerically.)

  • Writing A = P^T D P is the same as the "spectral decomposition," A = Σ_i λ_i P_i. Here λ_i is the i-th diagonal entry of D and P_i = v_i v_i^T is the linear transformation which returns the component along v_i, the i-th row of P.

  • For d-regular (connected, non-bipartite) graphs, we saw that λ_0 = d, and |λ_i| < d for i > 0.

  • We analyzed the eigenvalues and eigenvectors of our favorite examples, the clique (with self-loops) and the cycle. The clique has a single non-zero eigenvalue (equal to n). We then saw that any d-regular graph has a d/n "component" of the clique in it. An ideal expander would behave as if it didn't have any other components. (The third sketch after this list checks these facts on small examples.)

  • We also saw some relations between the spectral gap and the expansion of a graph. Kevin will talk more about this the week after next.
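
Here is a minimal numpy sketch of the first two bullets. The 5-cycle below is just a convenient toy example (nothing in the bullets depends on it), and np.linalg.eigh does the diagonalization for us:

    import numpy as np

    # Adjacency matrix of the 5-cycle: undirected, 2-regular, connected, non-bipartite.
    n, d = 5, 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

    # One step of the natural random walk: a distribution p goes to p @ (A/d).
    p = np.zeros(n)
    p[0] = 1.0                              # start at vertex 0
    print(p @ (A / d))                      # mass 1/2 on each neighbour of vertex 0

    # Spectral decomposition A = P^T D P, with P orthogonal and D diagonal.
    # (np.linalg.eigh returns the eigenvectors as columns of V, so P = V.T.)
    evals, V = np.linalg.eigh(A)
    P, D = V.T, np.diag(evals)
    print(np.allclose(A, P.T @ D @ P))      # True
    print(np.allclose(P @ P.T, np.eye(n)))  # True: the rows of P are orthonormal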
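
A second sketch, for the Rayleigh-quotient characterization of the spectrum and the sum-of-projections form of the decomposition (same toy 5-cycle; the random sampling below only makes the min/max claim plausible, it does not prove it):

    import numpy as np

    n, d = 5, 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

    evals, V = np.linalg.eigh(A)            # eigenvalues in ascending order

    # Rayleigh quotient <Ax,x>/<x,x>: over all nonzero x it ranges exactly
    # between the smallest and the largest eigenvalue.
    def rayleigh(x):
        return (x @ A @ x) / (x @ x)

    rng = np.random.default_rng(0)
    q = [rayleigh(rng.standard_normal(n)) for _ in range(10000)]
    print(evals[0] <= min(q), max(q) <= evals[-1])   # True True

    # Spectral decomposition as a sum of rank-one projections: A = sum_i lambda_i v_i v_i^T.
    A_again = sum(lam * np.outer(v, v) for lam, v in zip(evals, V.T))
    print(np.allclose(A, A_again))           # True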
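
And a third sketch, for the two bullets about d-regular graphs and our favorite examples (again with n = 5 as an arbitrary choice):

    import numpy as np

    n, d = 5, 2

    # Clique on n vertices with self-loops: its adjacency matrix is the all-ones matrix J.
    J = np.ones((n, n))
    print(np.round(np.linalg.eigvalsh(J), 6))     # a single nonzero eigenvalue, equal to n

    # The 5-cycle: d-regular, connected and non-bipartite.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

    evals = np.linalg.eigvalsh(A)                  # ascending, so lambda_0 is the last entry
    print(np.isclose(evals[-1], d))                # True: lambda_0 = d
    print(bool(np.all(np.abs(evals[:-1]) < d)))    # True: |lambda_i| < d for i > 0

    # The d/n "component" of the clique inside any d-regular graph: the term of the
    # spectral decomposition along the all-ones eigenvector is exactly (d/n) * J.
    u = np.ones(n) / np.sqrt(n)
    lam0 = u @ A @ u                               # equals d, since every row sum is d
    print(np.allclose(lam0 * np.outer(u, u), (d / n) * J))   # True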


______


1. The proof mentioned above uses induction and is simple, but is not quite "elementary," because it relies on the existence of at least one real eigenvalue for A. The short proof of that fact depends on the Fundamental Theorem of Algebra, which I'm sure everyone considers second nature, but which I don't think belongs in an elementary proof of the above statement. So, if anyone out there can give (pointers to) a simple(r) proof of the spectral decomposition theorem, restricted to real symmetric matrices, that would be great.


2. The concept of eigenvalues/eigenvectors is not restricted to linear transforms defined by real symmetric matrices. A typical linear algebraic exposition would first introduce you to eigenvalues/eigenvectors for general linear transforms and then specialize it to real symmetric matrices. To keep our focus, and to keep things simple, we started directly with real symmetric matrices. But beware that the nice behavior described above does not hold in general.
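
For a tiny illustration of what can fail without symmetry (two standard examples, not from the lectures): a rotation has no real eigenvalues at all, and a shear has a single eigenvalue whose eigenspace is only one-dimensional, so it cannot be diagonalized.

    import numpy as np

    # A 90-degree rotation: no real eigenvalues.
    R = np.array([[0.0, -1.0], [1.0, 0.0]])
    print(np.linalg.eigvals(R))                    # complex: +1j and -1j

    # A shear: eigenvalue 1 with only a one-dimensional eigenspace,
    # so there is no basis of eigenvectors at all.
    S = np.array([[1.0, 1.0], [0.0, 1.0]])
    evals, V = np.linalg.eig(S)
    print(evals)                                   # [1., 1.]
    print(abs(np.linalg.det(V)))                   # ~1e-16: the two eigenvectors are parallel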


ps: I'm having some very strange problems with the way Firefox is rendering things on my newly acquired PowerBook (OS X 10.4). I hope things show up nicely in your browsers... update: fixed it!

1 Comment:

  • I complained above that the proof we had of spectral decomposition for real symmetric matrices was not elementary because it used the fact that a real symmetric matrix has at least one real eigenvalue (which in turn was proven using the fundamental theorem of algebra).

    Charles has sent me this really nifty proof for that, from "Section 17 of Gelfand's Lectures on Linear Algebra."


    Lemma: If B is a real symmetric matrix and <Bx,x> ≥ 0 for all x, then <Be,e> = 0 implies Be is the zero vector.

    Proof: For any vector h and any real number t,

    <B(e+th),(e+th)> = <Be,e> + t^2 <Bh,h> + t <Be,h> + t <B^T e,h> ≥ 0.

    But <Be,e> = 0 and B^T = B, so this says t^2 <Bh,h> + 2t <Be,h> ≥ 0 for all t. Dividing by t > 0 and letting t → 0+ gives <Be,h> ≥ 0; dividing by t < 0 and letting t → 0- gives <Be,h> ≤ 0. So <Be,h> = 0. Now, if this holds for all h (say, the basis vectors), then Be is the zero vector.

    Theorem: If A is symmetric and x=e minimizes <Ax,x> among those (real) x with ||x|| = 1, then e is an eigenvector of A with eigenvalue <Ae,e>.

    Proof: Let λ = <Ae,e> and take B = A - λI. For any x, <Bx,x> = <Ax,x> - λ<x,x>; for unit x this is ≥ 0 by the minimality of e, and it stays ≥ 0 for all x by scaling. Also <Be,e> = 0, so the lemma applies: Be is the zero vector, and hence Ae = λe.

    (To complete our argument, it still needs to be shown that an x which attains the infimum for <Ax,x> exists. Restricted to ||x||=1, <Ax,x> is indeed bounded below. To keep things elementary, let's just take it for granted that then there is an x which minimizes it. Or would that be considered fishy?)

    By Blogger Manoj Prabhakaran, at 6:19 PM  
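
A quick numerical sanity check of the theorem above, just a sketch (the random symmetric matrix, step size, and iteration count below are arbitrary choices): minimize <Ax,x> on the unit sphere by projected gradient steps and verify that the minimizer is an eigenvector with eigenvalue <Ae,e>.

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((6, 6))
    A = (M + M.T) / 2                              # a random real symmetric matrix

    # Minimize <Ax,x> over the unit sphere by stepping against the gradient (2Ax)
    # and renormalizing. With eta this small, I - 2*eta*A is positive definite, so
    # the loop is a shifted power iteration converging to the minimizer e.
    eta = 1.0 / (4.0 * np.linalg.norm(A, 2))
    x = rng.standard_normal(6)
    x /= np.linalg.norm(x)
    for _ in range(50000):
        x = x - 2.0 * eta * (A @ x)
        x /= np.linalg.norm(x)

    lam = x @ A @ x                                # the candidate eigenvalue <Ae,e>
    print(np.allclose(A @ x, lam * x, atol=1e-8))        # True: the minimizer is an eigenvector
    print(np.isclose(lam, np.linalg.eigvalsh(A).min()))  # True: with the smallest eigenvalue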
