Matrices and graphs

To study structure,
tear away the flesh till
only the bone shows.

This haiku came to mind yesterday while I was walking the streets, thinking about matrices and graphs. I am supposed to give an introductory lecture about them to undergraduate students in a few weeks, and it seemed like a good occasion to finally write a post about mathematics.

 

1. Matrices and their graphs. Our goal for today is to introduce a connection between matrices and graphs and to illustrate the strength of graph-theoretic methods in linear algebra and vice versa. What is shown here is only the tip of the iceberg. These methods have turned out to be very fruitful, with many applications inside and outside mathematics. My favourite book on this subject is [1]; I strongly recommend it for further reading.

Definition 1. Let A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n} be a real matrix. The directed graph (or digraph in short) of A is a directed and weighted graph D(A) = (V, E, W) , where V denotes its vertices, E its edges, W its weights, and they are given by
(i) V = \{ 1,2, \dots, n \} ,
(ii) (i,j) \in E for all (i,j) \in V \times V ,
(iii) W: E \to \mathbb{R} , W(i,j) = a_{i,j} .

Although D(A) is formally a complete graph, an edge of zero weight can in effect be omitted. Let S = v_1 v_2 \dots v_k be a walk on D(A) containing the vertices v_1, v_2, \dots, v_k . Its weight is defined as

W(S) = \prod_{l=1}^{k-1} W(v_l, v_{l+1}).

The weight of an edge e can be thought of as some kind of cost for traversing e and similarly, the weight of a walk S is the total cost for traversing S . By defining W(S) as we did, the powers of A can be interpreted in a nice way.

Proposition 1. Let A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n} be a square matrix and let A^k = (a_{i,j}^{(k)})_{i,j=1}^{n} denote its k -th power. Then

a_{i,j}^{(k)} = \sum_{S \in \Gamma_{i,j,k}} W(S),

where

\Gamma_{i,j,k} = \{ S: S \textnormal{ is a walk of length } k \textnormal{ from } i \textnormal{ to } j \}.

Proof. The proof goes by induction on k . For k = 2 , the situation looks like this.

 

[Figure: the 2-step walks between vertices i and j ]

The weight of a 2-step walk from i to j through l is always a_{i,l} a_{l,j} , therefore the sum of all such weights is \sum_{l=1}^{n} a_{i,l}a_{l,j} , which is exactly a_{i,j}^{(2)} .

If the statement holds for k , then the case k + 1 follows by the same argument, noticing that every (k+1) -step walk can be decomposed into a k -step walk followed by a single step. \Box
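To see Proposition 1 in action, here is a minimal numerical check in Python with NumPy; the matrix, the endpoints and the walk length are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# An arbitrary weighted matrix; D(A) has edge weights W(i, j) = a_{i,j}.
A = np.array([[0.5, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 1.5, 1.0]])
n, k = A.shape[0], 3
i, j = 0, 2  # vertices are 0-indexed in code

# Entry (i, j) of A^k, computed by linear algebra.
power_entry = np.linalg.matrix_power(A, k)[i, j]

# The same entry, computed as the sum of weights of all k-step walks
# from i to j; the intermediate vertices range over all of V.
walk_sum = 0.0
for mid in itertools.product(range(n), repeat=k - 1):
    walk = (i,) + mid + (j,)
    weight = 1.0
    for u, v in zip(walk, walk[1:]):  # product of edge weights along the walk
        weight *= A[u, v]
    walk_sum += weight

print(power_entry, walk_sum)  # the two values agree
```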

2. Permutation graphs. A large role is played by the so-called permutation matrices.

Definition 2. A matrix P \in \mathbb{R}^{n \times n} is a permutation matrix if its entries are 0 s and 1 s, arranged so that each row and each column contains precisely one 1 .

If \pi \in S_n is a permutation, then there is a permutation matrix P_\pi whose directed graph is the graph of \pi as a permutation. Of course, this correspondence is an isomorphism between the group of permutations and the group of permutation matrices. Concretely, P_\pi can be written as P_\pi = (\delta_{i, \pi(j)})_{i,j=1}^{n} , where \delta_{i,j} is the Kronecker delta symbol defined as

\delta_{i,j} =\begin{cases}1 & \textnormal{if } i = j \\0 & \textnormal{otherwise}.\end{cases}

Multiplying a matrix by a permutation matrix yields a matrix with the same entries but with its rows or columns permuted, that is,

A P_\pi = (a_{i \pi(j)})_{i,j=1}^{n}

and

P_\pi A = (a_{\pi^{-1}(i) j})_{i,j=1}^{n}

holds for all A = (a_{i,j})_{i,j=1}^{n} .
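These identities are easy to verify numerically. A minimal sketch with NumPy (the permutation is an arbitrary choice, and indices are 0-based in code):

```python
import numpy as np

pi = np.array([2, 0, 3, 1])             # a permutation of {0, 1, 2, 3}
n = len(pi)
P = np.zeros((n, n))
P[pi, np.arange(n)] = 1.0               # P_pi = (delta_{i, pi(j)})

A = np.arange(n * n, dtype=float).reshape(n, n)

assert np.allclose(A @ P, A[:, pi])     # A P_pi: column j is column pi(j) of A
inv = np.argsort(pi)                    # pi^{-1}
assert np.allclose(P @ A, A[inv, :])    # P_pi A: row i is row pi^{-1}(i) of A
assert np.allclose(P.T @ P, np.eye(n))  # P_pi is orthogonal
```

The properties of permutation matrices are summarized in the following proposition.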

Proposition 2. Let P \in \mathbb{R}^{n \times n} be a permutation matrix. Then
(i) P^T P = P P^T = I ,
(ii) P can be factored into a product of matrices P_{i,j} , where P_{i,j} is obtained from the identity matrix I by swapping its i -th and j -th rows,
(iii) and for all A \in \mathbb{R}^{n \times n} , the directed graph D(A) is isomorphic to D(P^T A P) .

Proof. (i) This identity can be verified directly using the formulas for A P_\pi and P_\pi A , or by noticing that if \pi \in S_n , then P_{\pi}^{T} = P_{\pi^{-1}} .
(ii) Since the matrix P_{i,j} corresponds to the transposition \begin{pmatrix} i & j \end{pmatrix} , this statement is immediate.
(iii) Because P can be factored into a product of matrices P_{i,j} , it is enough to show that D(P_{i,j}^{T} A P_{i,j}) is isomorphic to D(A) . First note that D(A P_{i,j}) can be obtained from D(A) by taking every edge going into i and redirecting it to j , and vice versa.

[Figures: the graph before and after the redirection of edges at i and j ]

Similarly, D(P_{i,j}^{T} A) is obtained from D(A) by taking every edge going out of i and repositioning it so that it goes out of j , and vice versa. Combining these two observations, it is clear that D(P_{i,j}^{T} A P_{i,j}) can be obtained from D(A) by simply swapping the labels of the vertices i and j . \Box
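A minimal numerical check of the relabeling in (iii), with P_{i,j} built by swapping two rows of the identity (the size and the indices are arbitrary choices):

```python
import numpy as np

n, i, j = 4, 1, 3
P = np.eye(n)
P[[i, j]] = P[[j, i]]          # P_{i,j}: the identity with rows i and j swapped

A = np.arange(n * n, dtype=float).reshape(n, n)

# Conjugating by P_{i,j} swaps the labels i and j on the vertices of D(A),
# which amounts to swapping rows i, j and columns i, j of A.
B = A.copy()
B[[i, j]] = B[[j, i]]          # swap the rows...
B[:, [i, j]] = B[:, [j, i]]    # ...and the columns
assert np.allclose(P.T @ A @ P, B)
```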

3. Nonnegative matrices. Now we direct our attention to nonnegative and positive matrices. A matrix is nonnegative (positive) if it is nonnegative (positive) elementwise. Many important results about positive matrices were proved by Oskar Perron at the beginning of the 20th century, and some of them were extended to nonnegative matrices by Georg Frobenius. These results are now known as "Perron-Frobenius theory", and they play a very important role in applications, for example in Markov chains and online search engines. Our aim is to show that every nonnegative matrix can be represented in a so-called Frobenius normal form, which is quite useful in applications.

Definition 3. Let A \in \mathbb{R}^{n \times n} be a nonnegative matrix.
(i) A is reducible if there is a permutation matrix P such that

P^T A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix},

where B \in \mathbb{R}^{p \times p} , C \in \mathbb{R}^{p \times (n - p)} and D \in \mathbb{R}^{(n-p) \times (n-p)} for some 1 \leq p < n .
(ii) A is irreducible if it is not reducible.

The main question is this: how can we describe the structure of such matrices with the aid of graph theory?

Definition 4. Let \mathcal{G} be a directed graph and let u and v be two of its vertices.
(i) u and v are weakly connected if there is a directed path from u to v .
(ii) u and v are strongly connected if there is a directed path from u to v and there is a directed path from v to u .
(iii) The equivalence classes of the "u and v are strongly connected" equivalence relation are called the strongly connected components of \mathcal{G} .
(iv) \mathcal{G} is strongly connected if it has only one strongly connected component.

It turns out that Definition 3 and Definition 4 are strongly connected, both in the literal and in the figurative sense.

Proposition 3. A nonnegative matrix A \in \mathbb{R}^{n \times n} is irreducible if and only if its directed graph is strongly connected. Equivalently, A is reducible if and only if its graph is not strongly connected.

Proof. We shall show the latter statement. Suppose that A is reducible, so that for some permutation matrix P we have

P^T A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}, \quad B \in \mathbb{R}^{p \times p}, C \in \mathbb{R}^{p \times (n - p)}, D \in \mathbb{R}^{(n-p) \times (n-p)}.

In this case, the vertices \{1,2,\dots, p\} in D(P^T A P) correspond to the first p rows of P^T A P and the remaining n-p vertices correspond to the last n-p rows. From the structure of P^T A P it can be seen that there are no edges going from \{ p+1, p+2, \dots, n \} to \{ 1, 2, \dots, p \} .

[Figure: the graph of a reducible matrix]

This graph is clearly not strongly connected, since there are no edges going from \{ p+1, p+2, \dots, n \} to \{1,2,\dots, p\} . The other direction goes the same way: if D(A) is not strongly connected, then there is a nonempty proper set of vertices with no edges entering it from the rest of the graph; labeling these vertices with \{ 1,2,\dots,p \} and the remaining ones with \{ p+1, \dots, n \} produces exactly the block structure above. \Box

Now we are ready to prove the main theorem, which guarantees the existence of the so-called Frobenius normal form for nonnegative matrices.

Theorem 1. (Frobenius normal form) Let A \in \mathbb{R}^{n \times n} be a nonnegative matrix. Then there exists a permutation matrix P such that

P^T A P = \begin{pmatrix} A_1 & A_{1,2} & A_{1,3} & \dots & A_{1,k} \\ 0 & A_2 & A_{2,3} & \dots & A_{2,k} \\ 0 & 0 & A_3 & \dots & A_{3,k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & A_k \end{pmatrix},

where A_1, \dots, A_k are square irreducible matrices. This form is called the Frobenius normal form.

Proof. With the concepts we have developed so far, the proof is straightforward, so instead of a formal argument we demonstrate it on an example. Consider the graph D(A) and its strongly connected components; suppose they look like this.

[Figure: the strongly connected components of D(A) ]

The components are ordered so that for all i < j , there are no edges going from the vertices corresponding to A_j into the vertices corresponding to A_i . Labeling the vertices in a way that preserves this order yields a matrix in the desired form. Each A_i is irreducible, since its graph is a strongly connected component of D(A) . \Box
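The proof is constructive, and the construction is easy to carry out in code. Below is a sketch in Python using SciPy's strongly connected components routine; the example matrix and the Kahn-style topological ordering of the condensation are illustrative choices.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# A nonnegative matrix whose digraph is not strongly connected:
# the cycle 1 -> 3 -> 2 -> 1 is one component, and 1 -> 0 leads into {0}.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
n = A.shape[0]

# Strongly connected components of D(A).
k, labels = connected_components(A > 0, directed=True, connection="strong")

# Edges of the condensation (one vertex per component).
cond = {(labels[i], labels[j]) for i in range(n) for j in range(n)
        if A[i, j] != 0 and labels[i] != labels[j]}

# Order the components topologically (Kahn's algorithm), so that every
# remaining edge goes from an earlier component to a later one.
indeg = [0] * k
for _, v in cond:
    indeg[v] += 1
order = [c for c in range(k) if indeg[c] == 0]
for u in order:                          # the list grows while we iterate
    for a, b in [e for e in cond if e[0] == u]:
        cond.remove((a, b))
        indeg[b] -= 1
        if indeg[b] == 0:
            order.append(b)

rank = {c: r for r, c in enumerate(order)}
perm = sorted(range(n), key=lambda v: rank[labels[v]])
P = np.eye(n)[:, perm]        # the permutation matrix of the relabeling

print(P.T @ A @ P)            # block upper triangular: the normal form
```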

4. The spectrum of graphs.

Definition 5. Let \mathcal{G} be a simple graph with vertices V = \{1, 2, \dots, n \} . Its adjacency matrix is defined as the matrix A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n} , where a_{i,j} is given by

a_{i,j} = \begin{cases} 1 & \textnormal{if there is an edge in } \mathcal{G} \textnormal{ connecting } i \textnormal{ and } j \\ 0 & \textnormal{otherwise}. \end{cases}

The set

\sigma(\mathcal{G}) = \{ \lambda: \lambda \textnormal{ is an eigenvalue of } A \}

is called the spectrum of the graph.

Note that since an adjacency matrix is always symmetric, the spectrum of a graph is a subset of the real numbers. Two graphs \mathcal{G}_1 and \mathcal{G}_2 are said to be isospectral if \sigma(\mathcal{G}_1) = \sigma(\mathcal{G}_2) , that is, if they have the same spectrum. Isospectral graphs are not necessarily isomorphic; for example, the two graphs

[Figure: two isospectral but non-isomorphic graphs]

are isospectral but not isomorphic; a numerical check of one classical pair is sketched below.
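For instance, the 4-cycle together with an isolated vertex and the star on 5 vertices both have spectrum \{-2, 0, 0, 0, 2\} , yet only the star is connected. A quick NumPy check:

```python
import numpy as np

# C_4 plus an isolated vertex: a 4-cycle on {0, 1, 2, 3}, vertex 4 isolated.
c4_plus_k1 = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    c4_plus_k1[u, v] = c4_plus_k1[v, u] = 1

# The star K_{1,4}: vertex 0 joined to the leaves 1, 2, 3, 4.
star = np.zeros((5, 5))
for leaf in range(1, 5):
    star[0, leaf] = star[leaf, 0] = 1

print(np.sort(np.linalg.eigvalsh(c4_plus_k1)))  # [-2.  0.  0.  0.  2.]
print(np.sort(np.linalg.eigvalsh(star)))        # [-2.  0.  0.  0.  2.]
```

Although it is not obvious that the spectrum of a graph contains useful information, it very much does, as the following theorem illustrates.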

Theorem 2. The graph \mathcal{G} is bipartite (that is, its chromatic number is at most 2 ) if and only if for every eigenvalue \lambda of \mathcal{G} , -\lambda is also an eigenvalue.

Proof. We only show one direction. Suppose that \mathcal{G} is bipartite. Then its adjacency matrix A looks like

A = \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix}

for some (not necessarily square) matrix B \in \mathbb{R}^{k \times (n-k)} . If \lambda is an eigenvalue of A with eigenvector x , then x can be split into two blocks

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad x_1 \in \mathbb{R}^k, x_2 \in \mathbb{R}^{n - k}.

It is easy to see that the vector

\tilde{x} = \begin{pmatrix} x_1 \\ -x_2 \end{pmatrix}

is an eigenvector of A with eigenvalue -\lambda : indeed, A \tilde{x} = \begin{pmatrix} -B x_2 \\ B^T x_1 \end{pmatrix} = \begin{pmatrix} -\lambda x_1 \\ \lambda x_2 \end{pmatrix} = -\lambda \tilde{x} .

The other direction is more difficult; it uses the theory developed by Perron and Frobenius. Although it is not complicated, it requires a lengthy buildup. For details, see [3], Chapter 11, Problem 19. \Box
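The easy direction can be checked numerically on any bipartite graph; a sketch with the 6-cycle, an arbitrary choice:

```python
import numpy as np

# Adjacency matrix of the 6-cycle, which is bipartite (an even cycle).
n = 6
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(eig, -eig[::-1])  # the spectrum is symmetric about 0
print(eig)                           # [-2. -1. -1.  1.  1.  2.]
```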

As one can easily see, there are only finitely many subsets of \mathbb{R} which can be obtained as the spectrum of some graph with n vertices. As such, one would expect (at least I did) that isospectral graphs are not that common. The following theorem says exactly the opposite.

Theorem 3. Let \mathcal{T}_n denote the set of trees with n vertices and define the set

\mathcal{I}_n = \{ T \in \mathcal{T}_n: \textnormal{there is a } T^* \in \mathcal{T}_n \textnormal{ not isomorphic to } T \textnormal{ which is isospectral with } T \}.

Then

\lim_{n \to \infty} \frac{|\mathcal{I}_n|}{|\mathcal{T}_n|} = 1.

We shall not prove this theorem here; the proof can be found in the conference proceedings [2]. The main idea of the proof is that certain subgraphs of trees can be replaced with a different subgraph without changing the spectrum.

References.

[1] R. A. Brualdi and D. Cvetković, A Combinatorial Approach to Matrix Theory and Its Applications, Discrete Mathematics and Its Applications, Chapman & Hall, 2009.

[2] F. Harary (editor), New Directions in the Theory of Graphs, Academic Press, New York, 1973.

[3] L. Lovász, Combinatorial Problems and Exercises, 2nd edition, North-Holland, 1992.

Author: Tivadar Danka

Applied Pure Mathematician
