Jekyll2019-08-08T19:38:50+00:00https://cosmic-cortex.github.io/feed.xmlTivadar DankaPersonal homepageTivadar Dankatheodore.danka@gmail.comMatrices and Graphs2015-11-19T00:00:00+00:002015-11-19T00:00:00+00:00https://cosmic-cortex.github.io/matrices-and-graphs<p>To study structure,<br />
tear away the flesh till<br />
only the bone shows.</p>
<p>This haiku came to my mind yesterday while I was walking in the streets thinking about matrices and graphs. I am supposed to give an introductory lecture about them for undergraduate students in a few weeks, and it seemed like a good reason to finally write a post about mathematics.
<!--more--></p>
<p><strong>1. Matrices and their graphs. </strong>Our goal for today is to introduce a connection between matrices and graphs and to illustrate the strength of graph theoretic methods in linear algebra and vice versa. What is shown here is only the tip of the iceberg. These methods turned out to be very fruitful and they have many applications inside and outside mathematics. My favourite book on this subject is [1]; I strongly recommend it for further reading.</p>
<p><strong>Definition 1.</strong> Let <script type="math/tex">A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n}</script> be a real matrix. The directed graph (or digraph in short) of <script type="math/tex">A</script> is a directed and weighted graph <script type="math/tex">D(A) = (V, E, W)</script>, where <script type="math/tex">V</script> denotes its vertices, <script type="math/tex">E</script> its edges, <script type="math/tex">W</script> its weights, and they are given by<br />
(i) <script type="math/tex">V = \{ 1,2, \dots, n \}</script>,<br />
(ii) <script type="math/tex">(i,j) \in E</script> for all <script type="math/tex">(i,j) \in V \times V</script>,<br />
(iii) <script type="math/tex">W: E \to \mathbb{R}</script>, <script type="math/tex">W(i,j) = a_{i,j}</script>.</p>
<p>Although <script type="math/tex">D(A)</script> is formally a complete graph, edges with zero weight contribute nothing and can effectively be omitted. Let <script type="math/tex">S = v_1 v_2 \dots v_k</script> be a walk on <script type="math/tex">D(A)</script> containing the vertices <script type="math/tex">v_1, v_2, \dots, v_k</script>. Its weight is defined as</p>
<p style="text-align: center;"> $$ W(S) = \prod_{l=1}^{k-1} W(v_l, v_{l+1}).$$</p>
<p>The weight of an edge <script type="math/tex">e</script> can be thought of as some kind of cost for traversing <script type="math/tex">e</script> and similarly, the weight of a walk <script type="math/tex">S</script> is the total cost for traversing <script type="math/tex">S</script>. By defining <script type="math/tex">W(S)</script> as we did, the powers of <script type="math/tex">A</script> can be interpreted in a nice way.</p>
<p><strong>Proposition 1.</strong> Let <script type="math/tex">A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n}</script> be a square matrix and let <script type="math/tex">A^k = (a_{i,j}^{(k)})_{i,j=1}^{n}</script> denote its <script type="math/tex">k</script>-th power. Then</p>
<p style="text-align: center;">$$ a_{i,j}^{(k)} = \sum_{S \in \Gamma_{i,j,k}} W(S), $$</p>
<p>where</p>
<p style="text-align: center;">$$ \Gamma_{i,j,k} = \{ S: S \text{ is a walk of length } k \text{ from } i \text{ to } j \}. $$</p>
<p><em>Proof.</em> The proof goes by induction with respect to <script type="math/tex">k</script>. For <script type="math/tex">k = 2</script>, the situation looks something like this.</p>
<figure>
<img src="https://nonemptyspaces.files.wordpress.com/2015/11/matrix_multiplication.jpg" alt="matrix_multiplication" style="display: block; margin-left: auto; margin-right: auto;" />
<figcaption> 2 step long walks between vertices <em> i </em> and <em> j </em> </figcaption>
</figure>
<p>The weight of a <script type="math/tex">2</script> step long walk from <script type="math/tex">i</script> to <script type="math/tex">j</script> through <script type="math/tex">l</script> is always <script type="math/tex">a_{i,l} a_{l,j}</script>, therefore the sum of all the weights is <script type="math/tex">\sum_{l=1}^{n} a_{i,l}a_{l,j}</script>, which is exactly <script type="math/tex">a_{i,j}^{(2)}</script>.</p>
<p>If the statement holds for <script type="math/tex">k</script>, then the <script type="math/tex">k + 1</script> case follows by the same argument, noticing that every <script type="math/tex">k + 1</script> step long walk can be decomposed into a <script type="math/tex">k</script> step long walk and a <script type="math/tex">1</script> step long walk. <script type="math/tex">\Box</script></p>
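<p>Proposition 1 is easy to check numerically. The sketch below is my own illustration (assuming NumPy is available; the helper name <code>walk_weight_sum</code> is made up): it enumerates all walks of a given length between two vertices, sums their weights, and compares the result with the corresponding entry of the matrix power.</p>

```python
import itertools

import numpy as np

# A small random matrix; D(A) is the complete digraph on n vertices,
# with W(i, j) = a_{i,j} (zero-weight edges contribute nothing).
rng = np.random.default_rng(0)
n, k = 4, 3
A = rng.integers(-2, 3, size=(n, n)).astype(float)

def walk_weight_sum(A, i, j, k):
    """Sum of W(S) over all walks S of length k (in edges) from i to j."""
    n = A.shape[0]
    total = 0.0
    # A walk of length k passes through k - 1 intermediate vertices.
    for mid in itertools.product(range(n), repeat=k - 1):
        walk = (i, *mid, j)
        w = 1.0
        for u, v in zip(walk, walk[1:]):
            w *= A[u, v]
        total += w
    return total

Ak = np.linalg.matrix_power(A, k)
for i in range(n):
    for j in range(n):
        assert np.isclose(Ak[i, j], walk_weight_sum(A, i, j, k))
```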
<p><strong>2. Permutation graphs. </strong>A large role is played by the so-called permutation matrices.</p>
<p><strong>Definition 2. </strong>A matrix <script type="math/tex">P \in \mathbb{R}^{n \times n}</script> is a permutation matrix if its entries are <script type="math/tex">0</script>s and <script type="math/tex">1</script>s, arranged so that each row and each column contains precisely one <script type="math/tex">1</script>.</p>
<p>If <script type="math/tex">\pi \in S_n</script> is a permutation then there is a permutation matrix <script type="math/tex">P_\pi</script> such that the graph of <script type="math/tex">P_\pi</script> is the graph of <script type="math/tex">\pi</script> as a permutation. Of course, this correspondence is an isomorphism between the group of permutations and the group of permutation matrices. In this case, <script type="math/tex">P_\pi</script> can be written as <script type="math/tex">P_\pi = (\delta_{i, \pi(j)})_{i,j=1}^{n}</script>, where <script type="math/tex">\delta_{i,j}</script> is the Kronecker delta symbol defined as</p>
<p style="text-align: center;">$$ \delta_{i,j} =\begin{cases}1 & \text{if } i = j \\0 & \text{otherwise}.\end{cases} $$</p>
<p>Multiplying a matrix by a permutation matrix yields a matrix with the same elements, but with its rows or columns permuted; that is,</p>
<p style="text-align: center;">$$ A P_\pi = (a_{i \pi(j)})_{i,j=1}^{n} $$</p>
<p>and</p>
<p style="text-align: center;">$$ P_\pi A = (a_{\pi^{-1}(i) j})_{i,j=1}^{n} $$</p>
<p>holds for all <script type="math/tex">A = (a_{i,j})_{i,j=1}^{n}</script>. The properties of permutation matrices are summarized in the following proposition.</p>
<p><strong>Proposition 2.</strong> Let <script type="math/tex">P \in \mathbb{R}^{n \times n}</script> be a permutation matrix. Then<br />
(i) <script type="math/tex">P^T P = P P^T = I</script>,<br />
(ii) <script type="math/tex">P</script> can be factored into a product of matrices <script type="math/tex">P_{i,j}</script>, where <script type="math/tex">P_{i,j}</script> is obtained from the identity matrix <script type="math/tex">I</script> by interchanging its <script type="math/tex">i</script>-th and <script type="math/tex">j</script>-th rows,<br />
(iii) and for all <script type="math/tex">A \in \mathbb{R}^{n \times n}</script>, the directed graph <script type="math/tex">D(A)</script> is isomorphic with <script type="math/tex">D(P^T A P)</script>.</p>
<p><em>Proof. </em>(i) This identity can be calculated easily by using the identities describing <script type="math/tex">AP_\pi</script> and <script type="math/tex">P_\pi A</script>, or by noticing that if <script type="math/tex">\pi \in S_n</script>, then <script type="math/tex">P_{\pi}^{T} = P_{\pi^{-1}}</script>.<br />
(ii) Since the matrix <script type="math/tex">P_{i,j}</script> corresponds to the transposition <script type="math/tex">% <![CDATA[
\begin{pmatrix} i & j \end{pmatrix} %]]></script>, this statement is immediate.<br />
(iii) Because <script type="math/tex">P</script> can be factored into the product of matrices <script type="math/tex">P_{i,j}</script>, it is enough to show that <script type="math/tex">D(P_{i,j}^{T} A P_{i,j})</script> is isomorphic to <script type="math/tex">D(A)</script>. First note that <script type="math/tex">D(A P_{i,j})</script> can be obtained by selecting every edge in <script type="math/tex">D(A)</script> going <em>into</em> <script type="math/tex">i</script> and redirecting them to <script type="math/tex">j</script> and vice versa.</p>
<figure>
<img src="https://nonemptyspaces.files.wordpress.com/2015/11/permutation_conjugation_1.jpg" alt="permutation_conjugation_1" style="display: block; margin-left: auto; margin-right: auto;" />
<figcaption> Before </figcaption>
</figure>
<figure>
<img src="https://nonemptyspaces.files.wordpress.com/2015/11/permutation_conjugation_2.jpg" alt="permutation_conjugation_2" style="display: block; margin-left: auto; margin-right: auto;" />
<figcaption> After </figcaption>
</figure>
<p>Similarly, <script type="math/tex">D(P_{i,j}^{T} A)</script> is obtained from <script type="math/tex">D(A)</script> by selecting every edge going <em>out</em> from <script type="math/tex">i</script> and redirecting it so that it goes out from <script type="math/tex">j</script>, and vice versa. Combining these two observations, it is clear that <script type="math/tex">D(P_{i,j}^{T} A P_{i,j})</script> can be obtained from <script type="math/tex">D(A)</script> by simply switching the labels of the vertices <script type="math/tex">i</script> and <script type="math/tex">j</script>. <script type="math/tex">\Box</script></p>
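<p>Statement (iii) can be checked concretely for a single transposition matrix. The sketch below is my own illustration (assuming NumPy): it conjugates a random matrix by <script type="math/tex">P_{i,j}</script> and verifies that the result is exactly <script type="math/tex">A</script> with the labels of vertices <script type="math/tex">i</script> and <script type="math/tex">j</script> swapped.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n, i, j = 4, 1, 3
A = rng.standard_normal((n, n))

# P_{i,j}: the identity matrix with rows i and j interchanged.
P = np.eye(n)
P[[i, j]] = P[[j, i]]

B = P.T @ A @ P

# Relabeling vertices i and j in D(A) swaps both the corresponding
# rows and columns, which is exactly what conjugation by P does.
perm = np.arange(n)
perm[[i, j]] = perm[[j, i]]
assert np.allclose(B, A[np.ix_(perm, perm)])
```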
<p><strong>3. Nonnegative matrices. </strong>Now we direct our attention to nonnegative and positive matrices. A matrix is nonnegative (positive) if it is nonnegative (positive) elementwise. Many important results were proven by Oskar Perron at the beginning of the 20th century for positive matrices, and some of them were extended to nonnegative matrices by Georg Frobenius. These results are now known as ‘‘Perron–Frobenius theory’’, and they play a very important role in applications, for example Markov chains, online search engines, etc. Our aim is to show that every nonnegative matrix can be represented in a so-called Frobenius normal form, which is quite useful in applications.</p>
<p><strong>Definition 3. </strong>Let <script type="math/tex">A \in \mathbb{R}^{n \times n}</script> be a nonnegative matrix.<br />
(i) <script type="math/tex">A</script> is reducible if there is a permutation matrix <script type="math/tex">P</script> such that</p>
<p style="text-align: center;">$$ P^T A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}, $$</p>
<p>where <script type="math/tex">B \in \mathbb{R}^{p \times p}</script>, <script type="math/tex">C \in \mathbb{R}^{p \times (n - p)}</script> and <script type="math/tex">D \in \mathbb{R}^{(n-p) \times (n-p)}</script>.<br />
(ii) <script type="math/tex">A</script> is irreducible if it is not reducible.</p>
<p>The main question is: how can we describe the structure of such matrices with the aid of graph theory?</p>
<p><strong>Definition 4. </strong>Let <script type="math/tex">\mathcal{G}</script> be a directed graph and let <script type="math/tex">u</script> and <script type="math/tex">v</script> be two of its vertices.<br />
(i) <script type="math/tex">v</script> is <em>reachable</em> from <script type="math/tex">u</script> if there is a directed path from <script type="math/tex">u</script> to <script type="math/tex">v</script>.<br />
(ii) <script type="math/tex">u</script> and <script type="math/tex">v</script> are <em>strongly connected</em> if there is a directed path from <script type="math/tex">u</script> to <script type="math/tex">v</script> and there is a directed path from <script type="math/tex">v</script> to <script type="math/tex">u</script>.<br />
(iii) The equivalence classes of the ‘’<script type="math/tex">u</script> and <script type="math/tex">v</script> are strongly connected’’ equivalence relation are called <em>strongly connected components </em>of <script type="math/tex">\mathcal{G}</script>.<br />
(iv) <script type="math/tex">\mathcal{G}</script> is <em>strongly connected</em> if it has only one strongly connected component.</p>
<p>It turns out that Definition 3 and Definition 4 are strongly connected. (Both in a literal and in a figurative sense.)</p>
<p><strong>Proposition 3. </strong>A nonnegative matrix <script type="math/tex">A \in \mathbb{R}^{n \times n}</script> is irreducible if and only if its directed graph is strongly connected. Equivalently, <script type="math/tex">A</script> is reducible if and only if its graph is not strongly connected.</p>
<p><em>Proof. </em>We shall show the latter statement. Suppose that <script type="math/tex">A</script> is reducible, so that for some permutation matrix <script type="math/tex">P</script> we have</p>
<p style="text-align: center;">$$ P^T A P = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}, \quad B \in \mathbb{R}^{p \times p}, C \in \mathbb{R}^{p \times (n - p)}, D \in \mathbb{R}^{(n-p) \times (n-p)}. $$</p>
<p>In this case, the vertices <script type="math/tex">\{1,2,\dots, p\}</script> in <script type="math/tex">D(P^T A P)</script> correspond to the first <script type="math/tex">p</script> rows of <script type="math/tex">P^T A P</script>, and the remaining <script type="math/tex">n-p</script> vertices correspond to the final rows. From the structure of <script type="math/tex">P^T A P</script> it can be seen that there are no edges going from <script type="math/tex">\{ p+1, p+2, \dots, n \}</script> to <script type="math/tex">\{ 1, 2, \dots, p \}</script>.</p>
<p><img src="https://nonemptyspaces.files.wordpress.com/2015/11/reducible_matrix_graph.jpg" alt="reducible_matrix_graph" style="display: block; margin-left: auto; margin-right: auto;" /></p>
<p>This graph is clearly not strongly connected, since there are no edges going from <script type="math/tex">\{ p+1, p+2, \dots, n \}</script> to <script type="math/tex">\{1,2,\dots, p\}</script>. The other direction goes the same way: if <script type="math/tex">D(A)</script> looks like the above picture, then the vertices of the part receiving no incoming edges from the rest should be labeled with <script type="math/tex">\{ 1,2,\dots,p \}</script> and the remaining vertices with the rest. <script type="math/tex">\Box</script></p>
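<p>Proposition 3 also gives a practical way to test irreducibility: check whether <script type="math/tex">D(A)</script> is strongly connected. A minimal sketch of mine (assuming NumPy; <code>is_strongly_connected</code> is a made-up helper name) does this with a depth-first search from vertex <script type="math/tex">0</script> in the graph and in its reverse.</p>

```python
import numpy as np

def is_strongly_connected(A):
    """Check strong connectivity of D(A) (edges = nonzero entries):
    every vertex must be reachable from vertex 0 in the graph and
    in the reverse graph."""
    n = A.shape[0]

    def all_reachable(M):
        seen = {0}
        stack = [0]
        while stack:
            u = stack.pop()
            for v in np.nonzero(M[u])[0]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    return all_reachable(A != 0) and all_reachable(A.T != 0)

# A reducible example: block upper triangular, no edges back into {0}.
A_red = np.array([[1., 1.],
                  [0., 1.]])
# An irreducible example: the directed cycle 0 -> 1 -> 2 -> 0.
A_irr = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])

assert not is_strongly_connected(A_red)
assert is_strongly_connected(A_irr)
```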
<p>Now we are ready to prove the main theorem which guarantees the existence of the so-called Frobenius normal form for nonnegative matrices.</p>
<p><strong>Theorem 1. </strong>(Frobenius normal form) Let <script type="math/tex">A \in \mathbb{R}^{n \times n}</script> be a nonnegative matrix. Then there exists a permutation matrix <script type="math/tex">P</script> such that</p>
<p style="text-align: center;">$$ P^T A P = \begin{pmatrix} A_1 & A_{1,2} & A_{1,3} & \dots & A_{1,k} \\ 0 & A_2 & A_{2,3} & \dots & A_{2,k} \\ 0 & 0 & A_3 & \dots & A_{3,k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\0 & 0 & 0 & \dots & A_k \end{pmatrix}, $$</p>
<p>where <script type="math/tex">A_1, \dots, A_k</script> are square irreducible matrices. This form is called the Frobenius normal form.</p>
<p><em>Proof.</em> With the concepts we have developed so far, the proof is simple enough that it is easier to demonstrate it on an example rather than give a formal proof. Consider the graph <script type="math/tex">D(A)</script> and its strongly connected components. For example, it looks like this.</p>
<p><img src="https://nonemptyspaces.files.wordpress.com/2015/11/frobenius_normal_form.jpg" alt="frobenius_normal_form" style="display: block; margin-left: auto; margin-right: auto;" /></p>
<p>The components are ordered in a way such that for all <script type="math/tex">% <![CDATA[
i < j %]]></script>, there are no edges going from vertices corresponding to <script type="math/tex">A_j</script> into vertices corresponding to <script type="math/tex">A_i</script>. Labeling the vertices in a way that preserves this order yields a matrix of the desired form. It is also clear that each <script type="math/tex">A_i</script> is irreducible, since its graph is a strongly connected component. <script type="math/tex">\Box</script></p>
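<p>The proof sketch translates directly into an algorithm: compute the strongly connected components of <script type="math/tex">D(A)</script>, order them topologically, and relabel the vertices accordingly. The code below is my own sketch (assuming NumPy; the helper name <code>frobenius_permutation</code> is made up): it uses Kosaraju's algorithm, which produces the components in a topological order of the condensation, and checks that the relabeled matrix has the required zero blocks.</p>

```python
import numpy as np

def frobenius_permutation(A):
    """Return a relabeling of the vertices of D(A) that puts A into
    Frobenius normal form, using Kosaraju's two-pass algorithm."""
    n = A.shape[0]
    adj = [list(np.nonzero(A[u])[0]) for u in range(n)]
    radj = [list(np.nonzero(A[:, u])[0]) for u in range(n)]

    # First pass: record vertices in order of DFS finishing time.
    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)

    # Second pass on the reverse graph: components come out in a
    # topological order of the condensation, so every edge goes from
    # a lower component label to an equal or higher one.
    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    perm = sorted(range(n), key=lambda u: comp[u])
    return np.array(perm), comp

rng = np.random.default_rng(3)
n = 6
A = (rng.random((n, n)) < 0.3).astype(float)

perm, comp = frobenius_permutation(A)
B = A[np.ix_(perm, perm)]

# The zero blocks of the normal form: no edge may go from a later
# component back into an earlier one.
for i in range(n):
    for j in range(n):
        if comp[perm[i]] > comp[perm[j]]:
            assert B[i, j] == 0.0
```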
<p><strong>4. The spectrum of graphs.</strong></p>
<p><strong>Definition 5. </strong>Let <script type="math/tex">\mathcal{G}</script> be a simple graph with vertices <script type="math/tex">V = \{1, 2, \dots, n \}</script>. Its adjacency matrix is defined as the matrix <script type="math/tex">A = (a_{i,j})_{i,j=1}^{n} \in \mathbb{R}^{n \times n}</script>, where <script type="math/tex">a_{i,j}</script> is given by</p>
<p style="text-align: center;">$$ a_{i,j} = \begin{cases} 1 & \text{if there is an edge in } \mathcal{G} \text{ connecting } i \text{ and } j \\ 0 & \text{otherwise}. \end{cases} $$</p>
<p>The set</p>
<p style="text-align: center;">$$ \sigma(\mathcal{G}) = \{ \lambda: \lambda \text{ is an eigenvalue of } A \} $$</p>
<p>is called the <em>spectrum of the graph</em>.</p>
<p>Note that since an adjacency matrix is always symmetric, the spectrum of a graph is a subset of the real numbers. Two graphs <script type="math/tex">\mathcal{G}_1</script> and <script type="math/tex">\mathcal{G}_2</script> are said to be <em>isospectral</em> if <script type="math/tex">\sigma(\mathcal{G}_1) = \sigma(\mathcal{G}_2)</script>, that is, if they have the same spectrum. Isospectral graphs are not necessarily isomorphic; for example, the two graphs</p>
<p><img src="https://nonemptyspaces.files.wordpress.com/2015/11/isospectral.jpg" alt="isospectral" style="display: block; margin-left: auto; margin-right: auto;" /></p>
<p>are isospectral but not isomorphic. Although it is not obvious that the spectrum of a graph contains useful information, it is very much so, as the following theorem illustrates.</p>
<p><strong>Theorem 2. </strong>The graph <script type="math/tex">\mathcal{G}</script> is bipartite (that is, its chromatic number is <script type="math/tex">2</script>) if and only if for every eigenvalue <script type="math/tex">\lambda</script> of <script type="math/tex">\mathcal{G}</script>, <script type="math/tex">-\lambda</script> is also an eigenvalue.</p>
<p><em>Proof. </em>We only show one direction. Suppose that <script type="math/tex">\mathcal{G}</script> is bipartite. Then its adjacency matrix <script type="math/tex">A</script> looks like</p>
<p style="text-align: center;">$$ A = \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix} $$</p>
<p>for some (not necessarily square) matrix <script type="math/tex">B \in \mathbb{R}^{k \times l}</script>. If <script type="math/tex">\lambda</script> is an eigenvalue of <script type="math/tex">A</script> with eigenvector <script type="math/tex">x</script>, then <script type="math/tex">x</script> can be split into two blocks</p>
<p style="text-align: center;">$$ x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad x_1 \in \mathbb{R}^k, x_2 \in \mathbb{R}^{n - k}. $$</p>
<p>It is easy to see that the vector</p>
<p style="text-align: center;">$$ \tilde{x} = \begin{pmatrix} x_1 \\ -x_2 \end{pmatrix} $$</p>
<p>is an eigenvector of <script type="math/tex">A</script> with the eigenvalue <script type="math/tex">-\lambda</script>.</p>
<p>The other direction is more difficult. It utilizes the theory developed by Perron and Frobenius. Although it is not complicated, it requires a lengthy buildup. For details, see [3], Chapter 11 Problem 19. <script type="math/tex">\Box</script></p>
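<p>Both the symmetry of the spectrum and the eigenvector construction from the proof can be verified numerically. The sketch below is my own illustration (assuming NumPy): it builds a bipartite adjacency matrix in the block form above from a random biadjacency block and checks both claims.</p>

```python
import numpy as np

# A bipartite graph specified by a random biadjacency block B
# (an illustrative example, not a graph from the post).
rng = np.random.default_rng(4)
k, l = 3, 4
B = (rng.random((k, l)) < 0.5).astype(float)
A = np.block([[np.zeros((k, k)), B],
              [B.T, np.zeros((l, l))]])

# The spectrum is symmetric about 0: lambda and -lambda pair up.
evals = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(evals, -evals[::-1])

# The eigenvector construction from the proof: flipping the sign of
# the second block of an eigenvector negates its eigenvalue.
lam, vecs = np.linalg.eigh(A)
x = vecs[:, -1]                        # eigenvector of the top eigenvalue
x_tilde = np.concatenate([x[:k], -x[k:]])
assert np.allclose(A @ x_tilde, -lam[-1] * x_tilde)
```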
<p>As one can easily see, there are only finitely many subsets of <script type="math/tex">\mathbb{R}</script> which can be obtained as the spectrum of some graph with <script type="math/tex">n</script> vertices. As such, one would expect (at least I did) that isospectral graphs are not that common. The following theorem says exactly the opposite.</p>
<p><strong>Theorem 3. </strong>Let <script type="math/tex">\mathcal{T}_n</script> denote the set of trees with <script type="math/tex">n</script> vertices and define the set</p>
<p style="text-align: center;">$$ \mathcal{I}_n = \{ T \in \mathcal{T}_n: \text{ there is a } T^* \in \mathcal{T}_n \text{ not isomorphic to } T \text{ which is isospectral with } T \}. $$</p>
<p>Then</p>
<p style="text-align: center;">$$ \lim_{n \to \infty} \frac{|\mathcal{I}_n|}{|\mathcal{T}_n|} = 1. $$</p>
<p>We shall not prove this theorem here; the proof can be found in the conference proceedings [2]. The main idea of the proof is that certain subgraphs of trees can be replaced with different subgraphs without changing the spectrum.</p>
<p><strong>References.</strong></p>
<p>[1] R. A. Brualdi and D. Cvetkovic, <em>A Combinatorial Approach to Matrix Theory and Its Applications</em>, Discrete Mathematics and Its Applications, Chapman & Hall, 2009</p>
<p>[2] F. Harary (editor), <em>New Directions in the Theory of Graphs</em>, New York, Academic Press, 1973</p>
<p>[3] L. Lovász, <em>Combinatorial Problems and Exercises</em>, 2nd edition, North-Holland, 1992</p>The Biologist, the Mathematician and the Stalker2015-11-08T00:00:00+00:002015-11-08T00:00:00+00:00https://cosmic-cortex.github.io/the-biologist-the-mathematician-and-the-stalker<p>The Biologist and I were waiting in a dirty bar, slowly sipping our cheap and bitter beer. The Stalker arrived early in the morning; the fog hadn’t even settled yet. We jumped inside our car (without a roof, of course, as the Stalker requested), then slowly started our trip into the Zone. After sneaking past a few soldiers… wait, no. Scratch that. Here is the real story of our trip into the Chernobyl exclusion zone and to the ghost city of Pripyat.</p>
<!--more-->
<p><strong>The preparations.</strong> The two of us (namely, the Biologist and the Mathematician, of whom I am the latter) had wanted to travel to the Chernobyl exclusion zone - from now on, the Zone - since our childhood. When the Biologist mentioned to me that he was planning this trip, I immediately wanted to join. After a few days, we already had our plane tickets to Kiev and had planned out the whole tour. When we told our plan to others, they thought that we were crazy. My girlfriend was afraid that I was going to die from radiation. My mother was afraid that I was going to die from radiation, that my unborn children would die from radiation, and that everything I touch would turn to nuclear waste. (If our plane didn’t crash on our way there, she said.)
The truth is, there is no need to worry. Nowadays it is safe to travel in the Zone, and hundreds of people do so every day. So, after a quick planning and research phase, we were off to Ukraine to see the Zone for ourselves.</p>
<p><strong>The arrival.</strong> We arrived at the first checkpoint at 10:30 in the morning, where we met our guide, the Stalker. (Actually, he was not a Stalker in the precise sense. A stalker sneaks into the Zone illegally, while we went inside completely legally.)</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/exclusion-zone.jpg"><img class="alignnone wp-image-322" src="https://nonemptyspaces.files.wordpress.com/2015/10/exclusion-zone.jpg?w=300" alt="exclusion-zone" width="476" height="343" /></a></p>
<p>The Zone is divided into two main parts: a highly contaminated and a less contaminated one. The former lies inside a circle of 10 km radius, while the latter has a radius of roughly 30 km. The whole exclusion zone covers approximately 2500 km<sup>2</sup>, and it is approachable from several military checkpoints. In order to go into the Zone, you need a guide, an official permit and a detailed itinerary. The first surprise came at the first checkpoint, because there were several buses full of tourists waiting to go in before us. There are whole enterprises in Ukraine dedicated to organizing tours to the Zone.
After we gained entrance, we were off to the village of Chernobyl, which, as it turned out, is still populated.</p>
<p><strong>Chernobyl village.</strong> Chernobyl, the small village after which the nearby nuclear power plant was named, was completely evacuated. A few villagers returned over the years, but the village is now mostly populated by the workers building the new sarcophagus, administrative personnel, etc. Everything is provided for them for free by the government, for example food and housing.</p>
<p>Near the entrance of the village there are two particularly interesting memorials. One of them is an iron statue of an angel with a trumpet.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/gabriel_monument.jpg"><img class="wp-image-325 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/gabriel_monument.jpg?w=680" alt="The third trumpet of revelation" width="771" height="518" /></a>According to the Book of Revelation, seven trumpets will sound, each one before an apocalyptic event. The third trumpet will bring a falling star.</p>
<blockquote><b>"The third angel sounded his trumpet, and a great star, blazing like a torch, fell from the sky on a third of the rivers and on the springs of water— the name of the star is Wormwood. A third of the waters turned bitter, and many people died from the waters that had become bitter".</b></blockquote>
<p>After the explosion at the nuclear plant, some people thought that this was the event that had been predicted. It is also worth noting that the word chernobyl is the Ukrainian word for <a href="https://en.wikipedia.org/wiki/Artemisia_vulgaris">artemisia vulgaris</a>, or in English, common wormwood.</p>
<p>Right next to the angel, there is a graveyard for the evacuated villages.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/village_graveyard.jpg"><img class="wp-image-328 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/village_graveyard.jpg?w=680" alt="village_graveyard" width="774" height="520" /></a></p>
<p>More than 100,000 people were evacuated from more than 100 villages. Along this path, each village has its own cross. After this brief stop, we were on our way to the center of the Zone.</p>
<p><strong>The Russian Woodpecker.</strong> Besides the nuclear power plant, there are (or to be more precise, <em>were</em>) other Soviet military-operated objects in the Zone. The biggest and most interesting one is a huge <a href="https://en.wikipedia.org/wiki/Over-the-horizon_radar">over-the-horizon</a> radar called <a href="https://en.wikipedia.org/wiki/Duga_radar">DUGA-3</a>, nicknamed the Russian Woodpecker. It broadcast extremely powerful shortwave signals, which bounced off the ionosphere, thereby extending its range over the horizon. This required immense power; in fact, it was so powerful that <em>every</em> shortwave frequency in Europe was disrupted by a <a href="https://www.youtube.com/watch?v=aOMVdOc9UbE">sharp clicking sound</a>, hence the name Russian Woodpecker. This went on for more than 10 years, and since it was a Soviet military project, no one knew what it really was. There were speculations about Soviet mind control and weather control experiments, but of course the truth was much simpler.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_1.jpg"><img class="wp-image-330 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_1.jpg?w=680" alt="woodpecker_1" width="772" height="519" /></a></p>
<p>Our first glimpse was this. The structure (which is basically a really, really large piece of scrap by now) is located deep within the forest, and its area is guarded.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_2.jpg"><img class="wp-image-331 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_2.jpg?w=680" alt="woodpecker_2" width="774" height="520" /></a></p>
<p>This thing is so ungodly huge that you cannot even fit it in a picture from this location. It is 90 meters high and 750 meters long.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_3.jpg"><img class="wp-image-332 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_3.jpg?w=680" alt="woodpecker_3" width="773" height="1026" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_4.jpg"><img class="wp-image-333 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/woodpecker_4.jpg?w=680" alt="woodpecker_4" width="774" height="1155" /></a></p>
<p>If you look at it from the right angle, you can see interesting patterns. The Americans had satellite images of this thing, and they thought it was a children’s camp. An interesting fact is that the construction of this radar cost more than the entire nuclear power plant. After spending a few minutes around it, wondering about how big this thing is, we were off to see the reactor.</p>
<p><strong>On the road to Reactor No. 4.</strong> Before we arrived at the reactor where the explosion happened, we saw a few interesting things on the road. One of them was a village which was completely demolished and buried, except for the kindergarten building. It was the only building made from concrete; the others were made from wood. You can wash radioactive dust off concrete, but you cannot do that with wood.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_1.jpg"><img class="wp-image-335 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_1.jpg?w=680" alt="kindergarten_1" width="774" height="526" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_2.jpg"><img class="wp-image-336 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_2.jpg?w=680" alt="kindergarten_2" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_3.jpg"><img class="wp-image-337 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/kindergarten_3.jpg?w=680" alt="kindergarten_3" width="770" height="523" /></a></p>
<p>After the short visit to the kindergarten, we saw the reactors for the first time, although from a distance.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_1.jpg"><img class="wp-image-339 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_1.jpg?w=680" alt="cooling_channel_1" width="765" height="520" /></a></p>
<p>In the background you can see the reactors (with the new sarcophagus under construction), and on the right side you can see the cooling channel for the reactors.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_2.jpg"><img class="wp-image-340 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_2.jpg?w=680" alt="cooling_channel_2" width="762" height="518" /></a></p>
<p>The Soviets planned to expand the power plant with two more reactors, but the construction was abandoned after the accident. This one was abandoned at 95% completion.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_3.jpg"><img class="wp-image-341 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/cooling_channel_3.jpg?w=680" alt="cooling_channel_3" width="762" height="579" /></a></p>
<p>What was once the cooling channel is now home to a huge catfish population. Since they are not hunted (their flesh is tainted with radioactive isotopes, so you should not eat it), they sometimes grow to a very large size, often longer than a meter.</p>
<p><strong>The reactors and the Red Forest area.</strong> As I mentioned, the Chernobyl Power Plant had 4 reactors, and the accident happened in the 4th one. It is still unclear what caused the explosion, presumably human error. After the explosion, nuclear particles were sprayed into the atmosphere, eventually scattering throughout Europe and contaminating what they touched. The radioactive dust covered the top layer of the soil in a large area around the nuclear reactor, which had to be removed, bagged and buried. Now the contamination is mostly removed, and the radiation inside the Zone is approximately 10 times the cosmic radiation. (Which we receive all the time.) Although there are some radiation hotspots, they are very small. (At least, the ones we saw.) There is still a large amount of radioactive material inside the reactor, which has been sealed with a concrete sarcophagus. It is now completely safe to approach the reactor, and many people do so every day, for example the workers building the new sarcophagus.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_1.jpg"><img class="wp-image-344 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_1.jpg?w=680" alt="reactor_1" width="773" height="525" /></a> Reactor No. 4, where the accident happened. It is clearly visible that the old concrete sarcophagus is crumbling, which is a problem, since it is the only thing between the huge amount of radioactive material and the outside world. But, as the cranes can tell, the new one is on its way. Construction started in 2007 and it will be ready in 2017.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/rector_4.jpg"><img class="wp-image-347 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/rector_4.jpg?w=680" alt="rector_4" width="773" height="525" /></a></p>
<p>This large dome is the new sarcophagus. It has been built upon railroad tracks and has a nuclear power plant-shaped hole in the front. After the construction is finished, it will be slid over the reactor.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_2.jpg"><img class="wp-image-345 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_2.jpg?w=680" alt="reactor_2" width="776" height="527" /></a></p>
<p>The Mathematician (that is, me) and the Reactor.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_3.jpg"><img class="wp-image-346 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/reactor_3.jpg?w=680" alt="reactor_3" width="770" height="523" /></a></p>
<p>The Biologist and the Reactor. His t-shirt is an advertisement for the Hungarian home brewery <a href="https://www.facebook.com/MonyoBrewingCo">MONYO Brewing Co.</a> They produce a beer named Fukushima Heavy Water to raise awareness of environmental pollution. (The beer is also green-colored and tastes very good.) My friend contacted the brewery a few weeks before the trip with the idea, and they sent us a few bottles of beer and a t-shirt to wear in front of the reactor. Thanks for the beer again!</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/red_forest_1.jpg"><img class="wp-image-350 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/red_forest_1.jpg?w=680" alt="red_forest_1" width="770" height="517" /></a></p>
<p>The Pripyat town sign is located behind the reactor, in front of the Red Forest. When the explosion happened, the wind was blowing in the direction of Pripyat and the nearby forest. The forest received a huge dose of contamination and the dying trees turned red, hence the name Red Forest.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/red_forest_2.jpg"><img class="wp-image-351 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/red_forest_2.jpg?w=680" alt="red_forest_2" width="773" height="525" /></a></p>
<p>The Red Forest is still off-limits. Although most of the contamination has been cleared up and washed away, the radiation is still much higher than usual.</p>
<p><strong>The ghost city Pripyat.</strong> Pripyat is located northwest of the reactor. The city was a Soviet propaganda city; among its population of 50,000 people were the operators of the power plant, the engineers, and in general Soviet intellectuals. Now (aside from the tourists) it is empty and a forest has grown in its place. It is like a glimpse of the earth after human civilization. From now on, I will (mostly) let the pictures do the talking.</p>
<p>First we visited a coffee shop near the Pripyat river.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_1.jpg"><img class="wp-image-353 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_1.jpg?w=680" alt="pripyat_1" width="771" height="524" /></a> An abandoned coffee shop is not that interesting in itself, but this one had some very beautiful mosaic glass inside, partly destroyed, which made it even more beautiful. The second building we visited was a movie theatre called Prometheus.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_2.jpg"><img class="wp-image-354 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_2.jpg?w=680" alt="pripyat_2" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_3.jpg"><img class="wp-image-355 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_3.jpg?w=680" alt="pripyat_3" width="770" height="523" /></a></p>
<p>This is the abandoned screening room. The picture is slightly deceptive, since the room was pitch black; we couldn’t even see each other, only the two exits. The picture was taken with a long exposure, making the torn screen and the few seats remaining there visible.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_4.jpg"><img class="wp-image-356 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_4.jpg?w=680" alt="pripyat_4" width="764" height="519" /></a></p>
<p>A lonely piano. It can be found in a building right next to the movie theater.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_5.jpg"><img class="wp-image-357 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_5.jpg?w=200" alt="pripyat_5" width="481" height="708" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_6.jpg"><img class="wp-image-358 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_6.jpg?w=680" alt="pripyat_6" width="760" height="577" /></a></p>
<p>The ghosts of Pripyat. There is quite a lot of graffiti in the city, some of it very haunting.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_7.jpg"><img class="wp-image-359 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_7.jpg?w=680" alt="pripyat_7" width="762" height="518" /></a></p>
<p>An abandoned supermarket on the main street, which, of course, was called Lenin Avenue. A cultural center also stood on this street. We went in from the back, through a propaganda room.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_8.jpg"><img class="wp-image-360 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_8.jpg?w=680" alt="pripyat_8" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_9.jpg"><img class="wp-image-361 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_9.jpg?w=680" alt="pripyat_9" width="774" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_10.jpg"><img class="wp-image-362 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_10.jpg?w=680" alt="pripyat_10" width="775" height="521" /></a></p>
<p>“Learn, learn, learn!” V. I. Lenin.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_11.jpg"><img class="wp-image-363 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_11.jpg?w=680" alt="pripyat_11" width="773" height="521" /></a></p>
<p>The largest theatre of the city was located in this building.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_12.jpg"><img class="wp-image-364 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_12.jpg?w=680" alt="pripyat_12" width="762" height="518" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_13.jpg"><img class="wp-image-365 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_13.jpg?w=680" alt="pripyat_13" width="762" height="518" /></a></p>
<p>I found this old and torn mathematics book on the floor.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_14.jpg"><img class="wp-image-366 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_14.jpg?w=680" alt="pripyat_14" width="762" height="518" /></a></p>
<p>This is another auditorium, but this one is located in the cultural center.</p>
<p>The next room we visited was the iconic gymnasium. If you saw some pictures taken in Pripyat, chances are you have seen this view.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_15.jpg"><img class="wp-image-367 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_15.jpg?w=680" alt="pripyat_15" width="779" height="519" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_16.jpg"><img class="wp-image-368 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_16.jpg?w=680" alt="pripyat_16" width="780" height="524" /></a></p>
<p>The next picture is one of my favourites. Look how life found its way through the wooden floor!</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_17.jpg"><img class="wp-image-369 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_17.jpg?w=680" alt="pripyat_17" width="768" height="522" /></a></p>
<p>The amusement park with the iconic ferris wheel is located nearby. This place was never used; nobody ever rode the ferris wheel. The opening was scheduled for a few days after the accident.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_18.jpg"><img class="wp-image-370 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_18.jpg?w=680" alt="pripyat_18" width="777" height="522" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_20.jpg"><img class="wp-image-372 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_20.jpg?w=680" alt="pripyat_20" width="773" height="525" /></a> <a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_21.jpg"><img class="wp-image-373 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_21.jpg?w=680" alt="pripyat_21" width="768" height="1130" /></a></p>
<p>After the amusement park we visited another kindergarten, one of the 16 kindergartens in the city. The average age in Pripyat was around 26 years at the time, because of the many young families living here.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_22.jpg"><img class="wp-image-374 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_22.jpg?w=680" alt="pripyat_22" width="773" height="525" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_23.jpg"><img class="wp-image-375 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_23.jpg?w=680" alt="pripyat_23" width="775" height="522" /></a></p>
<p>One of the creepier rooms can be found here. Someone created a scene with dolls sitting around in a circle, which gives off a strange atmosphere, as if the city had started to live its own life after the evacuation. Which is, in a sense, true: life has found its way in, but in the shape of a forest.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_24.jpg"><img class="wp-image-376 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_24.jpg?w=680" alt="pripyat_24" width="770" height="585" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_25.jpg"><img class="wp-image-377 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_25.jpg?w=680" alt="pripyat_25" width="770" height="518" /></a></p>
<p>Walking around the streets is quite spectacular.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_26.jpg"><img class="wp-image-378 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_26.jpg?w=680" alt="pripyat_26" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_27.jpg"><img class="wp-image-379 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_27.jpg?w=680" alt="pripyat_27" width="776" height="1158" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_28.jpg"><img class="wp-image-380 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_28.jpg?w=680" alt="pripyat_28" width="765" height="520" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_29.jpg"><img class="wp-image-381 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_29.jpg?w=680" alt="pripyat_29" width="775" height="521" /></a></p>
<p>We also had the chance to visit a school building.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_30.jpg"><img class="wp-image-382 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_30.jpg?w=680" alt="pripyat_30" width="771" height="524" /></a></p>
<p>In one of the classrooms we found hundreds of gas masks lying on the floor. I am not exactly sure why these things were there, but they made sure that this room was also one of the creepier ones.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_31.jpg"><img class="wp-image-383 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_31.jpg?w=680" alt="pripyat_31" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_32.jpg"><img class="wp-image-384 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_32.jpg?w=680" alt="pripyat_32" width="764" height="580" /></a></p>
<p>Another math book, another gas mask.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_33.jpg"><img class="wp-image-385 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_33.jpg?w=680" alt="pripyat_33" width="771" height="586" /></a> <a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_35.jpg"><img class="wp-image-387 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_35.jpg?w=680" alt="pripyat_35" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_36.jpg"><img class="wp-image-388 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_36.jpg?w=680" alt="pripyat_36" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_37.jpg"><img class="wp-image-389 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_37.jpg?w=680" alt="pripyat_37" width="770" height="523" /></a> <a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_38.jpg"><img class="wp-image-390 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_38.jpg?w=680" alt="pripyat_38" width="771" height="524" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_39.jpg"><img class="wp-image-391 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_39.jpg?w=680" alt="pripyat_39" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_40.jpg"><img class="wp-image-392 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_40.jpg?w=680" alt="pripyat_40" width="773" height="520" /></a></p>
<p>The Soviet Patriot newspaper. It is dated 23 April 1986, 3 days before the accident.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_41.jpg"><img class="wp-image-393 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_41.jpg?w=680" alt="pripyat_41" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_42.jpg"><img class="wp-image-394 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_42.jpg?w=680" alt="pripyat_42" width="765" height="520" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_43.jpg"><img class="wp-image-395 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_43.jpg?w=680" alt="pripyat_43" width="768" height="522" /></a></p>
<p>The swimming pool is another iconic room. The liquidators (the personnel responsible for dealing with the radioactive debris after the accident) kept it operational up until 1996, but now it is in ruins.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_44.jpg"><img class="wp-image-396 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_44.jpg?w=680" alt="pripyat_44" width="768" height="518" /></a></p>
<p>Look at this view! It must have been nice to have a swim here.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_45.jpg"><img class="wp-image-397 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_45.jpg?w=680" alt="pripyat_45" width="770" height="523" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_46.jpg"><img class="wp-image-398 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_46.jpg?w=680" alt="pripyat_46" width="775" height="521" /></a></p>
<p>After visiting all these buildings, we climbed to the top of a 16-story building, one of the highest in the city.</p>
<p><strong>At the top of Pripyat.</strong> “Look! A city has grown inside a forest.”</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_48.jpg"><img class="wp-image-400 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_48.jpg?w=680" alt="pripyat_48" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_49.jpg"><img class="wp-image-401 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_49.jpg?w=680" alt="pripyat_49" width="765" height="520" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_50.jpg"><img class="wp-image-402 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_50.jpg?w=680" alt="pripyat_50" width="770" height="523" /></a> <a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_51.jpg"><img class="wp-image-403 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_51.jpg?w=680" alt="pripyat_51" width="774" height="526" /></a></p>
<p>In the next picture you can see the DUGA-3 radar in the background. It is that huge.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_52.jpg"><img class="wp-image-404 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_52.jpg?w=680" alt="pripyat_52" width="764" height="519" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_53.jpg"><img class="wp-image-405 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_53.jpg?w=680" alt="pripyat_53" width="767" height="521" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_54.jpg"><img class="wp-image-406 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_54.jpg?w=680" alt="pripyat_54" width="770" height="523" /></a></p>
<p>This rooftop was the last stop of our trip. Before we left, we took a picture of ourselves with the reactor in the background.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_55.jpg"><img class="wp-image-407 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/pripyat_55.jpg?w=680" alt="pripyat_55" width="767" height="521" /></a></p>
<p>Unfortunately, we did not find the room which grants wishes.</p>
<p><strong>National Chernobyl Museum.</strong> The next day we visited the National Chernobyl Museum in Kiev, which showed the human side of the accident. Up until that point, the whole accident had been more or less an abstract thing for me. I knew it happened, but it did not affect me or anyone else I know directly, and I had not seen the suffering. This museum is mostly dedicated to the victims. To the liquidators, who were sent in right after the explosion to clear the debris without decent protective gear, sentencing them to death. (And in the photos, which they took right before they were sent to the reactor’s roof, you can see most of them smiling.) To the children, who got sick and eventually died from the radioactive dust sprayed into the atmosphere.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/museum_1.jpg"><img class="wp-image-415 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/museum_1.jpg?w=680" alt="museum_1" width="761" height="507" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/museum_2.jpg"><img class="wp-image-416 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/museum_2.jpg?w=680" alt="museum_2" width="758" height="505" /></a></p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/10/museum_3.jpg"><img class="wp-image-417 aligncenter" src="https://nonemptyspaces.files.wordpress.com/2015/10/museum_3.jpg?w=680" alt="museum_3" width="758" height="505" /></a></p>
<p>The museum is a shrine for the victims. In the largest room, which is dedicated to the children, there is an altar with the pictures of thousands of children who died as a result of the disaster. Young people were especially at risk, because the explosion released various radioactive isotopes, which embed themselves in various tissues of the body and then radiate directly from there. For example, <a href="https://en.wikipedia.org/wiki/Iodine-131">iodine-131</a> can embed itself in the thyroid gland, eventually causing thyroid cancer.</p>
<p>As a closing thought, here is my opinion on nuclear energy. Most people think that the Zone, the Reactor and the ghost city (along with the Fukushima prefecture in Japan) symbolize the dangers of nuclear energy. I do not think so. For me, they symbolize human negligence and stupidity in two ways. For one, the explosion in Reactor No. 4 was (as far as we know) caused not just by human error but by flat-out carelessness, which could have been prevented easily. Second, when a tragedy like this happens (it has happened twice in the history of mankind), blaming nuclear energy is insane. Decommissioning nuclear plants all over the world after an accident is like putting a halt to air traffic after a captain drives his plane into a mountain. Nuclear power is currently one of the cheapest and cleanest nonrenewable sources of energy on the planet, which is unfortunately tainted with stupidity.</p><p><strong>Know your brain!</strong> (2015-07-09, https://cosmic-cortex.github.io/know-your-brain)</p><p>Aside from mathematics, I have other interests. Ever since I started to use my brain for mathematics, I have been interested in how thinking works. Since problem solving is my day job, I have plenty of opportunities to observe my cognitive processes, and I have a passion for optimizing how my brain works. Thinking about thinking is a very interdisciplinary field, ranging from neuroscience to psychology, and I find a lot of joy in reading about it and trying to adapt it in practice.
Why not write about it then? I decided to write about my never-ending quest to explore and enhance the workings of my mind. I am no brain scientist, merely a brain user (most of the time, at least), so these posts will serve as a “How to use your brain 101” rather than an exquisite scientific investigation.</p>
<!--more-->
<p>There is a wide range of physiological and psychological tools which can help to enhance your cognitive functions. This time I am going to talk about neurotransmitters and how to exploit the hell out of them.</p>
<p><strong>1. Neurons and their signals.</strong> Ever thought about what a thought is? How it forms and how it is stored in your brain? From a natural scientific viewpoint, thoughts are nothing but chemical processes and electrical impulses in your brain, created and channeled by nerve cells, also known as <a href="https://en.wikipedia.org/wiki/Neuron">neurons</a>. Neurons can create and transfer electrical and chemical signals, which represent information in your brain. They look like this.</p>
<p><a href="https://nonemptyspaces.files.wordpress.com/2015/07/neuron-296581_6401.png"><img class=" wp-image-263" src="https://nonemptyspaces.files.wordpress.com/2015/07/neuron-296581_6401.png?w=300" alt="neuron-296581_640" width="629" height="298" /></a></p>
<p>A neuron.</p>
<p>The tentacles on the right are called axons; they connect the cell to other nerve cells. The smaller tentacles on the left are called dendrites. These are the sites where other neurons connect. An average adult human brain contains <script type="math/tex">86 \times 10^{9}</script> neurons. This is a lot. You can find <a href="https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons">a list of animals by number of neurons</a> at Wikipedia. The number of connections is even larger, therefore your neural network can store and process a huge amount of information. The neural network of a mammal looks something like this.</p>
<p><img class="" src="http://i.kinja-img.com/gawker-media/image/upload/spmhtd2u9mubb9tnlobc.jpg" alt="" width="634" height="287" /></p>
<p>The neural network of a mammal. Source: http://gizmodo.com/this-is-the-first-detailed-map-of-a-mammals-neural-net-1557461799</p>
<p>Basically, the rule of thumb is</p>
<p style="text-align:center;"><script type="math/tex">\text{larger network} + \text{more connections} = \text{more smart}</script>.</p>
<p style="text-align:left;">In the following, I am going to talk about the latter component, namely the connections and the quality of these connections.</p>
<p><strong>2. Neurotransmitters.</strong> Electrical and chemical signals travel between neurons. This process is called <a href="https://en.wikipedia.org/wiki/Neurotransmission">neurotransmission</a>. An axon is not directly connected to a dendrite of another neuron; there is a small gap between them called the synapse, where various chemical processes happen. The signal is transferred across the synapse with the aid of <a href="https://en.wikipedia.org/wiki/Neurotransmitter">neurotransmitters</a>. The whole process looks like this.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/3/30/SynapseSchematic_en.svg" alt="" width="604" height="321" /></p>
<p>Neurotransmission. Source: Wikipedia</p>
<p>In order for things to go smoothly, you should have a decent supply of neurotransmitters and an abundant number of receptors.</p>
<p>I am going to talk about three neurotransmitters: <a href="https://en.wikipedia.org/wiki/Dopamine">dopamine</a>, <a href="https://en.wikipedia.org/wiki/Serotonin">serotonin</a> and <a href="https://en.wikipedia.org/wiki/Acetylcholine">acetylcholine</a>.</p>
<p><strong>2.1. Dopamine.</strong> Ever felt unmotivated, depressed, lacking sexual drive, or having trouble focusing? These are some of the symptoms of dopamine deficiency. A note of caution, however, before I proceed: if you lack dopamine you have these symptoms, but the converse is not necessarily true! Depression or lack of motivation and focus can be caused by several other problems, even psychological ones.</p>
<p>So there is this tiny molecule, which looks like this.</p>
<p><img class="" src="https://upload.wikimedia.org/wikipedia/commons/6/6c/Dopamine2.svg" alt="" width="394" height="182" /></p>
<p>Skeletal formula of dopamine. Source: Wikipedia</p>
<p>Dopamine is connected with the so-called <a href="https://en.wikipedia.org/wiki/Mesolimbic_pathway">reward pathway</a>. It is released when you experience something that gives you joy, for example having sex, eating a bar of candy, proving a mathematical theorem, etc. (Ok, I admit, the latter may not work for everyone.) For me, occasionally studying something completely different from mathematics gives me a huge dopamine boost. Right now, collecting data and researching this topic has a very uplifting effect on me, which enhances my dopamine level. In fact, one of the very reasons I started this blog is to hack my dopamine levels.</p>
<p>Although releasing a lot of dopamine is something everyone should yearn for, there is a catch. Having more dopamine gives you much joy in itself through some of its beneficial effects, so shouldn’t having some of it cause even more to be released? No. Simply put, the more expected the reward is, the less activation can be seen in the dopamine neurons. This phenomenon is called the <em><a href="http://www.scholarpedia.org/article/Reward_signals#Reward_prediction_error">reward prediction error</a></em>. Adopting a practical viewpoint, my point is this: stuffing candy into yourself, shopping extensively or surfing online the whole day does not give you much reward, and it certainly does not enhance your life. The good type of joy can be found in the small things. A really interesting article you find. A kiss from your darling in the morning. Understanding a theorem for the first time. Discovering a small pattern in the mathematical objects you are working with at the moment. The list is long, and I hope you get my point. Setting small goals and completing them is also a good way to enhance motivation through the workings of your neurochemistry. If, for example, you want to complete a marathon, you should set milestones which inevitably lead to your goal. Focus on the next few hundred yards instead of only thinking about the whole 26 miles.</p>
<p>The bottom line is, the beneficial effects of dopamine include enhanced cognition, improved memory and a boost in motivation. Dopamine deficiency can cause depression, lack of motivation, lack of focus and problems with motor skills. There are some medical conditions related to dopamine, for example <a href="https://en.wikipedia.org/wiki/Parkinson%27s_disease">Parkinson’s disease</a> or <a href="https://en.wikipedia.org/wiki/Attention_deficit_hyperactivity_disorder">attention deficit disorder</a>. The medications for these directly increase dopamine levels in the body.</p>
<p>The level of dopamine can also be increased by, for example, eating right, getting a lot of physical exercise, getting plenty of sleep and using the reward system of your brain cleverly. Using your reward system is kind of a balancing act, but hey, you have plenty of time to try!</p>
<p>You can find a ton of information about dopamine online. For further research, I recommend <a href="http://blog.idonethis.com/the-science-of-motivation-your-brain-on-dopamine/">this post</a> on <a href="http://blog.idonethis.com">http://blog.idonethis.com</a>.</p>
<p><strong>2.2. Serotonin.</strong> Surely you must know what it is like to have a nice talk with your friends. You are at the pub or at a get-together, talking about everything; someone suggests an interesting topic, you debate, listen to each other’s arguments, reach a conclusion, then sit back and have another beer. You feel accepted and cared for. The satisfaction you feel from connecting to each other and sharing your thoughts is related to serotonin being released.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/c/c4/Serotonin-2D-skeletal.svg" alt="" width="327" height="230" /></p>
<p>Skeletal formula of serotonin. Source: Wikipedia</p>
<p>This little molecule is connected with social status. The bottom line is clear. If you have good friends you can count on, are appreciated in your workplace, and generally feel that you are a valuable member of society, you are happy. If your personal life is messy, you feel dismissed and looked down upon at work, or you have no social life at all, you are not happy. This feeling of happiness or unhappiness is caused by the level of serotonin in your body. Lack of serotonin can cause depression and even suicidal behavior.</p>
<p>Serotonin also affects risk-taking behavior, which is important in many aspects of life, for example in being a scientist. How are you going to tackle the next hard question nature (or in my case, mathematics) throws at you if you think you are not enough, if you feel like a failure, if you think you are underappreciated by your colleagues, or if you feel unsuccessful? You probably won’t.</p>
<p>It has been shown <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3055502/">in a recent research paper</a> that low levels of serotonin in humans cause depression and influence many decision-making processes; for example, depletion of the serotonin <a href="https://en.wikipedia.org/wiki/Precursor_%28chemistry%29">precursor</a> <a href="https://en.wikipedia.org/wiki/Tryptophan">L-tryptophan</a> causes a decrease in cooperative behavior in the game of the <a href="https://en.wikipedia.org/wiki/Prisoner%27s_dilemma">prisoner’s dilemma</a>.</p>
<p>What can you do to have a decent level of serotonin? Well, for example, start by making friends with people whom you look up to and who feel the same about you. Then of course, there are also the methods of sleeping a lot and eating right. You can research nutrition advice online regarding your serotonin levels.</p>
<p>For further reading, I recommend two blogposts. One is a post at <a href="http://neuroecology.wordpress.com">neuroecology</a> titled <a href="https://neuroecology.wordpress.com/2012/05/18/how-social-status-affects-your-brain/">How social status affects your brain</a>, the other can be found at <a href="http://brainposts.blogspot.com">Brain Posts</a> titled <a href="http://brainposts.blogspot.com/2011/09/serotonin-social-interaction-and-making.html">Serotonin, Social Interaction and Making Decisions</a>.</p>
<p><strong>2.3. Acetylcholine.</strong> Acetylcholine is probably the least famous of these three transmitters, but it is nonetheless very important. To emphasize this, a fun fact: the nerve gas <a href="https://en.wikipedia.org/wiki/Sarin">sarin</a> acts by messing with your acetylcholine levels. Muscle control is impaired as a result, which ultimately disables your breathing functions, at which point you suffocate.</p>
<figure>
<img src="https://upload.wikimedia.org/wikipedia/commons/2/21/Acetylcholine.svg" alt="Skeletal formula of acetylcholine" width="383" height="154" />
<figcaption> Skeletal formula of acetylcholine. Source: Wikipedia </figcaption>
</figure>
<p>Aside from muscle control, having a decent supply of acetylcholine enhances your responsiveness to visual, auditory and <a href="https://en.wikipedia.org/wiki/Somatosensory_system">somatosensory</a> stimuli. (The somatosensory system is just a fancy name for the sense of touch.)</p>
<p><a href="https://en.wikipedia.org/wiki/Alzheimer%27s_disease">Alzheimer’s disease</a> is associated with acetylcholine. It seems that it is caused by problems in the density of synaptic receptors. (That is, you may have enough neurotransmitters, but you cannot process them decently.)</p>
<p>Optimizing your acetylcholine level can be achieved by correct nutrition, exercise and a good amount of sleep. If you have been paying attention, you may have noticed that these three methods appear in every “how to boost my levels of…” section. These things should be taken seriously if you want a productive life.</p>
<p>The bottom line is, this knowledge should be used to your advantage. It was an illuminating experience for me when I first learned about neurotransmitters and their effects. Realizing that some of my problems (in my case, lack of attention) can be caused by chemicals and not some tragic character flaw was kind of uplifting. Paying constant attention to your neurotransmitters can be hard at first, but once you develop healthy mental and physical habits, these things take care of themselves.</p>
<p><strong>Universality and orthogonal polynomials</strong> (Tivadar Danka, 2015-06-04, <a href="https://cosmic-cortex.github.io/universality-and-orthogonal-polynomials">https://cosmic-cortex.github.io/universality-and-orthogonal-polynomials</a>)</p>
<p>There is a phenomenon in physics and mathematics which has been captivating the minds of scientists for a long time, and it is called simply “universality”. In brief, a phenomenon exhibits universality if, however wild and diverse it may be on the microscopic scale, a clear pattern emerges once one looks at it from the macroscopic scale. I first encountered this phenomenon from a technical point of view while studying the asymptotic behavior of orthogonal polynomials, and it got me too.
In this post we will see some examples of systems exhibiting universal behavior, and we shall see how orthogonal polynomials enter the picture.</p>
<!--more-->
<p>First let’s see some examples. I’ve drawn these from [Deift] and [Tao]. For more details and examples, check out these awesome articles.</p>
<p><strong>Examples from physics and mathematics</strong></p>
<p><strong>1. Nontrivial zeros of the Riemann zeta function. </strong>The <a href="http://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> (<script type="math/tex">\zeta(z)</script> from now on) has been one of the most interesting objects of mathematics for almost two hundred years. The <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a> says that if <script type="math/tex">z_0</script> is a zero of <script type="math/tex">\zeta(z)</script>, then either <script type="math/tex">z_0</script> is an even negative integer or <script type="math/tex">\Re(z_0) = \frac{1}{2}</script>. As most of you know, this hypothesis is one of the <a href="http://en.wikipedia.org/wiki/Millennium_Prize_Problems">Millennium Prize Problems</a> stated by the Clay Institute, and many fine mathematicians have attempted to solve the conjecture. Although it is an interesting topic which deserves an entire post, I won’t discuss it in detail. There are many books and articles about it; I recommend <a href="http://www.math.jhu.edu/~wright/RH2.pdf">this one</a>. (I recommend it even if you have solved the Riemann conjecture and kept it a secret.)
For now, let’s just study the zeros on the line <script type="math/tex">\frac{1}{2} + iy</script>. (This line is called the critical line.)</p>
<figure>
<img src="https://nonemptyspaces.files.wordpress.com/2015/05/riemann_zeta_zeros.jpg" alt="riemann_zeta_zeros" />
<figcaption> The plot of the absolute value of the Riemann zeta function in the critical line </figcaption>
</figure>
<p>What we can observe here is that the zeros of <script type="math/tex">\zeta(z)</script> on the critical line are neither too far from nor too close to each other. No clustering and no large gaps can be seen. As it turns out, in some sense this is true in general. The number theorist Hugh Montgomery discovered that if <script type="math/tex">0 \leq z_1 \leq z_2 \leq \dots</script> denote the imaginary parts of the zeros on the critical line, then after scaling them as</p>
<p style="text-align:center;">$$ \widehat{z_j} := \frac{z_j \log z_j}{2\pi} $$,</p>
<p>we obtain that</p>
<p style="text-align:center;">$$ R(a,b) := \lim_{n \to \infty} \frac{1}{n} |\{ (k_1, k_2): k_1 \neq k_2, 1 \leq k_1, k_2 \leq n, \widehat{z}_{k_1} - \widehat{z}_{k_2} \in (a,b) \}| \\
= \int_{a}^{b} 1 - \big(\frac{\sin \pi x}{\pi x} \big)^2 dx. $$</p>
<p>In other words, the distance of zeros follows a specific pattern. This formula, as we shall see later, will appear very unexpectedly in a different area of mathematics.</p>
<p><strong>2. Scattering theory. </strong>Suppose that a nucleus is being shot with neutrons. Depending on the energy level of the neutron, it can bounce back, bounce off or go straight through the nucleus. If we count how many neutrons pass through the nucleus and plot it against the energy level, we obtain this.</p>
<figure>
<img src="https://terrytao.files.wordpress.com/2010/09/gadolinium-156.png" />
<figcaption> Scattering plot for the Gadolinium-156 nucleus. (The image is taken from <a href="https://terrytao.wordpress.com/2010/09/14/a-second-draft-of-a-non-technical-article-on-universality/">this blog post of Terence Tao</a>; the original can be found in the article C. Coceva and M. Stefanon, Experimental aspects of the statistical theory of nuclear spectra fluctuations, Nuclear Physics A, 1979, vol. 315.)</figcaption>
</figure>
<p>Looking at the energy levels where the count peaks - the so-called scattering resonances - we can observe the same behavior as in the zeros of the Riemann zeta function: on average, they are again neither too far from nor too close to each other. This phenomenon was successfully modelled by <a href="http://en.wikipedia.org/wiki/Eugene_Wigner">Eugene Wigner</a> using random matrix theory. We will see later that the model proposed by Wigner can be used to describe a large number of phenomena in nature.</p>
<p><strong>3. Buses in Cuernavaca, Mexico. </strong>In the country of Mexico, there is a town called <a href="http://en.wikipedia.org/wiki/Cuernavaca">Cuernavaca</a>. For reasons unknown to me, there is no organized public transport system there. If you own a bus, you can participate in public transportation by picking up passengers along routes; they buy bus tickets directly from you. The system is basically governed by a set of “rules”:</p>
<ul>
<li>the buses are owned by the drivers,</li>
<li>who have to maximize their profit by picking up as many passengers as possible,</li>
<li>and there are so-called spotters, who inform the drivers when the previous bus left the next bus stop.</li>
</ul>
<p>If the previous bus picked up passengers recently, the driver must slow down to maximize the number of passengers he collects. On the other hand, if the previous bus left long ago, he should speed up to avoid being overtaken.</p>
<p>Can you guess what we will find if we study the time elapsed between two subsequent buses at a bus stop? You are probably right: the waiting time between buses is neither too long nor too short. The physicists Milan Krbalek and Petr Seba noticed this behavior when attending a conference in the city. <a href="http://arxiv.org/abs/nlin/0001015">In their paper</a> they did a detailed analysis of this and modelled it with random matrices. They calculated the distribution of the time between subsequent buses and obtained the following.</p>
<figure>
<img src="https://terrytao.files.wordpress.com/2010/09/busgraph.png" />
<figcaption> Source: <a href="https://terrytao.wordpress.com/2010/09/14/a-second-draft-of-a-non-technical-article-on-universality/">this blog post of Terence Tao</a>. Original: M. Krbalek and P. Seba, The statistical properties of the city transport in Cuernavaca (Mexico) and Random matrix ensembles.</figcaption>
</figure>
<p>The histogram shows the empirical data and the continuous curve is the prediction given by their model. Can you guess what kind of model they proposed? Yes, you are right: random matrices.</p>
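<p>The level repulsion behind these histograms is easy to reproduce numerically. Below is a minimal sketch (assuming NumPy is available; the Gaussian Hermitian matrix convention is my own illustrative choice, not taken from the papers above) comparing nearest-neighbour eigenvalue spacings with the spacings of independent uniform points:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

def bulk_spacings(n, trials):
    """Nearest-neighbour eigenvalue spacings from the bulk of random
    Hermitian (GUE-like) matrices, normalized to have mean 1."""
    spacings = []
    for _ in range(trials):
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        M = (A + A.conj().T) / 2      # Hermitian, so the eigenvalues are real
        eig = np.linalg.eigvalsh(M)   # sorted in ascending order
        spacings.append(np.diff(eig[n // 4 : 3 * n // 4]))  # middle of the spectrum
    s = np.concatenate(spacings)
    return s / s.mean()

s = bulk_spacings(n=200, trials=40)
frac_tiny = np.mean(s < 0.1)          # very rare: eigenvalues repel each other

t = np.diff(np.sort(rng.uniform(size=4000)))
t = t / t.mean()
frac_tiny_poisson = np.mean(t < 0.1)  # common for independent points
```

<p>For independent points roughly one normalized spacing in ten falls below 0.1, while for the eigenvalues almost none do: neither too close nor too far, just like the resonances and the buses.</p>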
<p><strong>A common ground: random matrix theory</strong></p>
<p><strong>1. A crash-course in random matrix theory. </strong>In order to get a good grip on universality limits (or at least, on their significance), we have to talk about random matrix theory very briefly. For a deeper discussion, I recommend the book [Deift-Gioev]. There are many excellent books out there, but from my point of view (to be more precise, from the orthogonal polynomials point of view) this one is the most useful.</p>
<p>Let <script type="math/tex">H_n \subseteq \mathbb{C}^{n \times n}</script> denote the set of <script type="math/tex">n \times n</script> <a href="http://en.wikipedia.org/wiki/Hermitian_matrix">Hermitian matrices</a>. Select a matrix <script type="math/tex">\mathcal{M}</script> from <script type="math/tex">H_n</script> randomly such that the probability of <script type="math/tex">\mathcal{M}</script> being chosen from <script type="math/tex">E \subseteq H_n</script> is</p>
<p style="text-align:center;">$$ P(\mathcal{M} \in E) = \int_E \lambda e^{-\text{tr} V(M)} dM, $$</p>
<p>where<br />
<script type="math/tex">dM</script> is the product measure on the algebraic independent entries of <script type="math/tex">\mathcal{M}</script>,<br />
<script type="math/tex">\lambda</script> is a normalizing constant (so that <script type="math/tex">P(\mathcal{M} \in H_n) = 1)</script>,<br />
<script type="math/tex">V: \mathbb{R} \to \mathbb{R}</script> is a function which tends to <script type="math/tex">\infty</script> at <script type="math/tex">\pm \infty</script> sufficiently fast (so that the weight <script type="math/tex">e^{-V(x)}</script> decays and the measure can be normalized).</p>
<p>The expression <script type="math/tex">V(\mathcal{M})</script> is interpreted in the sense of <a href="http://en.wikipedia.org/wiki/Functional_calculus">functional calculus</a>. (If this bothers you, feel free to assume that <script type="math/tex">V(x)</script> is a polynomial <script type="math/tex">\sum_{k=0}^{m} a_k x^k</script>. This way <script type="math/tex">V(\mathcal{M})</script> can be interpreted as <script type="math/tex">V(\mathcal{M}) = \sum_{k=0}^{m} a_k \mathcal{M}^k</script>.)</p>
<p><strong>Example. </strong>Let <script type="math/tex">V(x) = x^2</script>, then the probability measure</p>
<p style="text-align:center;">$$ P(\mathcal{M} \in E) = \int_E \lambda e^{-\text{tr}(M^2)} dM $$</p>
<p>is called a Gaussian unitary ensemble.</p>
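<p>To actually sample from this ensemble, note that for a Hermitian matrix <script type="math/tex">\mathrm{tr}(M^2) = \sum_i M_{ii}^2 + 2 \sum_{i < j} |M_{ij}|^2</script>, so the density factorizes over the algebraically independent entries: they are simply independent Gaussians. A minimal sketch of this observation (assuming NumPy is available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# exp(-tr(M^2)) factorizes: M_ii ~ N(0, 1/2), Re M_ij, Im M_ij ~ N(0, 1/4)
diag = rng.normal(scale=np.sqrt(0.5), size=n)
up = rng.normal(scale=0.5, size=(n, n)) + 1j * rng.normal(scale=0.5, size=(n, n))
M = np.diag(diag).astype(complex)
iu = np.triu_indices(n, k=1)
M[iu] = up[iu]                     # upper triangle
M[(iu[1], iu[0])] = up[iu].conj()  # lower triangle: the conjugate transpose

eig = np.linalg.eigvalsh(M)        # real, since M is Hermitian
# with this normalization the spectrum fills roughly [-sqrt(2n), sqrt(2n)]
```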
<p>We only care about the eigenvalues of a random matrix. Why? Because they can be used to model various phenomena. For one, all three examples which we’ve seen can be modelled with the eigenvalues of a random matrix. (Do not forget that since we are looking at Hermitian matrices, all the eigenvalues are real.) We denote the eigenvalues of <script type="math/tex">\mathcal{M}</script> with <script type="math/tex">\lambda_1(\mathcal{M}) \leq \dots \leq \lambda_n(\mathcal{M})</script>. (If not necessary, the dependence on <script type="math/tex">\mathcal{M}</script> is omitted.)
For our purposes it is enough to look at the function <script type="math/tex">p(x_1, \dots, x_n)</script> for which</p>
<p style="text-align:center;">$$ P((\lambda_1(\mathcal{M}), \dots, \lambda_n(\mathcal{M})) \in E) = \int_E p(x_1, \dots, x_n) dx_1 \dots dx_n. $$</p>
<p>This is the joint probability density of the ordered <script type="math/tex">n</script>-tuple of eigenvalues. With it, we can define the <strong>k-point correlation function</strong> as</p>
<p style="text-align:center;">$$ R_k(x_1, \dots, x_k) = \frac{n!}{(n-k)!} \int \dots \int p(x_1, \dots, x_n) dx_{k+1} \dots dx_n. $$</p>
<p>Note that <script type="math/tex">R_k</script> depends on <script type="math/tex">n</script>. As it can be <a href="http://arxiv.org/abs/solv-int/9804004">found in an article</a> of Craig A. Tracy and Harold Widom,</p>
<p><em>“It is, loosely speaking, the probability density that <script type="math/tex">k</script> of the eigenvalues, irrespective of order, lie in the infinitesimal neighborhoods of <script type="math/tex">x_1, \dots, x_k.</script> (It is not a probability density in the strict sense since its total integral equals <script type="math/tex">n!/(n-k)!</script> rather than <script type="math/tex">1</script>.)”</em></p>
<p>The main question for us is this: how does <script type="math/tex">R_k</script> behave when <script type="math/tex">n \to \infty</script>? To provide an answer, we will deploy our favourite (or at least, my favourite) tools: orthogonal polynomials.</p>
<p><strong>2. Orthogonal polynomials <script type="math/tex">\heartsuit</script> random matrices. </strong>Recall that the orthogonal polynomials with respect to a Borel measure <script type="math/tex">\mu</script> are defined as the unique orthonormal system of polynomials <script type="math/tex">\{p_n(x,\mu) = p_n(x)\}_{n=0}^{\infty}</script> for which <script type="math/tex">p_n(x) = \gamma_n x^n + \dots, \gamma_n > 0</script>. (For a brief introduction, see the <a href="http://en.wikipedia.org/wiki/Orthogonal_polynomials">wikipedia page</a> or the <a href="https://nonemptyspaces.wordpress.com/2015/05/22/the-zeros-of-orthogonal-polynomials/">first post</a>.) Every polynomial <script type="math/tex">\Pi_n</script> of degree <script type="math/tex">n</script> can be written as a linear combination of orthogonal polynomials</p>
<p style="text-align:center;">$$ \Pi_n(x) = \sum_{k=0}^{n} a_k p_k(x). $$</p>
<p>If we introduce the so-called reproducing kernel (or kernel in short)</p>
<p style="text-align:center;">$$ K_n(x,y,\mu) = K_n(x,y) = \sum_{k=0}^{n} p_k(x)p_k(y), $$</p>
<p>the above linear combination can be written in the form</p>
<p style="text-align:center;">$$ \Pi_n(x) = \int K_n(x,y) \Pi_n (y) d\mu(y). $$</p>
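<p>This reproducing property, which gives the kernel its name, is easy to verify numerically. A minimal sketch (assuming NumPy is available; the measure here is the Lebesgue measure on <script type="math/tex">[-1,1]</script>, whose orthonormal polynomials are the normalized Legendre polynomials):</p>

```python
import numpy as np
from numpy.polynomial import legendre

n = 5
xs, ws = legendre.leggauss(50)   # Gauss-Legendre quadrature on [-1, 1]

def p(k, x):
    """Orthonormal Legendre polynomial wrt the Lebesgue measure on [-1, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2) * legendre.legval(x, c)

def K(x, y):
    """Reproducing kernel K_n(x, y) = sum_{k=0}^{n} p_k(x) p_k(y)."""
    return sum(p(k, x) * p(k, y) for k in range(n + 1))

Pi = lambda x: 3 * x**5 - x**2 + 0.5   # an arbitrary polynomial of degree n
x0 = 0.3

# integrating K_n(x0, y) Pi(y) dy over [-1, 1] recovers Pi(x0)
reproduced = np.sum(ws * K(x0, xs) * Pi(xs))
```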
<p>Now let <script type="math/tex">\mu</script> be a Borel measure on the real line, and for simplicity suppose that <script type="math/tex">d\mu(x) = w(x) dx</script> with <script type="math/tex">w(x) = e^{-V(x)}</script>, as used in the previous section about random matrices. It turns out that the k-point correlation function <script type="math/tex">R_k(x_1, \dots, x_k)</script> can be expressed in terms of the reproducing kernel as</p>
<p style="text-align:center;">$$ R_k(x_1, \dots, x_k) = \det (\sqrt{w(x_i) w(x_j)}K_n(x_i,x_j))_{i,j=1}^{k}. $$</p>
<p>If we introduce the normalized reproducing kernel <script type="math/tex">\widetilde{K_n}(x,y) = w(x)^{1/2}w(y)^{1/2} K_n(x,y)</script>, the above formula can be written as</p>
<p style="text-align:center;">$$ R_k(x_1, \dots, x_k) = \det (\widetilde{K_n}(x_i,x_j))_{i,j=1}^{k}. $$</p>
<p>Thus the correlation functions of random matrices can be expressed in terms of orthogonal polynomials, and this connection can be exploited heavily. In the following section we shall see a way to do this.</p>
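<p>For <script type="math/tex">k = 1</script> this formula can be checked by Monte Carlo: with the Gaussian weight <script type="math/tex">w(x) = e^{-x^2}</script> of the previous section, <script type="math/tex">R_1</script> is the mean eigenvalue density of the Gaussian unitary ensemble. A sketch, assuming NumPy is available (note that for an <script type="math/tex">n \times n</script> matrix the sum runs over the first <script type="math/tex">n</script> orthonormal polynomials <script type="math/tex">p_0, \dots, p_{n-1}</script>):</p>

```python
import numpy as np
from numpy.polynomial import hermite
from math import factorial, pi, sqrt

rng = np.random.default_rng(1)
n = 6   # matrix size

def p(k, x):
    """Hermite polynomial orthonormal with respect to w(x) = exp(-x^2)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    # physicists' Hermite: the integral of H_k(x)^2 exp(-x^2) is sqrt(pi) 2^k k!
    return hermite.hermval(x, c) / sqrt(sqrt(pi) * 2**k * factorial(k))

def R1(x):
    """1-point function: mean eigenvalue density of the n x n ensemble."""
    x = np.asarray(x, dtype=float)
    return sum(p(k, x) ** 2 for k in range(n)) * np.exp(-x**2)

# sanity check: R1 integrates to n (Gauss-Hermite quadrature is exact here)
xh, wh = hermite.hermgauss(20)
total = np.sum(wh * sum(p(k, xh) ** 2 for k in range(n)))

def sample_eigs():
    # sample from the density ~ exp(-tr M^2): independent Gaussian entries
    d = rng.normal(scale=np.sqrt(0.5), size=n)
    up = rng.normal(scale=0.5, size=(n, n)) + 1j * rng.normal(scale=0.5, size=(n, n))
    M = np.diag(d).astype(complex)
    iu = np.triu_indices(n, k=1)
    M[iu] = up[iu]
    M[(iu[1], iu[0])] = up[iu].conj()
    return np.linalg.eigvalsh(M)

# expected number of eigenvalues in [-1, 1] predicted by the kernel formula...
grid = np.linspace(-1, 1, 2001)
y = R1(grid)
predicted = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(grid)))

# ...versus a direct Monte Carlo estimate from sampled matrices
observed = float(np.mean([np.sum(np.abs(sample_eigs()) <= 1) for _ in range(3000)]))
```

<p>The two numbers agree up to sampling noise, which is the <script type="math/tex">k = 1</script> case of the determinantal formula in action.</p>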
<p><strong>Universality limits</strong></p>
<p><strong>1. A first look at universality limits.</strong> If <script type="math/tex">\mu</script> is a Borel measure on <script type="math/tex">\mathbb{R}</script> with finite moments, we say that <strong>universality holds at <script type="math/tex">x_0</script> </strong>if</p>
<p style="text-align:center;">$$ \lim_{n \to \infty} \frac{\widetilde{K_n}(x_0 + a/n, x_0 + b/n)}{\widetilde{K_n}(x_0, x_0)} = \frac{\sin \pi (b-a)}{\pi (b-a)} $$</p>
<p>for <script type="math/tex">a, b \in \mathbb{R}</script> uniformly in compact sets. These kinds of limits will be called universality limits from now on. The title “universal” seems justified, since the right hand side does not depend on the original measure. This limit condition may seem a bit random, but if we look more closely, we can find similar phenomena in other places. For example, the <a href="http://en.wikipedia.org/wiki/Central_limit_theorem">central limit theorem</a> says that if <script type="math/tex">X_1, X_2, \dots</script> are independent identically distributed random variables, then (under some additional conditions) we have</p>
<p style="text-align:center;">$$ \frac{\sum_{i=1}^{n} (X_i - E(X_1))}{\sqrt{n}} \to \mathcal{N}(0,\sigma^2) $$</p>
<p><a href="http://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_distribution">in distribution</a>, where <script type="math/tex">\mathcal{N}(0,\sigma^2)</script> denotes the <a href="http://en.wikipedia.org/wiki/Normal_distribution">normal distribution </a>with mean <script type="math/tex">0</script> and variance <script type="math/tex">\sigma^2</script>, moreover <script type="math/tex">E(X_1)</script> denotes the <a href="http://en.wikipedia.org/wiki/Expected_value">expected value</a> of <script type="math/tex">X_1</script>.
Another similar statement is the <a href="http://en.wikipedia.org/wiki/Law_of_large_numbers">law of large numbers</a>. If <script type="math/tex">X_1, X_2, \dots</script> are again independent identically distributed random variables, then</p>
<p style="text-align:center;">$$ \frac{\sum_{i=1}^{n} X_i}{n} \to E(X_1) $$</p>
<p>almost everywhere.
One common thing about the central limit theorem and the law of large numbers is that a scaling appears in both limits. Although it is not clear immediately, it is also the case for universality limits.</p>
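<p>Both scalings are easy to see in a quick simulation (a sketch assuming NumPy is available; the variables are uniform on <script type="math/tex">[0,1]</script>, so <script type="math/tex">E(X_1) = 1/2</script> and <script type="math/tex">\sigma^2 = 1/12</script>):</p>

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 1000, 2000
X = rng.uniform(size=(reps, n))   # i.i.d. uniform variables on [0, 1]

# law of large numbers: sum / n concentrates around E(X_1) = 1/2
lln = X.mean(axis=1)

# central limit theorem: centered sum / sqrt(n) stays spread out,
# with a Gaussian limit of variance sigma^2 = 1/12
clt = (X - 0.5).sum(axis=1) / np.sqrt(n)
```

<p>Dividing by <script type="math/tex">n</script> collapses the fluctuations to a point, while dividing by <script type="math/tex">\sqrt{n}</script> keeps them at a fixed, nondegenerate scale; universality limits involve an analogous rescaling.</p>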
<p><strong>2. What if universality holds? </strong>If universality holds at <script type="math/tex">x_0</script>, then we have</p>
<p style="text-align:center;">$$ \lim_{n \to \infty} \frac{1}{\widetilde{K_n}(x_0,x_0,\mu)^k} R_k(x_0 + \xi_1/n, \dots, x_0 + \xi_k/n) $$</p>
<p style="text-align:center;">$$ = \lim_{n \to \infty} \det \Big( \frac{\widetilde{K_n}(x_0 + \xi_i/n, x_0 + \xi_j/n)}{\widetilde{K_n}(x_0,x_0)} \Big)_{i,j=1}^{k} $$</p>
<p style="text-align:center;">$$ = \det \Big( \frac{\sin \pi(\xi_i - \xi_j)}{\pi (\xi_i - \xi_j)} \Big)_{i,j=1}^{k}. $$</p>
<p>The following theorem says that what we did above is basically a scaling.</p>
<p><strong>Theorem 1. </strong>If <script type="math/tex">\mu</script> is a Borel measure on <script type="math/tex">\mathbb{R}</script> and<br />
(i) <script type="math/tex">\text{supp}(\mu) = E</script>,<br />
(ii) <script type="math/tex">\mu</script> is absolutely continuous with <script type="math/tex">d\mu(x) = w(x)dx</script>, where <script type="math/tex">w</script> is continuous and <script type="math/tex">w(x) > 0</script> almost everywhere on <script type="math/tex">E</script>,<br />
then for all <script type="math/tex">x_0 \in \text{int}(E)</script> we have</p>
<p style="text-align:center;">$$ \lim_{n \to \infty} \frac{n}{\widetilde{K_n}(x_0,x_0)} = \frac{1}{\omega_E(x_0)} $$,</p>
<p>where the function <script type="math/tex">\omega_E</script> <strong>depends only on the set </strong><strong><script type="math/tex">E</script> </strong>and not on the measure <script type="math/tex">\mu</script>. <script type="math/tex">\Box</script></p>
<p>For example, if we take <script type="math/tex">d\mu(x) = \frac{1}{\sqrt{1-x^2}} dx</script>, we see the following.</p>
<figure>
<img src="https://nonemptyspaces.files.wordpress.com/2015/06/wigner.gif" />
<figcaption>$$ n/\widetilde{K_n}(x,x,\mu) $$, where $$ n \in \{1,2,\dots,100\}.$$</figcaption>
</figure>
<p>This limit distribution is known by physicists as <a href="http://en.wikipedia.org/wiki/Wigner_semicircle_distribution">Wigner semicircle distribution</a>, and it appears frequently in random matrix theory.</p>
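<p>For the measure above everything is explicit, so Theorem 1 can be checked directly: the orthonormal polynomials are <script type="math/tex">p_0 = 1/\sqrt{\pi}</script> and <script type="math/tex">p_k(x) = \sqrt{2/\pi} \cos(k \arccos x)</script>, and the computed limit is <script type="math/tex">\pi \sqrt{1-x^2}</script>, so <script type="math/tex">\omega_{[-1,1]}</script> is the arcsine density. A minimal sketch, assuming NumPy is available:</p>

```python
import numpy as np

def K_tilde(n, x):
    """Normalized kernel K~_n(x, x) for d(mu)(x) = dx / sqrt(1 - x^2)."""
    theta = np.arccos(x)
    Kn = 1 / np.pi + (2 / np.pi) * np.sum(np.cos(np.arange(1, n + 1) * theta) ** 2)
    return Kn / np.sqrt(1 - x**2)     # K~_n = w(x) K_n(x, x)

x0, n = 0.3, 2000
ratio = n / K_tilde(n, x0)
limit = np.pi * np.sqrt(1 - x0**2)    # 1 / omega_E(x0) for E = [-1, 1]
# ratio approaches limit as n grows
```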
<p>Using Theorem 1, we have</p>
<p style="text-align:center;">$$ \lim_{n \to \infty} \frac{1}{(n\omega_E(x_0))^k} R_k(x_0 + \xi_1/n, \dots, x_0 + \xi_k/n) $$</p>
<p style="text-align:center;">$$ = \lim_{n \to \infty} \frac{\widetilde{K_n}(x_0,x_0)^k}{n^k \omega_E(x_0)^k} \frac{1}{\widetilde{K_n}(x_0,x_0,\mu)^k} R_k(x_0 + \xi_1/n, \dots, x_0 + \xi_k/n) $$</p>
<p style="text-align:center;">$$ = \det \Big( \frac{\sin \pi (\xi_i - \xi_j)}{\pi (\xi_i - \xi_j)} \Big)_{i,j=1}^{k}. $$</p>
<p>It can be clearly seen that this limit is indeed a scaling.</p>
<p><strong>3. How do we establish universality limits? </strong>I’ve talked about consequences of universality but I haven’t discussed how to actually <em>prove</em> universality limits. First let’s assume that the following theorem holds.</p>
<p><strong>Theorem 2.</strong> If <script type="math/tex">\mu</script> is the Lebesgue measure on <script type="math/tex">[-1,1]</script> (i.e. <script type="math/tex">d\mu(x) = \chi_{[-1,1]}(x)dx</script>), then universality holds for all <script type="math/tex">x_0 \in (-1,1)</script>, that is</p>
<p style="text-align:center;">$$ \lim_{n \to \infty} \frac{\widetilde{K_n}(x_0 + a/n, x_0 + b/n, \mu)}{\widetilde{K_n}(x_0, x_0, \mu)} = \frac{\sin \pi (b-a)}{\pi (b-a)} $$</p>
<p>for <script type="math/tex">a,b</script> uniformly in compact subsets. <script type="math/tex">\Box</script></p>
<p>This theorem can be generalized in more than one way; for example<br />
• we can study more general measures supported on <script type="math/tex">[-1,1]</script>,<br />
• or study measures supported on more complicated sets.</p>
<p><strong>Comparison method of Lubinsky. </strong>Let’s see the first one. Suppose that<br />
• <script type="math/tex">\mu</script> and <script type="math/tex">\nu</script> are two Borel measures on <script type="math/tex">[-1,1]</script>,<br />
• <script type="math/tex">x_0 \in (-1,1)</script>,<br />
• and suppose that <script type="math/tex">\mu</script> and <script type="math/tex">\nu</script> are equal in some neighbourhood of <script type="math/tex">x_0</script>.</p>
<p>The following theorem was proven by Doron S. Lubinsky, and it was a huge breakthrough in the study of universality limits.</p>
<p><strong>Theorem 3. </strong>(D. S. Lubinsky) If universality holds for <script type="math/tex">\mu</script> at <script type="math/tex">x_0</script> and both measures are “nice” in some sense (but can be much less nice than the Lebesgue measure), then universality holds for <script type="math/tex">\nu</script> at <script type="math/tex">x_0</script>. <script type="math/tex">\Box</script></p>
<p>The theorem and the proof can be found in its full beauty in the article [Lubinsky]. I tried to avoid technical details here to compress the theorem into a fully digestible form, but if you are interested in those details (which will be discussed in a future post), I strongly recommend this article.</p>
<p>Theorem 3 can be used to establish universality limits for a bunch of measures. For example, let</p>
<p>• <script type="math/tex">\mu</script> be the Lebesgue measure on <script type="math/tex">[-1,1]</script>,<br />
• <script type="math/tex">\nu</script> be an absolutely continuous measure with <script type="math/tex">d\nu(x) = w(x) dx</script>,<br />
• <script type="math/tex">w > 0</script> almost everywhere on <script type="math/tex">[-1,1]</script>,<br />
• and <script type="math/tex">w(x) = 1</script> in some neighbourhood of <script type="math/tex">x_0</script>.</p>
<p>Then, since universality holds for the Lebesgue measure, it also holds for our much more general measure <script type="math/tex">\nu</script>. The big thing about the comparison method is that before this article, universality was known only for analytic weights. As you can see from this example, universality can be established for merely continuous weights, which is a huge step.</p>
<p><strong>The polynomial inverse image method of Totik.</strong> This time let’s take a look at measures with compact support. We will extend Theorem 2 in two steps. First let <script type="math/tex">T_N(x)</script> be a polynomial of degree <script type="math/tex">N</script> and suppose that if <script type="math/tex">t_0</script> is a local extremum of <script type="math/tex">T_N</script>, then <script type="math/tex">|T_N(t_0)| \geq 1</script>. A polynomial like this will be called admissible. Now let <script type="math/tex">\mu</script> be a Borel measure supported on <script type="math/tex">[-1,1]</script> and suppose that universality holds for <script type="math/tex">\mu</script> at <script type="math/tex">0</script>. With this setup, the following theorem holds.</p>
<p><strong>Theorem 4.</strong> (V. Totik) If <script type="math/tex">\nu</script> is the pullback measure for <script type="math/tex">\mu</script> on <script type="math/tex">E = T_{N}^{-1}([-1,1])</script> and <script type="math/tex">T_N(x_0) = 0</script>, then universality holds for <script type="math/tex">\nu</script> at <script type="math/tex">x_0</script>. <script type="math/tex">\Box</script></p>
<p>This takes care of measures with support <script type="math/tex">T_{N}^{-1}([-1,1])</script>, where <script type="math/tex">T_N</script> is an admissible polynomial.
Now let <script type="math/tex">\mu</script> be a measure with support <script type="math/tex">K \subseteq \mathbb{R}</script> where <script type="math/tex">K</script> is an arbitrary compact set. It turns out that in some sense <script type="math/tex">K</script> can be approximated with sets of the form <script type="math/tex">T_{N}^{-1}([-1,1])</script> with arbitrary precision! Using this, we can compare our measure <script type="math/tex">\mu</script> to other measures supported on the approximating sets. Overall, the following theorem holds.</p>
<p><strong>Theorem 5.</strong> If <script type="math/tex">\mu</script> is a “nice” measure with compact support, <script type="math/tex">x_0 \in \text{supp}(\mu)</script> and <script type="math/tex">d\mu(x) = w(x) dx</script> in a neighbourhood of <script type="math/tex">x_0</script> with positive and continuous <script type="math/tex">w</script>, then universality holds at <script type="math/tex">x_0</script>. <script type="math/tex">\Box</script></p>
<p>The method of polynomial inverse images was perfected in [Totik1], where the process is described in detail. Originally it was used to prove polynomial inequalities on compact sets, but the method found many applications, for example universality limits, which can be found in [Totik2].</p>
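<p>A toy numerical illustration of such inverse images (my own example, not from the papers; assuming NumPy is available): the polynomial <script type="math/tex">T(x) = 2x^2 - 3</script> is admissible, since its only local extremum is at <script type="math/tex">0</script> with <script type="math/tex">|T(0)| = 3 \geq 1</script>, and <script type="math/tex">T^{-1}([-1,1])</script> is a union of two intervals:</p>

```python
import numpy as np

T = lambda x: 2 * x**2 - 3      # admissible: |T(0)| = 3 >= 1 at the extremum

xs = np.linspace(-2, 2, 400001)
E = xs[np.abs(T(xs)) <= 1]      # grid approximation of T^{-1}([-1, 1])

# |T(x)| <= 1 means 1 <= x^2 <= 2, so E = [-sqrt(2), -1] union [1, sqrt(2)]
gaps = np.diff(E) > 2 * (xs[1] - xs[0])
components = 1 + int(np.sum(gaps))   # number of intervals making up E
```

<p>The pullback measure of Theorem 4 lives on exactly this kind of multi-interval set.</p>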
<p><strong>Epilogue: an anecdote from Princeton</strong></p>
<p>I finish this long post with an anecdote. It can be found in numerous articles, for example [Deift].</p>
<p>Some years ago Hugh Montgomery, when his paper about the nontrivial zeros of the Riemann zeta function was published (see the beginning of the post), visited the Institute for Advanced Study in Princeton. There he met Freeman Dyson, the legendary mathematical physicist. They had tea together, and before Montgomery could describe his result to him, Dyson asked</p>
<blockquote>And did you get this?</blockquote>
<p>Then he picked up a pen and wrote down the formula</p>
<p style="text-align:center;">$$ \int_{a}^{b} 1 - \big( \frac{\sin \pi x}{\pi x} \big)^2 dx. $$</p>
<p>Montgomery was shocked, because it was exactly his result. Then Dyson replied</p>
<blockquote>If the zeros of the zeta function behaved like the eigenvalues of a random Gaussian unitary matrix, then it would be exactly the formula for the 2-point correlation function!</blockquote>
<p>I think this justifies the label “universality”.</p>
<p><strong>References.</strong>
[Deift] Percy Deift, Universality for mathematical and physical systems, <a href="http://arxiv.org/abs/math-ph/0603038">available at arXiv</a><br />
[Deift-Gioev] Percy Deift and Dimitri Gioev, Random matrix theory: invariant ensembles and universality, Courant Lecture Notes, 2009<br />
[Lubinsky] Doron S. Lubinsky, A New Approach to Universality Limits Involving Orthogonal Polynomials, Annals of Mathematics, 170 (2009), 915-939.<br />
[Tao] Terence Tao, A second draft of a non-technical article on universality, <a href="https://terrytao.wordpress.com/2010/09/14/a-second-draft-of-a-non-technical-article-on-universality/">blog post<br />
</a>[Totik1] Vilmos Totik, Polynomial inverse images and polynomial inequalities, Acta Mathematica, 187 (2001), 139-160<br />
[Totik2] Vilmos Totik, Universality and fine zero spacing on general sets, Arkiv för Matematik, 47 (2009), 361-391</p>