Matrix multiplication is not easy to understand.
Even looking at the definition used to make me sweat, let alone trying to comprehend the pattern. Yet, there is a stunningly simple explanation behind it.
Let's pull back the curtain!
First, the raw definition. This is how the product of an $n \times m$ matrix $A$ and an $m \times p$ matrix $B$ is given:

$$(AB)_{i,j} = \sum_{k=1}^{m} a_{i,k} b_{k,j}.$$

Not the easiest (or most pleasant) to look at.
We are going to unwrap this. Here is the key pattern before the technical details: the element in the $i$-th row and $j$-th column of $AB$ is the dot product of $A$'s $i$-th row and $B$'s $j$-th column.
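If the formula feels opaque, here is a quick NumPy sketch of that claim (the matrices are arbitrary examples; NumPy itself is just my choice for checking the arithmetic):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3 x 2
B = np.array([[7, 8, 9],
              [10, 11, 12]])  # 2 x 3

AB = A @ B                    # 3 x 3

# Each element of AB is the dot product of a row of A and a column of B.
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        assert AB[i, j] == np.dot(A[i, :], B[:, j])
```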
Now, let's look at a special case: multiplying the matrix $A$ with a (column) vector whose first component is $1$, and the rest is $0$. Let's name this special vector $e_1$. Turns out that the product of $A$ and $e_1$ is the first column of $A$.
Similarly, multiplying $A$ with a (column) vector $e_2$ whose second component is $1$ and the rest is $0$ yields the second column of $A$.
That's a pattern!
By the same logic, we conclude that $A$ times $e_k$ equals the $k$-th column of $A$.
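A small check of the pattern in code ($A$ here is an arbitrary example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

for k in range(3):
    e_k = np.zeros(3)
    e_k[k] = 1.0              # the k-th standard basis vector
    # A times e_k picks out the k-th column of A.
    assert np.array_equal(A @ e_k, A[:, k])
```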
This sounds a bit algebra-y, so let's see this idea in geometric terms. Yes, you heard right: geometric terms.
Matrices represent linear transformations. You know, those that stretch, skew, rotate, flip, or otherwise linearly distort the space. The images of basis vectors form the columns of the matrix.
We can visualize this in two dimensions.
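In lieu of a picture, here is the same fact in code, using a 90-degree rotation as the example transformation (any linear map would do):

```python
import numpy as np

theta = np.pi / 2  # rotate the plane by 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The images of the basis vectors are exactly the columns of the matrix.
assert np.allclose(R @ e1, R[:, 0])
assert np.allclose(R @ e2, R[:, 1])
```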
Moreover, we can look at a matrix-vector product as a linear combination of the column vectors:

$$Ax = x_1 a_1 + x_2 a_2 + \dots + x_m a_m,$$

where $a_k$ denotes the $k$-th column of $A$ and $x_k$ the $k$-th component of $x$. Make a mental note of this, because it is important. (If unwrapping the matrix-vector product seems too complex, I've got you: this is the same computation as the element-wise one above, only in vectorized form.)
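Here is this identity checked in NumPy (values are arbitrary):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([10, 20])

# Ax is a linear combination of A's columns, weighted by the components of x.
linear_combination = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.array_equal(A @ x, linear_combination)
```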
Now, about the matrix product formula. From a geometric perspective, the product $AB$ is the same as first applying $B$, then $A$ to our underlying space.
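A quick sanity check of this composition property (random matrices, seeded so the sketch is reproducible):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# Applying B first and then A is the same as applying AB in one step.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```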
Recall that matrix-vector products are linear combinations of column vectors. With this in mind, we see that the first column of $AB$ is a linear combination of $A$'s columns. (With coefficients taken from the first column of $B$.)
We can collapse the linear combination into a single vector, resulting in a formula for the first column of $AB$: its $i$-th element is $\sum_{k=1}^{m} a_{i,k} b_{k,1}$. This is straight from the mysterious matrix product formula.
The same logic can be applied to every other column, thus giving the explicit formula $(AB)_{i,j} = \sum_{k=1}^{m} a_{i,k} b_{k,j}$ for the elements of a matrix product, exactly the definition we started with.
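To close the loop, here is the element-wise formula as a hand-rolled function, checked against NumPy's built-in product (a sketch from the definition, not an efficient implementation):

```python
import numpy as np

def matmul(A, B):
    """The product straight from the definition:
    (AB)[i, j] = sum over k of A[i, k] * B[k, j]."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

assert np.allclose(matmul(A, B), A @ B)

# Column view: the j-th column of AB is A applied to the j-th column of B.
for j in range(B.shape[1]):
    assert np.allclose((A @ B)[:, j], A @ B[:, j])
```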
Linear algebra is powerful exactly because it abstracts away the complexity of manipulating data structures like vectors and matrices. Instead of explicitly dealing with arrays and convoluted sums, we can use simple expressions such as $AB$.
That's a huge deal.
Peter Lax sums it up perfectly: "So what is gained by abstraction? First of all, the freedom to use a single symbol for an array; this way we can think of vectors as basic building blocks, unencumbered by components."