There is one big reason we love the logarithm function in machine learning: it reduces complexity by turning multiplication into addition. You might not know it, but logarithms are behind many of the techniques we use every day.
Let's start with the definition of the logarithm. The base $a$ logarithm of $b$ is simply the solution of the equation

$$a^x = b,$$

denoted by $x = \log_a b$. For instance, $\log_2 8 = 3$, since $2^3 = 8$.
Despite its simplicity, it has many useful properties that we take advantage of all the time.
You can think of the logarithm as the inverse of exponentiation. Because of this, it turns multiplication into addition:

$$\log(xy) = \log x + \log y.$$

(The base of a logarithm is often assumed to be a fixed constant, so it can be omitted.) Exponentiation does the opposite: it turns addition into multiplication:

$$a^{x+y} = a^x a^y.$$
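To make this concrete, here is a minimal numerical check of both identities (a sketch in plain NumPy; the values are arbitrary):

```python
import numpy as np

x, y = 3.7, 12.4
a = 2.0  # an arbitrary base

# the logarithm turns multiplication into addition
assert np.isclose(np.log(x * y), np.log(x) + np.log(y))

# exponentiation turns addition into multiplication
assert np.isclose(a ** (x + y), a**x * a**y)

print("both identities hold numerically")
```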
Why is this property useful? Because it makes computing gradients and derivatives much easier!
Training a neural network requires finding its gradient. However, lots of commonly used functions are written in terms of products, say

$$f(x) = f_1(x) f_2(x) \cdots f_n(x).$$

As you can see, differentiating this with the product rule complicates things:

$$f'(x) = \sum_{i=1}^{n} f_i'(x) \prod_{j \neq i} f_j(x).$$

By taking the logarithm, the product turns into a sum, and the derivative becomes much simpler:

$$\log f(x) = \sum_{i=1}^{n} \log f_i(x), \qquad \frac{f'(x)}{f(x)} = \sum_{i=1}^{n} \frac{f_i'(x)}{f_i(x)}.$$
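Here is a quick sanity check of the idea, sketched in SymPy on a made-up product of three factors (the specific functions are arbitrary):

```python
import sympy as sp

x = sp.symbols("x", positive=True)

# a hypothetical product of three factors
factors = [x + 1, sp.sin(x) + 2, sp.exp(x) + x]
f = sp.Mul(*factors)

# logarithmic differentiation: f'(x) = f(x) * sum of f_i'(x) / f_i(x)
log_derivative = f * sum(sp.diff(g, x) / g for g in factors)

# agrees with differentiating the product directly
assert sp.diff(f, x).equals(log_derivative)
```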
This method is called logarithmic differentiation. One example where it shines is maximum likelihood estimation.
Given a set of observations $x_1, \dots, x_n$ and a predictive model with parameters $\theta$, the likelihood is a product of terms, so its logarithm is a sum:

$$\log L(\theta) = \log \prod_{i=1}^{n} P(x_i \mid \theta) = \sum_{i=1}^{n} \log P(x_i \mid \theta).$$
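As a sketch, here is the log-likelihood of a Gaussian model in NumPy (the data, mean, and scale are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
observations = rng.normal(loc=2.0, scale=1.5, size=1000)

def gaussian_log_likelihood(data, mu, sigma):
    # a sum of log-densities instead of a product of densities;
    # the product of 1000 small numbers would underflow to zero
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (data - mu) ** 2 / (2 * sigma**2))

print(gaussian_log_likelihood(observations, mu=2.0, sigma=1.5))
```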
Believe it or not, this is behind the mean squared error! If we model the data as predictions plus Gaussian noise, maximizing the log-likelihood is equivalent to minimizing the sum of squared errors. Every time you use the mean squared error, logarithms are working in the background.
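To see this concretely, here is a small sketch (a hypothetical linear model on synthetic data) showing that the negative Gaussian log-likelihood is an affine function of the squared error, so both objectives pick the same parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(scale=0.5, size=200)  # noisy linear data

def neg_log_likelihood(w, sigma=0.5):
    residuals = y - w * X
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - residuals**2 / (2 * sigma**2))

def squared_error(w):
    return np.sum((y - w * X) ** 2)

# scan a grid of weights: both objectives share the same minimizer
ws = np.linspace(2.0, 4.0, 401)
best_nll = ws[np.argmin([neg_log_likelihood(w) for w in ws])]
best_sse = ws[np.argmin([squared_error(w) for w in ws])]
print(best_nll, best_sse)  # identical
```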