In the history of science, few milestones are as significant as the invention of the wheel. Even among such milestones, differentiation stands out: with calculus, Newton essentially created modern mechanics as we know it. Differentiation enables space travel, function optimization, and even epidemiological modeling. In machine learning, derivatives are the key to training deep neural networks.
However, its importance is not obvious from the mathematical definition: if $f: \mathbb{R} \to \mathbb{R}$ is an arbitrary function, it is said to be differentiable at $a$ if the limit

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

exists, which is called its derivative. This definition is well known but loaded with concepts that are often left unexplained. In this post, our goal is to understand what the derivative really is, how to extend it to multiple variables, and how it allows us to build models of the world around us. Let's get to it!
Differentiation as the rate of change
Instead of jumping straight back into the mathematical definition, let's start our discussion with a straightforward example: a point-like object moving along a straight line. The straight line can be modelled with the real numbers $\mathbb{R}$, so it makes sense to describe the motion of our object with the function $x: \mathbb{R} \to \mathbb{R}$, mapping a point in time $t$ to a position $x(t)$. Something like this below.
Our goal is to calculate the object's speed at a given time. In high school, we learned that

$$\text{average speed} = \frac{\text{distance travelled}}{\text{time elapsed}}.$$

To put this into a quantitative form, if $t_1 < t_2$ are two arbitrary points in time, then

$$\text{average speed between } t_1 \text{ and } t_2 = \frac{x(t_2) - x(t_1)}{t_2 - t_1}.$$

Expressions like $\frac{x(t_2) - x(t_1)}{t_2 - t_1}$ are called difference quotients. Note that if the object moves backwards, the average speed is negative.
The average speed has a simple geometric interpretation. If you replace the object's motion with a constant velocity motion moving at its average speed, you'll end up at the same place. In graphical terms, this is equivalent to connecting the points $(t_1, x(t_1))$ and $(t_2, x(t_2))$ with a straight line. The average speed is just the slope of this line, as you can see below.
Given this, we can calculate the exact speed at a single time point $t_0$, which we'll denote with $v(t_0)$. ($v$ is short for velocity.) The idea is simple: the average speed in the small time interval between $t_0$ and $t_0 + \Delta t$ should get closer and closer to $v(t_0)$ if $\Delta t$ is small enough. ($\Delta t$ can be negative as well.)

So,

$$v(t_0) = \lim_{\Delta t \to 0} \frac{x(t_0 + \Delta t) - x(t_0)}{\Delta t},$$

if the above limit exists.

Following our geometric intuition, we can notice that $v(t_0)$ is simply the slope of the tangent line of $x(t)$ at $t_0$. This can be beautifully illustrated when we visualize the ratio $\frac{x(t_0 + \Delta t) - x(t_0)}{\Delta t}$ for a few $\Delta t$-s.
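To make this concrete, here is a minimal numerical sketch. The motion $x(t) = t^2$ is a made-up example (not one from the figures above); it shows how the difference quotients settle down as $\Delta t$ shrinks.

```python
# A made-up motion: x(t) = t**2, whose exact speed at t0 is 2 * t0.
def x(t):
    return t ** 2

t0 = 1.0
for dt in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    avg_speed = (x(t0 + dt) - x(t0)) / dt  # average speed on [t0, t0 + dt]
    print(f"dt = {dt:>7}: average speed = {avg_speed:.6f}")

# The printed values approach 2.0, the exact instantaneous speed v(t0).
```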
Keeping this in mind, we are ready to introduce the formal definition. (The one that we mentioned earlier, but it actually makes sense this time.)
Definition. (Differentiability.) Let $f: \mathbb{R} \to \mathbb{R}$ be an arbitrary function. We say that $f$ is differentiable at $a$ if the limit

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

exists. If so, $f'(a)$ is called the derivative of $f$ at $a$.
In plain English, if $f$ describes the time-distance function of a moving object, then the derivative is simply its speed. In other words, the derivative quantifies the rate of change. Note that differentiability is a property of the function $f$ and the point $a$. As we shall see later, some functions are differentiable at some points but not at others.
Don't let the change in notation from $x(t)$ and $t$ to $f(x)$ and $x$ confuse you; it means exactly the same as before. Speaking of confusion, the multiple notations for differentiation can sometimes be difficult to interpret. For instance, $x$ can denote both the variable of $f$ and the exact point where the derivative is taken. To clear this up, here is a quick glossary to clarify the difference between the derivative and the derivative function.
- $\frac{d f(a)}{d x}$: the derivative of $f$ with respect to the variable $x$ at the point $a$. This is a scalar, also denoted by $f'(a)$.
- $\frac{d f}{d x}$: the derivative function of $f$ with respect to the variable $x$. This is a function, also denoted by $f'$.
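As a quick illustration of this distinction, here is a short sketch using SymPy with an arbitrary example function $f(x) = x^3$ (my choice, not from the post): symbolic differentiation yields the derivative function, while substituting a point yields the scalar derivative.

```python
import sympy as sp

x = sp.symbols("x")
f = x ** 3                          # an arbitrary example function

f_prime = sp.diff(f, x)             # the derivative function f', still a function of x
f_prime_at_2 = f_prime.subs(x, 2)   # the derivative at the point a = 2, a scalar

print(f_prime)       # 3*x**2
print(f_prime_at_2)  # 12
```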
Differentiability is smoothness
Now that we understand how derivatives express the rate of change, we'll look at things from a more abstract viewpoint: what does differentiability mean? Mind you, I am not talking about the value of the derivative itself but the fact that it exists. To make my point clear, let's consider two examples.
Example 1. $f(x) = x^2$. Here, we have

$$f'(a) = \lim_{x \to a} \frac{x^2 - a^2}{x - a} = \lim_{x \to a} (x + a) = 2a.$$

So, $f(x) = x^2$ is differentiable everywhere, and $f'(x) = 2x$. No surprise here. If you are a visual person, this is how the tangents look.
The graph of the function is smooth everywhere. However, this is not always the case, leading us to the second example.
Example 2. $f(x) = |x|$ at $a = 0$. For this, we have

$$\frac{f(x) - f(0)}{x - 0} = \frac{|x|}{x} = \begin{cases} 1 & \text{if } x > 0, \\ -1 & \text{if } x < 0. \end{cases}$$

Since

$$\lim_{x \to 0^+} \frac{|x|}{x} = 1 \neq -1 = \lim_{x \to 0^-} \frac{|x|}{x},$$

this limit does not exist. Thus, $f(x) = |x|$ is not differentiable at $0$.
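A tiny numerical sketch makes the failure visible: the difference quotients of $|x|$ at $0$ stay at $+1$ on one side and $-1$ on the other, no matter how close we get.

```python
# Difference quotients of f(x) = |x| at a = 0, approached from both sides.
def f(x):
    return abs(x)

for x_val in [0.1, 0.01, 0.001]:
    right = (f(x_val) - f(0)) / (x_val - 0)    # from the right: always +1.0
    left = (f(-x_val) - f(0)) / (-x_val - 0)   # from the left: always -1.0
    print(f"x = ±{x_val}: right = {right}, left = {left}")

# The one-sided limits disagree (+1 vs. -1), so the limit defining the
# derivative at 0 does not exist.
```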
It is worth drawing a picture here to enhance our understanding of differentiability. Recall that the value of the derivative at a given point equals the slope of the tangent line to the function's graph. Since $|x|$ has a sharp corner at $0$, the tangent line is not well-defined there, as multiple possibilities exist.
In other words, differentiability means no sharp corners in the graph. This is why differentiable functions are often called smooth.
From this perspective, differentiability means manageable behavior: no wrinkles, corners, or sharp changes in value. Next, we'll see an equivalent definition of differentiability involving local approximation with a linear function.
Differentiation as the best local linear approximation
Do you recall how we introduced the definition of the derivative? Essentially, we approximated the dynamics of a moving point-like object with a constant velocity motion on smaller and smaller time intervals, eventually shrinking down the gap to zero. From the perspective of mechanics, differentiation is the same as replacing the motion with a constant velocity motion at a given instant.
We can make this idea mathematically precise with the following theorem. (Yes, a theorem. Don't be scared. Theorems and proofs are just crystallized forms of logically correct statements.)
Theorem. (Differentiation as a local linear approximation.) Let $f: \mathbb{R} \to \mathbb{R}$ be an arbitrary function. The following are equivalent.
(a) $f$ is differentiable at $a$.
(b) There is an $\alpha \in \mathbb{R}$ such that

$$f(x) = f(a) + \alpha (x - a) + o(x - a)$$

holds as $x \to a$.
(Recall that the little-o notation $h(x) = o(g(x))$ means that the function $h$ is an order of magnitude smaller around $a$ than the function $g$. That is,

$$\lim_{x \to a} \frac{h(x)}{g(x)} = 0.$$

The $\alpha$ in the above theorem is going to be the derivative $f'(a)$. In other words, $f$ is locally approximated by the linear function $x \mapsto f(a) + f'(a)(x - a)$.)
Proof. To show the equivalence of the two statements, we have to prove that differentiability implies the desired property and vice versa. Although this might seem complicated, it is straightforward: everything hinges on writing a function's values as their limit plus an error term.
(a) $\Rightarrow$ (b). The existence of the limit

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

implies that we can write the slope of the approximating tangent in the form

$$\frac{f(x) - f(a)}{x - a} = f'(a) + \varepsilon(x),$$

where $\lim_{x \to a} \varepsilon(x) = 0$. With some simple algebra, we obtain

$$f(x) = f(a) + f'(a)(x - a) + \varepsilon(x)(x - a).$$

Since $\varepsilon(x)$ tends to zero as $x$ goes to $a$, the error term satisfies $\varepsilon(x)(x - a) = o(x - a)$, which is what we wanted to show.
(b) $\Rightarrow$ (a). Now, repeat what we did in the previous part, just in reverse order. We can rewrite

$$f(x) = f(a) + \alpha (x - a) + o(x - a)$$

in the form

$$\frac{f(x) - f(a)}{x - a} = \alpha + \frac{o(x - a)}{x - a},$$

which, according to what we have used before, implies that

$$\lim_{x \to a} \frac{f(x) - f(a)}{x - a} = \alpha.$$

So, $f$ is differentiable at $a$, and its derivative is $\alpha$.
Notice that in the variable $x$, the expression $f(a) + f'(a)(x - a)$ defines a linear function. In fact, this is the equation of the tangent line! The expression

$$f(x) = f(a) + f'(a)(x - a) + o(x - a)$$

tells us that around $a$, $f(x)$ equals a linear function plus a small error. You might ask, why is this good for us? For one, this form carries over to higher dimensions, as opposed to the limit of difference quotients. Let's take a look!
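Before moving on, here is a small numerical sketch of this statement, using $f(x) = e^x$ at $a = 0$ as an illustrative choice (not an example from the post): the error of the tangent-line approximation shrinks faster than $x - a$ itself.

```python
import math

# Illustrative choice: f(x) = exp(x) at a = 0, where f(a) = 1 and f'(a) = 1.
def f(x):
    return math.exp(x)

a, f_a, f_prime_a = 0.0, 1.0, 1.0

for h in [0.1, 0.01, 0.001, 0.0001]:
    x = a + h
    error = f(x) - (f_a + f_prime_a * (x - a))  # f(x) minus its tangent-line value
    print(f"x - a = {h}: error = {error:.2e}, error / (x - a) = {error / h:.2e}")

# error / (x - a) tends to 0, which is exactly the o(x - a) condition.
```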
Derivatives of multivariable functions
For a single variable function, we defined the derivative as the limit of difference quotients

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a},$$

where $x$ and $a$ are real numbers. For a multivariable function $f: \mathbb{R}^n \to \mathbb{R}$, difference quotients are not defined. Why? Because division by the vector $\mathbf{x} - \mathbf{a}$ doesn't make sense.
To see what we can do here, let's build our intuition using functions of two variables. (That is, those that are defined on the plane.) In this case, the graph is a surface; a typical example looks like this below.
We immediately see that the concept of the tangent line is not well-defined here, since there are many tangent lines at a given point of the surface. In fact, there is a whole plane of them. This is called the tangent plane, but more on that later.
However, this tangent plane contains two special directions. Suppose we are looking at the tangent plane at the point $(a, b)$. For every multivariable function, fixing all but one variable yields a function of a single variable. In our case, we would have

$$x \mapsto f(x, b)$$

and

$$y \mapsto f(a, y).$$
We can visualize these functions by slicing the surface with a vertical plane perpendicular to one of the axes. Where the plane and the surface meet is the graph of $f(x, b)$ or $f(a, y)$, depending on which plane you use. This is how it looks.
We can define the derivatives of these single-variable functions as we have done in the previous section. These are called partial derivatives, and they play an essential role in generalizing our peak finding algorithm. To formalize them mathematically, they are defined by

$$\frac{\partial f}{\partial x}(a, b) = \lim_{x \to a} \frac{f(x, b) - f(a, b)}{x - a}, \qquad \frac{\partial f}{\partial y}(a, b) = \lim_{y \to b} \frac{f(a, y) - f(a, b)}{y - b}.$$
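Since partial derivatives are just ordinary derivatives of the slices, they are easy to approximate numerically. The sketch below uses a made-up example $f(x, y) = x^2 y$ at the point $(a, b) = (1, 2)$.

```python
# Made-up example: f(x, y) = x**2 * y. At (a, b) = (1, 2), the exact partial
# derivatives are ∂f/∂x = 2*a*b = 4 and ∂f/∂y = a**2 = 1.
def f(x, y):
    return x ** 2 * y

a, b, h = 1.0, 2.0, 1e-6

df_dx = (f(a + h, b) - f(a, b)) / h  # derivative of the slice x -> f(x, b)
df_dy = (f(a, b + h) - f(a, b)) / h  # derivative of the slice y -> f(a, y)

print(df_dx, df_dy)  # approximately 4.0 and 1.0
```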
The values of the partial derivatives are the slopes of the tangent plane in the directions parallel to the $x$ and $y$ axes. The direction of the steepest ascent is given by the gradient, defined by

$$\nabla f(a, b) = \left( \frac{\partial f}{\partial x}(a, b), \frac{\partial f}{\partial y}(a, b) \right).$$

(If you are familiar with the famous gradient descent optimization algorithm, this is why the gradient determines the direction of the step.)
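To hint at how this is used, here is a minimal sketch of a single gradient ascent step, reusing the made-up example $f(x, y) = x^2 y$: stepping along the numerically estimated gradient increases the function's value. (Gradient descent simply steps in the opposite direction.)

```python
# One gradient ascent step on the made-up example f(x, y) = x**2 * y.
def f(x, y):
    return x ** 2 * y

def grad_f(x, y, h=1e-6):
    # Numerically estimated partial derivatives, collected into the gradient.
    return ((f(x + h, y) - f(x, y)) / h,
            (f(x, y + h) - f(x, y)) / h)

a, b = 1.0, 2.0
gx, gy = grad_f(a, b)                        # approximately (4, 1)
step = 0.1
a_new, b_new = a + step * gx, b + step * gy  # move along the steepest ascent

print(f(a, b), f(a_new, b_new))  # the value increases: 2.0 -> about 4.1
```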
Differentiation as a local linear approximation, revisited
So, instead of having a single derivative, we have one for each variable. Can we find a pattern that meaningfully relates all of these partial derivatives to each other? Yes, and this is where the already familiar linear approximations come into the picture. Recall that for a differentiable univariate function $f$, we have

$$f(x) = f(a) + f'(a)(x - a) + o(x - a) \quad \text{as } x \to a,$$

and this is going to be the key to defining the analogue of differentiability in multiple variables.
Definition. (Differentiability in multiple variables.) Let $f: \mathbb{R}^n \to \mathbb{R}$ be an arbitrary multivariable function. $f$ is differentiable at $\mathbf{a} \in \mathbb{R}^n$ if there exists a vector $\nabla f(\mathbf{a}) \in \mathbb{R}^n$ such that

$$f(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{a}) \cdot (\mathbf{x} - \mathbf{a}) + o(\| \mathbf{x} - \mathbf{a} \|)$$

holds as $\mathbf{x} \to \mathbf{a}$, where $\cdot$ denotes the dot product of the vectors $\nabla f(\mathbf{a})$ and $\mathbf{x} - \mathbf{a}$. The vector $\nabla f(\mathbf{a})$ is called the gradient of $f$ at $\mathbf{a}$.
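We can check this definition numerically as well. The sketch below again uses the made-up example $f(x, y) = x^2 y$ at $\mathbf{a} = (1, 2)$, whose exact gradient is $(4, 1)$, and verifies that the approximation error vanishes faster than $\| \mathbf{x} - \mathbf{a} \|$.

```python
import math

# Made-up example: f(x, y) = x**2 * y, a = (1, 2), exact gradient (4, 1).
def f(x, y):
    return x ** 2 * y

a, b = 1.0, 2.0
grad = (4.0, 1.0)

for h in [0.1, 0.01, 0.001]:
    dx, dy = h, h                                    # displacement x - a
    linear = f(a, b) + grad[0] * dx + grad[1] * dy   # f(a) + grad . (x - a)
    error = f(a + dx, b + dy) - linear               # what the linear part misses
    norm = math.hypot(dx, dy)                        # ||x - a||
    print(f"||x - a|| = {norm:.4f}: error / ||x - a|| = {error / norm:.4f}")

# The ratio tends to 0, which is the o(||x - a||) condition in the definition.
```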
This example shows the importance of looking at mathematical objects from several different perspectives. Sometimes, an alternative viewpoint can extend the scope of a definition significantly, just like with differentiation and the best local linear approximation.