```julia
using Random
using ForwardDiff
using Distributions
using Statistics
using Plots
```
# Gradient Descent

*Illustrated using Julia*

## What is Gradient Descent?
Gradient Descent (along with its stochastic variant, Stochastic Gradient Descent) is a machine learning algorithm that optimizes a model's parameter estimates by (slowly) adjusting them according to the gradient (the partial derivatives) of some loss function. The 7 “steps” of gradient descent, according to Jeremy Howard’s fastai course on deep learning, are:
1. Initialize the parameters;
2. Use the parameters and input data to predict an outcome;
3. Calculate the model’s loss;
4. Calculate the gradient (the partial derivative for each model parameter);
5. Update the parameter estimates using the gradient multiplied by some learning rate;
6. Repeat steps 2-5;
7. Stop according to some criterion (a set number of iterations, or hitting a threshold on model improvement).
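Sketched as code, those steps amount to a short loop. This is just a minimal sketch of the recipe above, not the implementation we build later in the post: the `predict()` and `loss()` helpers and the `fit_slope()` wrapper are hypothetical stand-ins (here, fitting a single slope \(w\) so that \(\hat{y} = wx\)).

```julia
# A minimal sketch of the seven steps above (hypothetical helper names).
# ForwardDiff and Statistics are already loaded at the top of this post.
predict(x, w) = w .* x               # (2) use parameters + input data to predict
loss(y, ŷ) = mean((y .- ŷ) .^ 2)     # (3) calculate the model's loss

function fit_slope(x, y; lr=0.001, tol=1e-8, max_iter=10_000)
    w = randn()                      # (1) initialize the parameter
    for _ in 1:max_iter              # (6) repeat steps 2-5
        g = ForwardDiff.derivative(wᵢ -> loss(y, predict(x, wᵢ)), w)  # (4) gradient
        w -= lr * g                  # (5) update by the gradient * learning rate
        abs(g) < tol && break        # (7) stop per some criterion
    end
    return w
end

fit_slope(1.0:10.0, 3.0 .* (1.0:10.0))   # ≈ 3.0, the slope used to build y
```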
The gradient descent algorithm is flexible enough to optimize the parameter estimates of arbitrary (but differentiable) functions, which makes it useful for machine learning, and deep learning in particular.
## A Simple Illustration
To illustrate a simple version of this, let’s consider the quadratic function. Imagine this is the function we want to minimize – that is, we want to find the value of \(x\) that produces the smallest \(y\). For a quadratic function, we know there’s one solution that minimizes \(y\), which is \(x = 0\), so it’s a clean illustration.
```julia
f(x) = x^2

x = -5:0.1:5
y = f.(x)

plot(x, y, label="Function to Minimize")
```
Next, we can just initialize \(z\) at some arbitrary starting value and evaluate the function there:

```julia
z = -3.5

scatter!([z], f.([z]), label="Starting Value")
```
We then take the gradient (the derivative in this case, but it’s the same thing) of the function with respect to \(z\), which gives us the rate of change of \(f(x)\) at \(x = z\):

```julia
grad_f = ForwardDiff.derivative(f, z)
```

```
-7.0
```
So the way we think about this is that the slope of the line that’s tangent to \(f(x)\) at \(x = -3.5\) is \(-7\). And flashing back to calculus, we know that if \(f(x) = x^2\), then \(f'(x) = 2x\), and \(2 \times (-3.5) = -7\), so this all checks out.
From here, we can write out the derivative, `f_prime()`, plus a convenience function that will plot the tangent line at \(x_0\) across some range of values, \(x\). The tangent line is just the point-slope form from calculus: \(y = f(x_0) + f'(x_0)(x - x_0)\).
```julia
f_prime(x) = 2x
tangent_line(x₀) = (x -> f(x₀) + f_prime(x₀) * (x - x₀))

plot!(tangent_line(z), -4.5:0.1:-2, label="Tangent at x = -3.5")
```
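As a quick aside (my addition, not in the original flow), we can confirm that the analytic derivative agrees with ForwardDiff’s automatic one at our starting value:

```julia
# Sanity check: the hand-written derivative matches the automatic one at z = -3.5.
f_prime(z) ≈ grad_f   # true, since 2 * (-3.5) == -7.0
```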
Then we update the value of \(z\) (which we’re using to minimize the function) by subtracting the gradient (-7) multiplied by some learning rate, which we want to be pretty small so parameter updates aren’t bouncing around too much.
```julia
lr = 0.1

z2 = z - lr * grad_f

scatter!([z2], f.([z2]), label="Update 1")
plot!(tangent_line(z2), -4.5:0.1:-1, label="Tangent at Update 1")
```
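To see why the learning rate needs to be small, here’s an illustrative aside (the rate of 1.1 is my own choice, not from the original walkthrough): with too large a rate, each update overshoots the minimum and the iterates diverge instead of settling at 0.

```julia
# Illustrative: with lr = 1.1 the update z -= lr * 2z flips z's sign and grows
# its magnitude by 20% each step, so the iterates diverge rather than converge.
let z_bad = -3.5
    for i in 1:5
        z_bad -= 1.1 * ForwardDiff.derivative(f, z_bad)
        println("step $i: z = $z_bad")
    end
end
```

The printed values bounce between positive and negative while growing: 4.2, -5.04, 6.048, and so on.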
And then we run the updates a few more times, etc.
```julia
grad_f2 = ForwardDiff.derivative(f, z2)
z3 = z2 - lr * grad_f2

scatter!([z3], f.([z3]), label="Update 2")
```
We’d continue this until we hit some stopping criterion, like a predefined number of iterations or some threshold where our loss stops improving by much.
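Wrapped in a loop, the whole one-dimensional procedure looks something like this sketch (the helper name, tolerance, and iteration cap are illustrative assumptions of mine):

```julia
# Hypothetical helper: step downhill until the slope is nearly flat or we
# run out of iterations. The tolerance and iteration cap are illustrative.
function gd_minimize(f, z; lr=0.1, tol=1e-6, max_iter=1_000)
    for _ in 1:max_iter
        g = ForwardDiff.derivative(f, z)
        abs(g) < tol && break   # stop once the tangent is essentially flat
        z -= lr * g             # step downhill
    end
    return z
end

gd_minimize(f, -3.5)   # ≈ 0.0, the minimum of f(x) = x^2
```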
## Estimating Linear Regression Parameters
Like we said earlier, the algorithm is pretty flexible, so we can use it to estimate the parameters of a linear regression model.
### Generate Data

First, we generate some fake data:
```julia
Random.seed!(0408)

# x data
𝐗 = hcat(ones(1000), randn(1000, 3))

# ground-truth betas
𝚩 = [0.5, 1, 2, 3]

# multiply data by betas
f₁(X) = X * 𝚩

# make some error
ϵ = rand(Normal(0, 0.5), size(𝐗)[1])

# generate y
y = f₁(𝐗) + ϵ;
```
### Define a Loss Function

Once we have fake data, we want to define some loss function to minimize. This tells us how well our model is predicting the training data, so small losses are better. There are a few options we could use, but mean squared error is probably the most straightforward:
```julia
function mse_loss(X, y, b)
    ŷ = X * b
    l = mean((y .- ŷ) .^ 2)
    return l
end
```

```
mse_loss (generic function with 1 method)
```
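As a quick sanity check (my aside), the loss at the true betas should land near the noise variance, \(0.5^2 = 0.25\), while arbitrary betas should score much worse:

```julia
# At the true betas, the only remaining error is the simulated noise, so the
# MSE should sit near 0.5^2 = 0.25; a random guess scores far higher.
mse_loss(𝐗, y, 𝚩)         # ≈ 0.25
mse_loss(𝐗, y, rand(4))   # much larger
```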
### Define a Training Function

Now we can write a function that implements the algorithm. In this function, we:

- initialize some random beta values;
- initialize the error to some very large number (the initial value doesn’t really matter as long as it’s greater than the function’s `tol` parameter);
- initialize the number of iterations (`iter`) at 0;
- define a helper function `d()` to get the gradient of the loss function at a given set of betas;
- define a loop that updates the beta values by the learning rate times the gradients until convergence.
```julia
function grad_descent(X, y; lr=0.01, tol=0.01, max_iter=1_000, noisy=false)
    # randomly initialize betas
    β = rand(size(X)[2])

    # init error to something large
    err = 1e10

    # initialize iterations at 0
    iter = 0

    # define a function to get the gradient of the loss function
    # at a given set of betas
    d(b) = ForwardDiff.gradient(params -> mse_loss(X, y, params), b)

    while err > tol && iter < max_iter
        β -= lr * d(β)
        err = mse_loss(X, y, β)
        if noisy
            println("Iteration $(iter): current error is $(err)")
        end
        iter += 1
    end

    return β
end
```

```
grad_descent (generic function with 1 method)
```
### Estimate βs

To estimate the betas, we just run the function:
```julia
b = grad_descent(𝐗, y)
```

```
4-element Vector{Float64}:
 0.4857258886600002
 1.0162438907395914
 2.0047384522328064
 3.0183732778203276
```
### Check Solution Against Base Julia Solver

To prove that we got the right parameters (which we already know, since we chose them when we simulated the data), we can just check them against the answers provided by the base Julia solver:
```julia
𝐗 \ y .≈ b
```

```
4-element BitVector:
 1
 1
 1
 1
```
And we can see they check out.
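For completeness, the backslash operator is solving the least-squares problem directly; the same answer can be written via the textbook normal equations, \(\hat{\beta} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y}\). A small sketch of that equivalence:

```julia
# Closed-form least-squares via the normal equations; this should agree with
# both the backslash solution and our gradient descent estimates.
β̂ = (𝐗' * 𝐗) \ (𝐗' * y)
β̂ .≈ b
```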