Gradient Descent with a Simple Example
This is a Python implementation of gradient descent on a very basic example. The concepts are drawn from the article
Keep it simple! How to understand Gradient Descent algorithm
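
The code below fits a line y = a*x + b by minimizing half the sum of squared errors (SSE), and the update step moves a and b against the partial derivatives of the SSE. A short derivation of those gradients (standard calculus, written to match the names SSE_grad_a and SSE_grad_b in the code):

SSE(a, b) = \frac{1}{2} \sum_i \bigl( y_i - (a x_i + b) \bigr)^2

\frac{\partial SSE}{\partial a} = \sum_i -\bigl( y_i - (a x_i + b) \bigr) \, x_i
\qquad
\frac{\partial SSE}{\partial b} = \sum_i -\bigl( y_i - (a x_i + b) \bigr)

Each epoch subtracts lr times these gradients from a and b, which nudges the line toward the data.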
import numpy as np
import matplotlib.pyplot as plt
l = [[2.0, 3, 4, 5], [2, 2.5, 2.5, 4]]
data = np.array(l)
X = data[0, :]   # inputs
Y = data[1, :]   # targets
print('input dataset : ', data)
# initial values for the line y = a*x + b
# (np.random.randint(1, 2, 2) always returns [1, 1]: the upper bound is exclusive)
[a, b] = np.random.randint(1, 2, 2)
print('initial values of a, b ', a, b)
lr = 0.01     # learning rate
epochs = 30
plt.scatter(X, Y)
for i in range(epochs):
    YP = a * X + b                       # predictions with the current line
    SSE = 0.5 * np.sum((Y - YP) ** 2)
    print('SSE : {0:.2f}'.format(SSE))
    # gradients of SSE with respect to a and b (see the derivation above)
    SSE_grad_a = np.sum(-(Y - YP) * X)
    SSE_grad_b = np.sum(-(Y - YP))
    a = a - lr * SSE_grad_a
    b = b - lr * SSE_grad_b
    print('new values of a, b {0:.2f}, {1:.2f}'.format(a, b))
    plt.plot(X, a * X + b)               # draw the updated line each epoch
The starting line is y = x + 1, the one at the top of the graph (the blue line): a and b are both initialized to 1. The SSE for this line is 6.75, and each gradient descent step reduces it, so the plotted lines move down toward the data points. For this dataset the best-fitting values are the least-squares solution a = 0.6, b = 0.65, where the SSE is 0.225; after 30 epochs the estimates of a and b are close to those values, and the lowest line on the graph (the green line) is the best-fit line of the run, following the data well.
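
As a quick check (a minimal sketch, not part of the original article), the least-squares line that gradient descent converges to can also be computed in closed form with np.polyfit, which returns the slope and intercept for a degree-1 fit:

import numpy as np

X = np.array([2.0, 3, 4, 5])
Y = np.array([2, 2.5, 2.5, 4])

# closed-form least-squares fit: polyfit returns [slope, intercept] for deg=1
a_ls, b_ls = np.polyfit(X, Y, 1)
print('least-squares a, b : {0:.2f}, {1:.2f}'.format(a_ls, b_ls))   # 0.60, 0.65

# SSE of the optimal line, for comparison with the values printed during training
SSE_min = 0.5 * np.sum((Y - (a_ls * X + b_ls)) ** 2)
print('minimum SSE : {0:.3f}'.format(SSE_min))                       # 0.225

Running more epochs (or a somewhat larger learning rate) brings the gradient descent estimates closer to these closed-form values.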