My first hands-on experience with univariate linear regression using gradient descent was in JavaScript:
const LEARNING_RATE = 0.000001;

let m = 0; // slope
let b = 0; // y-intercept

const hypothesis = x => m * x + b;

// x and y are the training data arrays (defined elsewhere in the project).
const learn = (alpha) => {
  if (x.length <= 0) return;
  let sum1 = 0; // accumulated error, drives the intercept update
  let sum2 = 0; // accumulated error weighted by x, drives the slope update
  for (let i = 0; i < x.length; i++) {
    sum1 += hypothesis(x[i]) - y[i];
    sum2 += (hypothesis(x[i]) - y[i]) * x[i];
  }
  b = b - alpha * sum1 / x.length;
  m = m - alpha * sum2 / x.length;
};
// learn(LEARNING_RATE) is called repeatedly until the parameters converge.
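For context, a minimal driver might look like the sketch below. The sample data and the iteration count are made up for illustration; the real values live in the repository.

// Hypothetical training data; the real arrays come from the project.
const x = [50, 60, 70, 80, 90];
const y = [150, 180, 210, 240, 270];

// Run gradient descent for a fixed number of steps
// (or until m and b stop changing noticeably).
for (let step = 0; step < 100000; step++) {
  learn(LEARNING_RATE);
}
console.log(`m = ${m}, b = ${b}`);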
The slope m of the hypothesis adjusted quickly, but the y-intercept b barely moved at all. As a workaround I gave b its own, much larger learning rate:
const learn = (alpha) => {
  if (x.length <= 0) return;
  let sum1 = 0;
  let sum2 = 0;
  for (let i = 0; i < x.length; i++) {
    sum1 += hypothesis(x[i]) - y[i];
    sum2 += (hypothesis(x[i]) - y[i]) * x[i];
  }
  // the intercept now uses a learning rate 100000 times larger than the slope
  b = b - 100000 * alpha * sum1 / x.length;
  m = m - alpha * sum2 / x.length;
};
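To see how the two parameters behave over time, one option (not part of the original code, just a sketch) is to log the mean squared error alongside m and b every few thousand steps:

// Mean squared error over the training set, useful for watching convergence.
const cost = () => {
  let sum = 0;
  for (let i = 0; i < x.length; i++) {
    sum += (hypothesis(x[i]) - y[i]) ** 2;
  }
  return sum / (2 * x.length);
};

for (let step = 0; step < 100000; step++) {
  learn(LEARNING_RATE);
  if (step % 10000 === 0) {
    console.log(`step ${step}: cost = ${cost()}, m = ${m}, b = ${b}`);
  }
}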
What might be going wrong with the algorithm? The full code is available in a GitHub repository, and there is more background in this article.