ML Algorithms from scratch

Table of Contents

Linear Regression

Code

import numpy as np

class LinearRegression:
    def __init__(self, lr=0.01, epochs=1000):
        self.lr = lr          # learning rate
        self.epochs = epochs  # number of gradient-descent passes

    def fit(self, X, y):
        n, features = X.shape
        self.weights = np.zeros(features)
        self.bias = 0

        for _ in range(self.epochs):
            # Forward pass: predictions with the current parameters
            y_pred = np.dot(X, self.weights) + self.bias
            error = y_pred - y

            # Gradients of the mean squared error w.r.t. weights and bias
            dw = (1 / n) * np.dot(X.T, error)
            db = (1 / n) * np.sum(error)

            # Gradient-descent update
            self.weights -= self.lr * dw
            self.bias -= self.lr * db

    def predict(self, X):
        return np.dot(X, self.weights) + self.bias

X_train = np.array([[1, 2], [2, 3], [3, 4]])
y_train = np.array([10, 15, 20])
X_test = np.array([[4, 5], [5, 6]])

lin_reg = LinearRegression(lr=0.01, epochs=1000)
lin_reg.fit(X_train, y_train)
print("Predictions:", lin_reg.predict(X_test))

Overview

Linear regression is one of the fundamental supervised learning algorithms. It is used to predict a continuous target variable (to regress numbers) from the input features.

It does this by modeling the relationship between the inputs and the target as a straight line, fitted so as to minimize the difference between predicted and actual values.
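The "difference" being minimized is usually the mean squared error (MSE). As a minimal sketch (the numbers below are made up for illustration):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean of the squared differences between targets and predictions
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([10.0, 15.0, 20.0])
y_pred = np.array([11.0, 14.0, 21.0])  # each prediction off by 1
print(mse(y_true, y_pred))  # 1.0
```

Gradient descent, as in the code above, nudges the parameters in the direction that lowers this quantity.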

If you remember from your high-school geometry classes, equation of a line is given by:

y = mx + c

Similarly, since linear regression also fits a straight line, its equation is given by:

y = wx + b

where w is the weight and b is the bias; both are learned from the data.
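With more than one input feature, w becomes a vector and wx becomes a dot product. A quick sketch of a single prediction (the parameter values here are illustrative, not the ones gradient descent actually converges to):

```python
import numpy as np

# Illustrative learned parameters for two features
w = np.array([2.5, 2.5])
b = 2.5

# One input with two features, as in the training data above
x = np.array([3, 4])

# Prediction: y = w · x + b
print(np.dot(w, x) + b)  # 20.0
```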

Update Rules

To be continued…