# Multiple Linear Regression: Explained, Coded Special Cases

*This article was first published by IBM Developer at developer.ibm.com, authored by Casper Hansen.*

Linear Regression is famously known for being a simple algorithm and a good baseline to compare more complex models against. In this article, we explore the algorithm, turn the math into code, and then run the code on a dataset to get predictions on new data.

## Table Of Contents

- What Is Linear Regression?
- Multiple Linear Regression
- Special Case 1: Simple Linear Regression
- Special Case 2: Polynomial Regression

## What Is Linear Regression?

The Linear Regression model consists of one equation of linearly increasing *variables* (also called *parameters* or *features*), along with a coefficient estimation algorithm called least squares, which attempts to find the best possible coefficient for each variable.

Linear regression models are known to be simple and easy to implement, because there is no advanced mathematical knowledge needed, except for a bit of linear algebra. For this reason, many people choose to use a linear regression model as a baseline model, to compare if another model can outperform such a simple model.
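As a minimal sketch of the idea, here is simple linear regression with one variable, using the closed-form least-squares estimates for the slope and intercept. The data values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical 1-D data that follows y = 2x + 1 exactly
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# Closed-form least-squares estimates:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

print(slope, intercept)  # 2.0 1.0
```

Because the toy data lies exactly on a line, least squares recovers the slope 2 and intercept 1 exactly; with noisy real data it returns the line minimizing the sum of squared residuals.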

## Multiple Linear Regression

Multiple linear regression is a model that can capture a linear relationship between multiple variables/features – assuming that there is one. The general formula for multiple linear regression looks like the following:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_i x_i + \varepsilon$$

- $\beta_0$ is known as the intercept
- $\beta_1$ to $\beta_i$ are known as coefficients
- $x_1$ to $x_i$ are the features of our dataset
- $\varepsilon$ is the residual (error) term

We can also represent the formula for linear regression in vector notation, writing the model for the whole dataset as $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, where $\mathbf{X}$ is the matrix of features (with a leading column of ones for the intercept). When representing the formula in vector notation, we have the advantage of using operations from linear algebra, which in turn makes it easier to code.
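The vector form above translates almost directly into NumPy. The sketch below, with hypothetical two-feature data, stacks a column of ones onto the feature matrix so the intercept is estimated along with the other coefficients, then solves the least-squares problem:

```python
import numpy as np

# Hypothetical dataset: 5 samples, 2 features, following y = 2 + x1 + 2*x2
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
y = np.array([7.0, 6.0, 13.0, 12.0, 17.0])

# Prepend a column of ones so beta_0 (the intercept) is estimated too
X_b = np.c_[np.ones(X.shape[0]), X]

# Solve the least-squares problem X_b @ beta ~= y; this is numerically
# more stable than explicitly inverting X_b.T @ X_b in the normal equation
beta, *_ = np.linalg.lstsq(X_b, y, rcond=None)

print(beta)  # [2. 1. 2.] -> intercept 2, coefficients 1 and 2

# Predict for a new sample [x1=2, x2=3], with the leading 1 for the intercept
x_new = np.array([1.0, 2.0, 3.0])
y_pred = x_new @ beta
print(y_pred)  # 10.0
```

`np.linalg.lstsq` solves the same minimization as the normal equation $\boldsymbol{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$, but via a decomposition that avoids forming the inverse explicitly.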

Source - Continue Reading: https://mlfromscratch.com/linear-regression-from-scratch/