Dennis O'Keeffe
4 min read · Oct 5, 2021

This is Day 32 of the #100DaysOfPython challenge.

This post will demonstrate how to use regression with regularization using scikit-learn.

We will be working from the code written in part three.

Source code can be found on my GitHub repo okeeffed/regression-with-scikit-learn-part-four.

Prerequisites

  1. Familiarity with Conda, the package, dependency and virtual environment manager. A handy additional reference for Conda is the blog post “The Definitive Guide to Conda Environments” on “Towards Data Science”.
  2. Familiarity with JupyterLab. See here for my post on JupyterLab.
  3. These projects also run Python notebooks in VSCode with the Jupyter Notebooks extension. If you do not use VSCode, it is expected that you know how to run notebooks (or alter the method for what works best for you).

Getting started

Let’s first clone the code from part three into the regression-with-scikit-learn-part-four directory.

We can now begin adding code to our notebook at docs/linear_regression.ipynb.
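If you are not following along from part three, the setup the cells below rely on looks roughly like this. It is a sketch rather than the exact code from the earlier post: the Boston housing data, the variable names (X_train, y_train and so on) and the split parameters are assumptions.

```python
# Sketch of the setup assumed from part three (not the exact original cell).
# Note: load_boston was removed in scikit-learn 1.2; this assumes an older version.
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

boston = load_boston()
X, y = boston.data, boston.target          # target is the median house price
feature_names = boston.feature_names       # used later when plotting coefficients

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42   # split parameters are assumptions
)
```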

What is Regularized Regression?

“Regularization” is a method of penalizing a model in order to prevent overfitting. The penalty is a function of the model’s complexity: the more complex the model, the higher the penalty.

“Coefficient estimates are constrained to zero. The size (or magnitude) of the coefficient, as well as the error term, are penalized.” — statisticshowto.com

Two commonly used methods of regularization are:

  1. Ridge Regression
  2. Lasso Regression

We will be exploring both today.
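Both methods add a penalty on the size of the coefficient vector w to the ordinary least-squares loss. Up to the scaling scikit-learn applies internally, the objectives look like this:

$$\text{Ridge:}\quad \min_w \;\lVert y - Xw\rVert_2^2 + \alpha\,\lVert w\rVert_2^2$$

$$\text{Lasso:}\quad \min_w \;\tfrac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\,\lVert w\rVert_1$$

The squared (L2) penalty shrinks coefficients smoothly towards zero, while the absolute-value (L1) penalty can drive some of them exactly to zero, a property we come back to in the feature-selection section below.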

Ridge Regression

Ridge regression is a model-tuning method used to analyze data that suffers from multicollinearity.

Multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. — Wikipedia

When multicollinearity occurs, least-squares estimates are unbiased, but their variances are large, so our predicted values can end up far away from the actual values.

By using regularization, we can reduce the variance of our predictions. It shrinks the parameters to counter multicollinearity, and it will also reduce the model complexity by shrinking the coefficients.

To use Ridge regression in our work, we need to pick the value of alpha ourselves. It is similar to picking the value of k for k-Nearest Neighbors.

The process of finding the alpha that works best is known as Hyperparameter Tuning.

The value of alpha controls model complexity. When alpha = 0, we get back OLS (which can lead to overfitting). When alpha is large, large coefficients are heavily penalized, which can lead to underfitting.
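To see that trade-off in practice, we can score Ridge over a handful of alpha values. This is just a quick sketch using the split from part three; the grid of values is arbitrary and not from the original notebook.

```python
from sklearn.linear_model import Ridge

# Score a Ridge model on the held-out data for a range of alpha values.
for alpha in [0.001, 0.01, 0.1, 1, 10, 100]:
    ridge = Ridge(alpha=alpha)
    ridge.fit(X_train, y_train)
    print(f"alpha={alpha}: R^2={ridge.score(X_test, y_test):.4f}")
```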

In a new cell, let’s add the following:
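The original cell is embedded from the companion repo rather than reproduced inline; a minimal sketch of it is below. The alpha value of 0.1 is an assumption, so check the repo for the exact value used.

```python
from sklearn.linear_model import Ridge

# Fit Ridge on the training split and report R^2 on the test split.
ridge = Ridge(alpha=0.1)  # alpha is an assumption, not necessarily the repo's value
ridge.fit(X_train, y_train)
print(ridge.score(X_test, y_test))
```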

By running our code, we can see that the R^2 value is 0.6996938275127313.

Lasso Regression

Lasso regression is a type of linear regression that uses shrinkage. Shrinkage is where data values are shrunk towards a central point, like the mean. Lasso is very useful when you have high levels of multicollinearity or when you want to automate certain parts of model selection, like variable selection/parameter elimination.

Applying Lasso regression is similar to Ridge. In a new cell, let's add the following:
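Again, the exact cell lives in the repo; a minimal sketch follows, with the same caveat that alpha=0.1 is an assumption.

```python
from sklearn.linear_model import Lasso

# Fit Lasso on the same split and report R^2 on the test split.
lasso = Lasso(alpha=0.1)  # alpha is an assumption
lasso.fit(X_train, y_train)
print(lasso.score(X_test, y_test))
```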

Scoring our split gives us an R^2 value of 0.5950229535328551.

Lasso for feature selection

One of the important aspects of Lasso regression is using it to select important features of a dataset.

It does this because it tends to shrink less important features down to zero. Coefficients that are not shrunk to zero are considered more important by the algorithm.

To demonstrate, let’s add one last cell:
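The cell below is a sketch of how that plot can be produced: fit Lasso on the full dataset and chart the coefficient assigned to each feature. Here feature_names comes from the setup sketch above, and the alpha value is again an assumption.

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso

# Fit Lasso on the full dataset and inspect the coefficient for each feature.
lasso = Lasso(alpha=0.1)            # alpha is an assumption
lasso_coef = lasso.fit(X, y).coef_

# Features whose coefficients are shrunk to zero are the least important.
plt.bar(range(len(feature_names)), lasso_coef)
plt.xticks(range(len(feature_names)), feature_names, rotation=60)
plt.ylabel("Coefficient")
plt.show()
```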

The output is as follows:

Lasso coefficients assigned to features

This chart works as a great sanity check: it confirms what we worked out earlier, that the number of rooms is an important feature in predicting the price.

Summary

Today’s post looked into both Ridge and Lasso regression, as well as how to apply those methods using scikit-learn.

Resources and further reading

  1. Conda
  2. JupyterLab
  3. Jupyter Notebooks
  4. “The Definitive Guide to Conda Environments”
  5. okeeffed/regression-with-scikit-learn-part-four
  6. Multicollinearity — Wikipedia
  7. Regularized Regression — statisticshowto.com

Photo credit: chilis

Originally posted on my blog. To see new posts without delay, read the posts there and subscribe to my newsletter.

I write content for AWS, Kubernetes, Python, JavaScript and more. To view all the latest content, be sure to visit my blog and subscribe to my newsletter. Follow me on Twitter.
