Everything you need to know about Model Fitting in Machine Learning

What is Model Fitting?

Different types of model fitting

Model fitting is a measure of how well a machine learning model generalizes to data similar to the data on which it was trained. This generalization to new data is ultimately what allows us to use machine learning algorithms every day to make predictions and classify data. A good model fit is one that accurately approximates the output when provided with previously unseen inputs. Fitting a model is the process of adjusting its parameters to improve its accuracy. To generate a machine learning model, a machine learning algorithm is run on data for which the target variable is known ("labeled" data). The model's results are then compared with the real, observed values of the target variable, and the algorithm's parameters are adjusted to reduce the error and make the model more accurate. This fit-compare-adjust cycle is repeated several times until the model makes sufficiently accurate predictions.
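To make that loop concrete, here is a minimal sketch of the fit-compare-adjust cycle. It assumes scikit-learn, and the synthetic data, model choice, and epoch count are my own illustration rather than anything prescribed above:

```python
# Minimal sketch of the fit/compare/adjust loop, assuming scikit-learn.
# The data, model choice, and epoch count are illustrative, not prescriptive.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)   # known ("labeled") target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDRegressor(random_state=0)
for epoch in range(50):
    model.partial_fit(X_train, y_train)               # adjust the parameters
    train_error = mean_squared_error(y_train, model.predict(X_train))

print(f"training MSE after 50 passes: {train_error:.3f}")
print(f"test MSE: {mean_squared_error(y_test, model.predict(X_test)):.3f}")
```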

Poor performance in machine learning models is caused by either overfitting or underfitting the data; a well-balanced model produces more accurate outcomes. A model that is overfitted matches the training data too closely, while a model that is underfitted doesn't match it closely enough.

Why is Model Fitting important?

Understanding model fit is important for diagnosing the root cause of poor model accuracy; in fact, overfitting and underfitting are the two biggest causes of poor performance in machine learning algorithms. Model fitting is therefore the essence of machine learning: if our model doesn't fit our data correctly, the outcomes it produces will not be accurate enough to support practical decision-making. Model fitting is an automated process that ensures our machine learning models have the parameters best suited to solving our specific real-world business problems. Ideally, you want to select a model at the sweet spot between underfitting and overfitting. This is the goal, but it is very difficult to achieve in practice.

A brief note on underfitting –

A machine learning algorithm is said to underfit when it is unable to capture the relationship between the input and output variables accurately. It generates a high error rate on both the training set and unseen data; hence, underfitting destroys the accuracy of our machine learning model. It occurs when too little data is available to build the model, when the model is trained for too short a time, or when it is regularized too heavily. High bias and low variance are good indicators of underfitting. See the links at the end of this article to learn more about bias and variance for model selection.

Example: We can understand underfitting using the output of the linear regression model below:

Underfitted model

As we can see from the diagram above, the model is unable to capture the trend of the data points in the plot.
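The same failure is easy to reproduce in code. Here is a minimal sketch (assuming scikit-learn; the synthetic quadratic data is my own illustration): a straight line fit to clearly curved data leaves a high error on both the training set and unseen data:

```python
# Minimal sketch of underfitting, assuming scikit-learn: a straight line
# fit to quadratic data has high error on BOTH the training and test splits.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=300)    # quadratic relationship

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
line = LinearRegression().fit(X_tr, y_tr)

# Both errors are high: the model is too simple to capture the curve.
print("train MSE:", mean_squared_error(y_tr, line.predict(X_tr)))
print("test MSE: ", mean_squared_error(y_te, line.predict(X_te)))
```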

How to avoid underfitting –

  1. Increase the duration of training.
  2. Increase the number of features by performing feature engineering (see the sketch after this list).
  3. Remove noise from the data.
  4. Increase model complexity.
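As an illustration of items 2 and 4 (a sketch under the same scikit-learn assumptions as the example above, not a prescribed recipe), engineering polynomial features gives the model enough complexity to capture the curve:

```python
# Sketch of fixing underfitting via feature engineering / extra complexity,
# assuming scikit-learn: add a squared feature so the linear model can bend.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=300)    # same quadratic data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
curve = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
curve.fit(X_tr, y_tr)

# Both errors drop close to the noise level once the model can bend.
print("train MSE:", mean_squared_error(y_tr, curve.predict(X_tr)))
print("test MSE: ", mean_squared_error(y_te, curve.predict(X_te)))
```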

A brief note on overfitting –

A machine learning algorithm is said to overfit when the model performs well on the training data but poorly on the evaluation data. When this happens, the algorithm cannot perform accurately on unseen data, defeating its purpose. When a model is trained too intensively on the available data, it starts learning from the noise and inaccurate entries in the data set; the model then fails to categorize new data correctly because it has absorbed too much detail and noise. Low bias and high variance are good indicators of overfitting. The links at the end of this article cover how to reduce overfitting in more detail.

The chance of overfitting increases the more training we give our model: the longer we train it, the more likely we are to end up with an overfitted model. Overfitting is the main problem in supervised learning.

Example: We can understand overfitting using the output of the linear regression model below:

Overfitted model

From the graph above, we can see that the model attempts to pass through every data point. That may look efficient, but it isn't: a regression model should find the best-fit trend, and a curve that chases every individual point has no meaningful best fit, so it will generate errors when predicting new data.
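Here is a minimal sketch of the same failure in code (assuming scikit-learn; the polynomial degree and synthetic data are my own illustration): a high-degree polynomial fitted on a small sample passes almost exactly through the training points but mispredicts new ones:

```python
# Minimal sketch of overfitting, assuming scikit-learn: a degree-15
# polynomial on a small sample memorizes noise, so training error is tiny
# while test error is much larger.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 1))                  # small sample: easy to memorize
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=30)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
wiggly = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
wiggly.fit(X_tr, y_tr)

print("train MSE:", mean_squared_error(y_tr, wiggly.predict(X_tr)))  # very low
print("test MSE: ", mean_squared_error(y_te, wiggly.predict(X_te)))  # much larger
```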

How to avoid overfitting –

  1. Increase the amount of training data.
  2. Stop training early (early stopping).
  3. Apply Ridge or Lasso regularization (see the sketch after this list).
  4. Reduce the number of features, or use dropout.
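As an illustration of item 3 (a sketch under the same assumptions as the overfitting example; the alpha value is an arbitrary choice of mine, not a recommended setting), replacing plain least squares with Ridge shrinks the polynomial's coefficients and narrows the train/test gap:

```python
# Sketch of taming the overfitted degree-15 model with Ridge regularization,
# assuming scikit-learn; alpha=1.0 is an arbitrary illustrative strength.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=30)  # same data as above

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
tamed = make_pipeline(PolynomialFeatures(degree=15),
                      StandardScaler(),               # keep the penalty comparable
                      Ridge(alpha=1.0))
tamed.fit(X_tr, y_tr)

# Training error rises slightly, but test error improves: less memorized noise.
print("train MSE:", mean_squared_error(y_tr, tamed.predict(X_tr)))
print("test MSE: ", mean_squared_error(y_te, tamed.predict(X_te)))
```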

A brief note on a good-fit model –

Ideally, you want to select a model at the sweet spot between underfitting and overfitting, where the machine learning model makes predictions with as little error as possible. This is the goal, but it is very difficult to achieve in practice. As we train a model, the error on the training data goes down, and at first so does the error on the test data. But if we train the model for too long, its performance may degrade due to overfitting, as the model also learns the noise present in the dataset. At that point the error on the test dataset starts increasing, so the point just before the test error rises is the sweet spot, and we can stop there to obtain a good model. Two additional techniques can help you find this sweet spot in practice: resampling methods and a held-out validation dataset, as sketched below.
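Here is a minimal sketch of the validation-set idea (assuming scikit-learn; the model and epoch budget are my own illustration): track the error on held-out data after every pass and remember the epoch where it bottomed out:

```python
# Sketch of using a held-out validation set to find the stopping point,
# assuming scikit-learn; the model and epoch budget are illustrative.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=400)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=3)
model = SGDRegressor(random_state=3)

best_err, best_epoch = float("inf"), 0
for epoch in range(1, 101):
    model.partial_fit(X_tr, y_tr)                     # one more training pass
    val_err = mean_squared_error(y_val, model.predict(X_val))
    if val_err < best_err:                            # validation still improving
        best_err, best_epoch = val_err, epoch

print(f"validation MSE bottomed out at epoch {best_epoch}: {best_err:.3f}")
```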

To understand this, we have to look at how the performance of our model changes over time as it learns from the training dataset.

Underfitting vs. good fit vs. overfitting (credit: Kaggle)

Summary –

In this article, I tried to explain model fitting in simple terms. If you have any questions related to the post, put them in the comment section and I will do my best to answer them. Also, do check out interesting links related to this topic below.

  1. Overfitting vs Underfitting: A complete example – https://towardsdatascience.com/overfitting-vs-underfitting-a-complete-example-d05dd7e19765
  2. Overfitting and Underfitting in machine learning – https://www.youtube.com/watch?v=W-0-u6XVbE4
  3. Bias and Variance in-depth – https://www.youtube.com/watch?v=BqzgUnrNhFM&t=30s
  4. Ridge and Lasso Regression – https://www.youtube.com/watch?v=9lRv01HDU0s
  5. Model complexity – https://www.youtube.com/watch?v=HUb6VpGHv1w
