
Backward Propagation in Artificial Neural Network

Backpropagation is the workhorse of neural network training. It is a way to adjust the network's weights based on the error obtained in the previous iteration. Proper adjustment of the weights reduces the error and improves the generalizability of the model. "Backpropagation" is short for "backward propagation of errors", and it is the standard method for training artificial neural networks. The method computes the gradient of the loss function with respect to every weight in the network. The neural network backpropagation algorithms use…
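To make that gradient idea concrete, here is a minimal NumPy sketch of backpropagation for a tiny one-hidden-layer network. It is not taken from the original post; the toy data, layer sizes and learning rate are made-up placeholders.

```python
import numpy as np

# Toy data: 4 samples, 3 features, binary targets (made-up placeholders)
X = np.array([[0., 1., 0.], [1., 0., 1.], [1., 1., 0.], [0., 0., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for _ in range(1000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error w.r.t. every weight,
    # propagated from the output layer back to the hidden layer
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weights against the gradient
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X)
    b1 -= lr * d_hid.mean(axis=0)
```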

Ridge Regression (now with interactive graphs!!!)

So… Ridge Regression is a modified version of Linear Regression and a classic example of regularization using an L2 penalty. So to learn about Ridge Regression, you have to make sure you understand Linear Regression first. If you don’t, then click here. If you don’t know what Gradient Descent is, then click here. It is an absolute must that…
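As a quick illustration of the L2 penalty (my own sketch, not from the post, assuming scikit-learn is available): Ridge minimizes the squared error plus alpha times the sum of squared weights, which shrinks the coefficients compared to plain Linear Regression. The toy data and alpha value below are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Made-up toy data: y roughly follows 3*x plus noise
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=50)

# Ordinary Linear Regression minimizes the squared error alone;
# Ridge adds alpha * sum(w^2) to the loss, shrinking the weights.
lin = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("linear coef:", lin.coef_)    # unpenalized weight
print("ridge coef: ", ridge.coef_)  # pulled toward zero by the L2 penalty
```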

Gradient Descent (now with a little bit of scary maths)

Buckle up, Buckaroo, because Gradient Descent is gonna be a long one (and a tricky one too). The whole article will be a lot more “mathy” than most articles, as it tries to cover the concepts behind a Machine Learning algorithm called Linear Regression. If you don’t know what Linear Regression is, go through this article once. It would help…
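To give a small taste of the maths, here is a minimal sketch (mine, not from the article) of gradient descent fitting a straight line by repeatedly stepping against the gradient of the mean squared error; the data, learning rate and iteration count are made-up placeholders.

```python
import numpy as np

# Made-up data: y = 2*x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=1.0, size=100)

w, b = 0.0, 0.0   # start from arbitrary parameters
lr = 0.01         # learning rate (step size)

for _ in range(2000):
    y_hat = w * x + b
    # Gradients of the mean squared error with respect to w and b
    dw = 2 * np.mean((y_hat - y) * x)
    db = 2 * np.mean(y_hat - y)
    # Step downhill, i.e. against the gradient
    w -= lr * dw
    b -= lr * db

print(w, b)  # should land roughly near 2 and 1
```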


XGBoost

XGBoost stands for “Extreme Gradient Boosting”. XGBoost works only with numeric variables. It belongs to the family of boosting techniques, in which the training samples are selected more intelligently to classify observations. The algorithm trains in much the same way as GBT, except that it introduces a new method for constructing…
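For a feel of how it is typically used, here is a minimal sketch assuming the separate xgboost Python package (and scikit-learn) is installed; the data set and hyperparameters are placeholders of my own, not from the post.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# Made-up numeric data (XGBoost expects numeric features)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosted trees with placeholder hyperparameters
model = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```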


Gradient Boosting

In gradient-boosted decision trees, we combine many weak learners to come up with one strong learner. The weak learners here are the individual decision trees. The trees are connected in series, and each tree tries to minimize the error of the previous one. Because of this sequential connection, boosting algorithms are usually slow…
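Here is a from-scratch sketch of that sequential idea (my own, not from the post): each new tree is fit to the residual error of the ensemble built so far, and its prediction is added with a small learning rate. The data, tree depth and number of trees are arbitrary choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Made-up regression data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # start from a constant model
trees = []

for _ in range(100):
    residual = y - prediction                                    # error of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)   # weak learner fits the residual
    prediction += learning_rate * tree.predict(X)                # correct the ensemble a little
    trees.append(tree)

def predict(X_new):
    # The ensemble prediction is the constant start plus all the small corrections
    out = np.full(len(X_new), y.mean())
    for tree in trees:
        out += learning_rate * tree.predict(X_new)
    return out

print(predict(X[:3]))
```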


AdaBoost

AdaBoost is short for Adaptive Boosting. Generally, AdaBoost is used with short decision trees. What Makes AdaBoost Different? AdaBoost is similar to Random Forests in the sense that the predictions are taken from many decision trees. However, there are three main differences that make AdaBoost unique: first, AdaBoost creates a forest of stumps rather than…
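As a quick usage sketch (mine, not from the post, assuming scikit-learn): AdaBoostClassifier boosts depth-1 decision trees, i.e. stumps, by default; the data and number of estimators below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Made-up data; by default AdaBoostClassifier boosts depth-1 trees ("stumps")
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = AdaBoostClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```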


BOOSTING

The term ‘Boosting’ refers to a family of algorithms that convert weak learners into strong learners. Let’s understand this definition in detail by solving a problem: suppose that, given a data set containing images of cats and dogs, you were asked to build a model that can classify these images into two…


1. Random forest

Random forests are collections of trees, all slightly different. To say it in simple words: a random forest classifier builds multiple decision trees and merges their predictions to get a more accurate and stable prediction. It generally improves on the decisions of a single decision tree. A greater number of trees in the forest leads to higher accuracy and prevents…
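A minimal usage sketch, assuming scikit-learn (the data and settings are my own placeholders, not from the post):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Made-up data; n_estimators controls how many slightly different trees are built
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)
print(forest.predict(X[:5]))  # predictions are the majority vote of all the trees
```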


A. BAGGING

The term “bagging” comes from bootstrap + aggregating. The idea behind bagging is to combine a bunch of individual weak models that learn the same task, with the same goal, in parallel. Each weak model is trained on a random subset of the data sampled with replacement (bootstrapping), and then the models’ predictions are…
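Here is a from-scratch sketch of bootstrap + aggregating (my own, not from the post): each tree is trained on a resampled copy of the data, and the final prediction is a majority vote. The data and the number of models are arbitrary placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Made-up data and a small number of bagged trees
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
models = []

for _ in range(25):
    # Bootstrapping: sample the training set with replacement
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregating: majority vote across the individual trees
votes = np.stack([m.predict(X) for m in models])
bagged_prediction = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy:", (bagged_prediction == y).mean())
```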


2. Ensemble Learning Methods

“Unity is strength”. This old saying expresses pretty well the underlying idea behind the very powerful “ensemble methods”. Ensemble methods are techniques that combine the decisions from several base machine learning (ML) models into a single predictive model in order to achieve better results. The base model can be any machine learning algorithm, such as logistic…
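As one concrete example (mine, not from the post, assuming scikit-learn), a voting ensemble can combine the decisions of a few different base models; the base models, data and settings below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Made-up data; each base model can be any ML algorithm
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hard voting: the ensemble predicts the class most of the base models agree on
ensemble = VotingClassifier(estimators=[
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=3)),
    ("knn", KNeighborsClassifier()),
])
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```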