Loss Functions Part-2

This is a continuation of Loss Functions Part - 1, covering the loss functions used for classification. As we all know, for regression problems we use the least square error as the loss function. This gives us a convex loss function, which we can optimize by finding its global minimum. But when it comes to logistic regression the situation changes completely: the least square error gives us a non-convex loss function with more than one local minimum. The curve becomes wavy because of the non-linear sigmoid function used in the logistic regression hypothesis, and these multiple local minima are bad for gradient descent, which is used to find the minimum.

Cross-Entropy Loss

This is the most common choice for classification problems. Cross-entropy loss increases as the predicted probability diverges from the actual label. An important aspect of this is that cross-entropy loss heavily penalizes predictions that are confident but wrong. We can’t give equal weight to all false resul...
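As a quick illustration of that penalty, here is a minimal NumPy sketch of binary cross-entropy (the function name, the clipping epsilon, and the sample numbers are my own assumptions for this example, not code from the post):

import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 so log() stays finite.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # Average of -[y*log(p) + (1-y)*log(1-p)] over all samples.
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.7])
print(binary_cross_entropy(y_true, y_pred))   # ~0.20: good predictions, small loss

# A confident but wrong prediction is penalized heavily:
print(binary_cross_entropy(np.array([1]), np.array([0.01])))  # ~4.61

Note how a single prediction of 0.01 for a true label of 1 produces a loss far larger than four reasonably good predictions combined.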

Loss Functions Part - 1

Introduction

First, let us understand how the machine learns from the given data. It is actually learning the relationships within the data. The machine learns in three steps: first it predicts an output, and this first prediction is mostly random; then it calculates the error; and then it learns from that error. This process repeats many times, and the error keeps reducing. The cost function is also known as the loss function. If the cost function is convex, it is easier to calculate the error and minimize it, since any local minimum is also the global minimum. But not all cost functions are convex in nature. We will understand the error functions step by step by looking at examples and observing their graphs.

Loss Functions

Why do we need loss functions?

A loss function measures how good the prediction/outcome made by the model is, so it is a measure of how good the model is.

Is the cost function the same as the loss function?

In our day-to-day ...
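To make the predict, measure the error, then update loop concrete, here is a minimal sketch (the toy data, the single-weight linear model, and the learning rate are all assumptions made for illustration) using the mean squared error as the loss:

import numpy as np

# Toy data generated by y = 2 * x, so the ideal weight is 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0      # initial guess, effectively random
lr = 0.01    # learning rate

for step in range(100):
    y_pred = w * x                        # 1. predict an output
    loss = np.mean((y_pred - y) ** 2)     # 2. calculate the error (MSE)
    grad = np.mean(2 * (y_pred - y) * x)  # 3. gradient of MSE w.r.t. w
    w -= lr * grad                        # 4. update and repeat

print(w)  # approaches 2.0 as the loss shrinks

Because MSE for this linear model is convex in w, gradient descent reliably drives the weight toward the single global minimum, which is exactly the property the convex-versus-non-convex discussion above is about.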