Posts

Showing posts with the label CLASSIFICATION

Human Verification Using MNIST Dataset (with Code)

Introduction In this paper, we will classify handwritten digits using a multilayer neural network and use this classifier to build a human verification system: we ask a human to write a 3-digit number, check whether it is written correctly, and validate the number entered by the user. Since there are many ways to write some digits, and they can be written anywhere in the box, we use OpenCV to extract a properly sized image, use the model to predict the digit, and then verify the number using JavaScript. For the prediction we use a neural network with 3 hidden layers. The MNIST dataset provides 28×28 images in which each digit is padded by 4 pixels in every direction. We achieved a 97.23% classification accuracy on the MNIST digits. GitHub Live Demo Data Analysis and Visualization MNIST consists of 70,000 handwritten digit images. We will follow the steps from preprocessing to predicting the digit. We will start by understanding how an ima...
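As a rough illustration of the pipeline described above, here is a minimal sketch (assuming OpenCV and TensorFlow/Keras; the helper function, layer sizes, and training settings are illustrative assumptions, not the post's exact code) that centres a hand-drawn digit into a 28×28 frame and trains a network with 3 hidden layers on MNIST.

```python
# Minimal sketch of the described pipeline (assumed libraries: OpenCV, TensorFlow/Keras).
import cv2
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense

def to_mnist_frame(gray_digit):
    """Crop a white-on-black digit to its bounding box, scale it to 20x20,
    and pad 4 pixels on every side to match the 28x28 MNIST layout."""
    ys, xs = np.nonzero(gray_digit)
    crop = gray_digit[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    crop = cv2.resize(crop, (20, 20), interpolation=cv2.INTER_AREA)
    return np.pad(crop, 4)  # 4 + 20 + 4 = 28 in each direction

# Train a small fully connected network with 3 hidden layers on MNIST.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = Sequential([
    Input(shape=(28, 28)),
    Flatten(),
    Dense(256, activation="relu"),
    Dense(128, activation="relu"),
    Dense(64, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```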

Loss Functions Part-2

This is a continuation of the previous post on loss functions used for classification. As we all know, for regression problems we use the least-squares error as the loss function. This gives a convex loss function, which we can optimize by finding its global minimum. But when it comes to logistic regression the situation changes completely: the least-squares error gives a non-convex loss function with more than one local minimum. We get a wavy curve because of the non-linear sigmoid function in the logistic regression hypothesis, so the loss has multiple local minima, which is bad for the gradient descent used to find the minimum. Cross-Entropy Loss This is the most common choice for classification problems. Cross-entropy loss increases as the predicted probability diverges from the actual label. An important property is that cross-entropy loss heavily penalizes predictions that are confident but wrong. We can't give equal weight to all false resul...
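To get a concrete feel for the "confident but wrong" penalty, here is a short plain-NumPy sketch (an illustration, not code from the original post) that evaluates the binary cross-entropy loss L = -[y log(p) + (1 - y) log(1 - p)] at a few predicted probabilities for a positive example.

```python
# Illustrative only: binary cross-entropy grows sharply for confident wrong predictions.
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """L = -[y*log(p) + (1-y)*log(1-p)], with p clipped to avoid log(0)."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# True label is 1; watch the loss grow as the predicted probability drops.
for p in [0.9, 0.6, 0.1, 0.01]:
    print(f"p={p:4}: loss={binary_cross_entropy(1, p):.3f}")
# p= 0.9: loss=0.105   (confident and right -> small penalty)
# p= 0.6: loss=0.511
# p= 0.1: loss=2.303
# p=0.01: loss=4.605   (confident but wrong -> large penalty)
```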