Posts

Showing posts from May, 2021

Human Verification Using MNIST Dataset (with Code)

Introduction In this paper, we classify handwritten digits using a multilayer neural network. We use this classifier to build a human verification system: we ask the user to write a 3-digit number, then check that it was written correctly and validate the number entered. Since there are many ways to write some digits, and they can be written anywhere in the box, we use OpenCV to extract a properly sized image, use the ML model to predict the digit, and then use JavaScript to verify the number. For prediction we use a neural network with 3 hidden layers. The MNIST dataset provides 28×28 images in which each digit is padded by 4 pixels on every side. We achieved a 97.23% success rate classifying digits from the MNIST dataset. GitHub Live Demo Data Analysis and Visualization MNIST data consists of 70,000 handwritten digit images. We will follow the steps from preprocessing to predicting the digit. We will start by understanding how an ima...
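To make the classification step concrete, the following is a minimal sketch of a 3-hidden-layer network trained on MNIST, assuming TensorFlow/Keras; the layer widths and training settings here are illustrative assumptions, not the post's exact architecture.

# Minimal sketch of a 3-hidden-layer MNIST classifier (assumed Keras setup;
# layer sizes and hyperparameters are illustrative, not the post's exact ones).
import tensorflow as tf

# MNIST ships with Keras: 60,000 training and 10,000 test images (70,000 total).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(256, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 3
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

With a setup along these lines, test accuracy in the high-90s (comparable to the 97.23% reported above) is typical after a few epochs.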

Learning Optimization (SGD) Through Examples

Introduction The entire aim of optimization is to minimize the cost function. We will learn more about optimization in the later sections of the paper. Batch Gradient Descent Here we sum the gradient over all examples on every iteration while performing the updates for the weights or parameters, so every weight update requires a pass over the entire training set. The weights and bias are updated based on the gradient and the learning rate (η). This is mainly advantageous when there is a straight trajectory towards the minimum: it gives an unbiased estimate of the gradient and uses a fixed learning rate during training. It is disadvantageous, even with a vectorized implementation, because we have to go over the whole training set again and again; learning happens only after passing through all the data, even when some examples are redundant and contribute nothing to the update. Stochastic Gradient Descent Here, unlike Batch Gradient Descent, we update the parameters on each example, so learning happens on every example. It therefore converges more quickly than...
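As a concrete comparison of the two update rules, here is a small NumPy sketch on a hypothetical linear-regression problem (the data, learning rates, and names below are assumptions for illustration): batch gradient descent averages the gradient over every example before each update w := w - η∇L, while SGD applies the same rule after every single example.

# Hypothetical linear-regression example contrasting batch GD and SGD updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 examples, 3 features (made up)
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

eta = 0.1                               # learning rate (η), an assumed value

# Batch gradient descent: each update sums (averages) over ALL examples.
w = np.zeros(3)
for _ in range(100):
    grad = X.T @ (X @ w - y) / len(y)   # average gradient over the full set
    w -= eta * grad                     # w := w - η∇L

# Stochastic gradient descent: update on each example as it is seen.
w_sgd = np.zeros(3)
for _ in range(10):                     # a few passes over the data
    for i in rng.permutation(len(y)):
        grad_i = (X[i] @ w_sgd - y[i]) * X[i]  # gradient from one example
        w_sgd -= eta * grad_i

Note the trade-off the excerpt describes: the batch loop does one update per pass over all 100 examples, while the SGD loop makes 100 noisier updates per pass, which is why it tends to make progress sooner.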