A Starter's Guide to Neural Networks in Python

A neural network is exactly what its name says: a network of neurons used to process information. To create neural networks, scientists looked at the most advanced data-processing machine of the time, the brain. Our brains process information using networks of neurons: each neuron receives an input, processes it, and sends electric signals to the neurons it is connected to. Through this bio-mimicry, we can apply the architecture of our brains to further the field of artificial intelligence.

After understanding the structure of neural networks, the next question that comes to mind is how a neural network knows which weights and biases to use. Neural networks typically start off with random weights and biases and then train themselves, iteration after iteration, until they reach peak performance. They do this by measuring how much error they currently make, a quantity called the cost of the neural network. The cost is calculated by taking the difference between the network's prediction and the desired result for each example, (target - output), and summing the squares of those error values.
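As a quick sketch of that calculation (the prediction and target values below are made up purely for illustration), the cost can be computed with numpy:

    import numpy as np

    # Illustrative values: what the network predicted vs. what we wanted.
    predictions = np.array([0.8, 0.2, 0.6])
    targets = np.array([1.0, 0.0, 1.0])

    # Cost = sum of the squared errors, (target - output)^2 per example.
    errors = targets - predictions
    cost = np.sum(errors ** 2)
    print(cost)  # 0.24, up to floating-point rounding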

The entire goal of training the neural network is to minimize the cost. Neural networks do this using a process called backpropagation. It sounds like a complicated word, but the idea is quite simple. As mentioned earlier, forward propagation is when you run information through a neural network to produce a result. Backward propagation works in the opposite direction: starting at the output layer, the error is propagated backward through the network and used to adjust the weights and biases.
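As a rough sketch of a single backpropagation step (one neuron with a sigmoid activation; the inputs, weights, and target here are made-up values), each weight is nudged in proportion to its share of the error:

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Illustrative single training example.
    inputs = np.array([0.0, 1.0, 1.0])
    weights = np.array([0.5, -0.2, 0.1])
    target = 1.0

    # Forward propagation: weighted sum squashed through the sigmoid.
    output = sigmoid(np.dot(inputs, weights))

    # Backward propagation: the error, scaled by the sigmoid's slope
    # at the output, flows back to each weight through its input.
    error = target - output
    weights += inputs * error * output * (1 - output)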

We can create a NeuralNetwork class in Python to train the neuron to give accurate predictions; the class will also hold a few other helper functions. Even though we won't use a neural network library for this simple example, we will import the numpy library to assist with the calculations.

The library provides the following four important functions:

  • exp: for calculating the natural exponential
  • array: for creating a matrix
  • dot: for multiplying matrices
  • random: for generating random numbers. Note that we'll seed the random number generator so that it produces the same values on every run.
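A minimal skeleton for such a class might look like the following; the class name and the synaptic_weights attribute are our own naming choices for this sketch, not a fixed API:

    import numpy as np

    class NeuralNetwork:
        def __init__(self):
            # Seed the generator so the random starting weights are
            # the same on every run.
            np.random.seed(1)

            # A single neuron with 3 inputs: a 3x1 matrix of weights
            # drawn from the range -1 to 1, with mean 0.
            self.synaptic_weights = 2 * np.random.random((3, 1)) - 1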

Sigmoid function:

This function maps any value to a value between 0 and 1, and we will use it to normalize the weighted sum of the inputs. Conveniently, the derivative of the sigmoid can be computed directly from its output: if the output is x, its derivative is x * (1 - x).
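As a sketch, both the sigmoid and its derivative are one-liners with numpy; note that sigmoid_derivative assumes its argument is already a sigmoid output, matching the text above:

    import numpy as np

    def sigmoid(x):
        # Maps any real value into the range (0, 1).
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(x):
        # x is assumed to be a sigmoid output, so the slope is x * (1 - x).
        return x * (1 - x)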

Training the model:

This is the stage where we teach the neural network to make accurate predictions. Every input has a weight, either positive or negative, which means that an input with a large positive or a large negative weight will influence the resulting output more strongly. Remember that we initially assigned every weight a random number.

The following steps illustrate the procedure:

  • We took the inputs from the training dataset, adjusted them by their weights, and passed them through a method that computed the output of the ANN (forward propagation).
  • We computed the back-propagated error rate: the difference between the neuron's predicted output and the expected output from the training dataset.
  • Depending on the magnitude of the error, we made small weight adjustments using the Error Weighted Derivative formula.
  • We repeated this process an arbitrary 15,000 times; a larger number of iterations generally yields higher accuracy, though it does not guarantee it. The whole training set is processed at once in every iteration, as the sketch below shows.
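Putting the pieces together, here is a sketch of the full training loop described above; the training set, the single-neuron architecture, and the test input are illustrative choices:

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(x):
        return x * (1 - x)

    # Illustrative training set: 4 examples with 3 inputs each,
    # and the expected outputs as a column vector.
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T

    np.random.seed(1)
    weights = 2 * np.random.random((3, 1)) - 1

    for _ in range(15000):
        # Forward propagation over the whole training set at once.
        outputs = sigmoid(np.dot(training_inputs, weights))

        # Back-propagated error: difference between target and prediction.
        error = training_outputs - outputs

        # Error Weighted Derivative: scale the error by the sigmoid's
        # slope and distribute the adjustment back to each weight.
        adjustments = np.dot(training_inputs.T, error * sigmoid_derivative(outputs))
        weights += adjustments

    # After training, the prediction for a new input should be close to 1.
    print(sigmoid(np.dot(np.array([1, 0, 0]), weights)))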