Are you fascinated by the remarkable artificial intelligence technology that has exploded in popularity in recent years?
You will learn the following things throughout this article:
- What exactly are neural networks?
- How do they work?
- What are some examples of real-world applications of neural networks?
- What is the best way to begin learning neural networks?
What exactly are neural networks?
Before we move on to neural networks, let's take a closer look at the neuron, because understanding this building block is crucial.
Ever wondered how our brain is able to learn so many new things and make the right decisions? You can instantly identify objects around you, distinguish different sounds, and recognize people you know. How is that possible?
Our brain has three main sections: the hindbrain, the midbrain, and the forebrain. For us, the key part is the forebrain, where all this magical stuff happens. It is composed of densely packed layers of neurons whose branches are interconnected. This is where the processing takes place once information from our senses arrives in the form of electrical signals. The human brain contains about 86 billion neurons, each connected to 10,000 or more others, and by some estimates it can store up to 1,000 terabytes of data.
Neurons are nerve cells that transmit and process the information received from our senses. By connecting with one another, they form a vast network. Dendrites receive signals from preceding neurons and pass them to the soma, where the electrical impulse is processed. The axon then carries the signal from the soma to the synapse, which in turn passes it on to the dendrites of the neurons it connects to. In this way, a complex web of connections is formed in the brain.
Artificial neural networks (ANNs) try to mimic the way the human brain works by processing real-world data on computer processors. A neural network is composed of layers of interconnected artificial neurons that receive a set of inputs and weigh them; depending on the data, the inputs may be integers or floating-point numbers. Each artificial neuron is simply a mathematical function refined by engineers and scientists over the years: after a few mathematical operations, it outputs an activation that is passed on to the next layer, much like a synapse passes signals between biological neurons.
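To make this concrete, here is a minimal sketch (in Python with NumPy) of what a single artificial neuron computes: it multiplies each input by a weight, adds a bias, and passes the result through an activation function. The function names, weights, and example values are illustrative assumptions, not something taken from this article.

```python
import numpy as np

def sigmoid(z):
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """Weigh the inputs, add a bias, and apply the activation."""
    z = np.dot(inputs, weights) + bias
    return sigmoid(z)

# Example: three numerical input features (values are made up)
x = np.array([0.5, 1.2, 3.0])
w = np.array([0.4, -0.2, 0.1])   # one weight per input
b = 0.05                          # bias term
print(neuron(x, w, b))            # a single activation value
```

Real networks stack many such neurons in layers, but every one of them follows this same weigh-sum-activate pattern.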
I hope you now know what a neural network is and have a basic understanding of it. You also learned about our wonderful brain, which is capable of doing all of this on its own. Now, let's have a look at how these neural networks work. Keep reading, because it's fascinating.
How does a neural network work?
In the image above, you can see a neural network with a single hidden layer being fed by the input layer. The inputs are passed on to a hidden layer of nodes and then to an output layer, which produces our final output ŷ. Let's talk about each of these layers.
Input layer: The input layer is the first layer of a neural network and consists of passive neurons. These neurons feed the raw data into the system for processing; they are considered passive because they do not perform any mathematical calculations on the data. The input layer takes our data as a vector of numerical values X1, X2, X3, which is then fed into the hidden layers, which we'll discuss next.
For a better understanding of the input layer, see the image below.
As seen in the preceding image, our input data set is fed to the input layer as a vector of numerical values from m1 to m9; the short sketch below shows what such a vector might look like in code. After that, let's talk about the hidden layer.
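A minimal sketch, assuming nine numerical features m1 to m9 with invented values:

```python
import numpy as np

# Hypothetical input vector for the input layer: one numerical
# value per feature, m1..m9 (the values below are made up).
m = np.array([5.1, 3.5, 1.4, 0.2, 7.0, 3.2, 4.7, 1.4, 6.3])

# The input layer is passive: it simply hands this vector to the
# first hidden layer without transforming it.
print(m.shape)  # (9,) -> one entry per input feature
```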
Hidden layer: This is the core layer of our neural network, where all of the critical mathematical work takes place. I'll attempt to explain it as simply as possible. A neural network can have numerous hidden layers, but in our case we are considering only one. Every neuron or node in these layers runs a mathematical operation, applying weights and biases to our input values. Let us first understand weight and bias.
Weight: A weight allows a neuron to focus on or prioritise an input feature. Let's look at an example to help you understand. Assume you wish to identify a fish species; the input features could include fish size, colour, fin size, fin shape, tail fin shape, and so on.
After conducting your analysis, you discovered that the shape of the fin is a critical feature in identifying the type of fish. Naturally, you now want to emphasize this input feature over all others. To accomplish this, we assign each input a weight and multiply the weight by the input value received from the input layer.
Remember that the value assigned to a weight is only an initial value; it will be adjusted as our neural network learns. Weights are allocated to all input features, and the ones we wish to emphasise receive higher weight values. This keeps the most important input features prominent, so they have a greater influence on the neural network's output; the sketch after this paragraph shows the idea applied to the fish example. Before moving on to bias, let's first understand the activation function, as bias will make more sense once we do.
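As a rough illustration (the feature encodings and weight values are invented for this sketch, not something the article specifies), here is how weighting might look in code, with the fin-shape feature given a larger weight than the others:

```python
import numpy as np

# Hypothetical numerical encodings of the fish features
# (size, colour, fin size, fin shape, tail fin shape).
features = np.array([0.8, 0.3, 0.5, 0.9, 0.4])

# Initial weights: fin shape (index 3) gets a larger weight because
# our analysis suggested it matters most. These are only starting
# values; training would adjust them.
weights = np.array([0.1, 0.05, 0.1, 0.6, 0.15])

# Each input is multiplied by its weight and the results are summed,
# so the heavily weighted feature dominates the neuron's input.
weighted_sum = np.dot(features, weights)
print(weighted_sum)
```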