6 Types of Artificial Neural Networks currently being Used in Machine Learning

Artificial neural networks (ANNs) are computational models that mimic the behavior of the human nervous system.

Artificial neural networks come in a variety of shapes and sizes. Each is built from mathematical operations and a set of parameters that together determine the output. Let’s have a look at a few neural networks:

1. Feedforward Neural Network – Artificial Neuron:

This is one of the most basic types of ANN: the data or input moves in only one direction. The information enters through the input nodes and leaves through the output nodes. Hidden layers may or may not be present in this neural network. In simple terms, it uses a classifying activation function to produce a forward-propagated wave, with no backpropagation.

A single-layer feed-forward network is shown below. The weighted sum of the inputs is calculated and passed to the output. If the sum exceeds a particular threshold value (typically 0), the neuron fires with an active output (usually 1); if it does not, the deactivated value is emitted (usually -1).

[Image: Machine Learning 1 - Nigel Quadros]
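As a rough sketch of that thresholding rule (the inputs, weights, and threshold here are made-up illustrative values, not taken from the article), a single neuron can be written as:

```python
import numpy as np

def feedforward_neuron(inputs, weights, threshold=0.0):
    """Single-layer feedforward step: weighted sum, then a hard threshold."""
    total = np.dot(inputs, weights)           # weighted sum of the inputs
    return 1 if total > threshold else -1     # fire (+1) or emit the deactivated value (-1)

# Illustrative values only: two inputs with hand-picked weights.
x = np.array([0.7, 0.3])
w = np.array([0.6, -0.4])
print(feedforward_neuron(x, w))   # -> 1, since 0.7*0.6 + 0.3*(-0.4) = 0.30 > 0
```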

Feedforward neural networks are used in computer vision and speech recognition to identify target classes that are difficult to separate. These networks are easy to maintain and cope well with noisy data. This paper describes how to use a feed-forward neural network for X-ray image fusion, the practice of overlaying two or more images based on their edges. Here’s a visual representation of what I’m talking about.

[Image: Machine Learning 2 - Nigel Quadros]

2. Radial Basis Function Neural Network:

Radial basis functions take into account the distance between a point and a center. An RBF network has two processing layers: the hidden layer applies a radial basis function to the distance between the input and each neuron’s center, and the output layer combines the outputs of these hidden units to produce the final result.

The diagram below depicts the distance calculated from the center to a point in the plane, analogous to a circle’s radius. Distance measures other than the Euclidean distance can also be used.

To classify points into distinct categories, the model uses the maximum reach, i.e. the radius of the circle. If a new point lies within or near that radius, there is a good chance it will be assigned to that class. There can be a transition when moving from one region to another, which is controlled by the beta parameter (the spread of the radial basis function).

[Image: Machine Learning 3 - Nigel Quadros]
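Here is a minimal sketch of that idea, assuming a Gaussian radial basis function and hand-picked class centers and beta (spread) values; none of these numbers come from the article.

```python
import numpy as np

def rbf(x, center, beta=1.0):
    """Gaussian radial basis function of the Euclidean distance to a center."""
    return np.exp(-beta * np.sum((x - center) ** 2))

# Hypothetical centers for two classes (illustrative values only).
centers = {"class_A": np.array([0.0, 0.0]), "class_B": np.array([5.0, 5.0])}

def classify(point, beta=1.0):
    # The point goes to the class whose center gives the strongest activation,
    # i.e. whose "circle" the point falls closest to.
    activations = {label: rbf(point, c, beta) for label, c in centers.items()}
    return max(activations, key=activations.get)

print(classify(np.array([0.5, 1.0])))   # -> 'class_A'
print(classify(np.array([4.0, 5.5])))   # -> 'class_B'
```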

This type of neural network has been used in power restoration systems. Power systems have grown in scale and complexity, and both factors raise the chances of a significant power outage. After a blackout, power must be restored as quickly and reliably as possible. The implementation of RBF networks in this area is described in this publication.

The following is the typical order of power restoration:

  • The restoration of power to critical consumers in the communities is the top priority. These customers provide health care and safety services to the general public, and restoring electricity to them first allows them to assist many more people.
  • Health care facilities, school boards, key municipal infrastructure, and police and fire services are all essential consumers.
  • Next, concentrate on key transmission lines and substations that serve a large number of customers.
  • Prioritize repairs that will return the greatest number of customers to service as soon as possible.
  • After that, restore power to smaller communities, residences, and businesses.

The order of the power restoration system is depicted in the diagram below. According to the illustration, the fault at point A on the transmission line receives top priority: with this line down, no power can be restored to any of the houses. Next, the fault at B on the main distribution line running out of the substation is addressed, since it affects houses 2, 3, 4, and 5. After that, the line at C, which affects houses 4 and 5, is repaired. Finally, the service line from D to house 1 is repaired. A small code sketch of this prioritization rule follows the diagram.

[Image: Machine Learning 4 - Nigel Quadros]
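As a toy illustration of the prioritization rule above (the fault labels and customer counts are invented for this example, and a real RBF-based restoration system would be far more involved), repairs can simply be ordered by how many customers each one returns to service:

```python
# Hypothetical faults and the number of customers each repair restores.
faults = {
    "A (transmission line)": 5,
    "B (main distribution line)": 4,
    "C (distribution line)": 2,
    "D (service line)": 1,
}

# Repair order: largest number of restored customers first.
repair_order = sorted(faults, key=faults.get, reverse=True)
print(repair_order)   # -> A, then B, then C, then D
```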

3. Kohonen Self-Organizing Neural Network:

The goal of a Kohonen map is to map input vectors of arbitrary dimension onto a discrete map of neurons. The map must be trained so that it organizes the training data on its own.

The map is either one- or two-dimensional. When training the map, the position of each neuron remains fixed, but its weights change depending on the input values. The first phase of the self-organization process initializes each neuron’s weights to small values and presents an input vector.

In the second phase, the ‘winning neuron’ is the one closest to the point, and the neurons connected to it likewise move towards the point, as shown in the diagram below. The Euclidean distance is used to compute the distance between the point and the neurons; the neuron with the shortest distance wins. Over the iterations, all of the points are clustered, and each neuron comes to represent a different kind of cluster. This is the gist of the Kohonen neural network’s structure.

[Image: Machine Learning 5 - Nigel Quadros]

[Image: Machine Learning 6 - Nigel Quadros]
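A bare-bones sketch of one training step, assuming a small one-dimensional map, a fixed learning rate, and a neighborhood limited to the winner’s immediate neighbors (all values are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((10, 2))        # 10 neurons on a 1-D map, 2-D input vectors

def train_step(x, weights, lr=0.1):
    # Winning neuron: smallest Euclidean distance to the input point.
    distances = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(distances))
    # Move the winner and its immediate neighbors towards the point.
    for i in range(max(0, winner - 1), min(len(weights), winner + 2)):
        weights[i] += lr * (x - weights[i])
    return winner

for _ in range(100):                 # iterate over random training points
    train_step(rng.random(2), weights)
```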

To recognize patterns in data, the Kohonen neural network is employed. Its use in medical analysis to cluster data into distinct groups is a good example: with excellent accuracy, a Kohonen map was able to categorize patients as having glomerular or tubular disease. Here’s a full explanation of how the Euclidean distance technique is used to perform this classification mathematically. A comparison of a healthy and a diseased glomerulus is shown in the image below.

[Image: Machine Learning 7 - Nigel Quadros]

4. Recurrent Neural Network (RNN) – Long Short-Term Memory:

The Recurrent Neural Network is based on the notion of preserving a layer’s output and feeding it back into the input to help forecast the layer’s outcome.

The first layer is built in the same way as a feed-forward neural network, computing the weighted sum of the inputs. Once this is computed, the recurrent process begins: from one time step to the next, each neuron remembers some information from the previous time step.

As a result, each neuron performs its computations as if it were a memory cell. In this process, the network must carry out forward propagation while remembering whatever information it will need later. If the prediction is incorrect, the learning rate and error correction are used to make small changes, so that backpropagation gradually works towards the correct prediction. A basic recurrent neural network is represented below.

[Image: Machine Learning 8 - Nigel Quadros]
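A minimal sketch of that ‘memory cell’ behavior, using a plain recurrent step with made-up dimensions and randomly initialized weights (not the LSTM gating used in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4
W_x = rng.standard_normal((hidden_size, input_size)) * 0.1   # input weights
W_h = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # recurrent weights
b = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: combine the current input with the remembered state."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

h = np.zeros(hidden_size)                        # initial memory
for x in rng.standard_normal((5, input_size)):   # a sequence of 5 inputs
    h = rnn_step(x, h)                           # each step reuses the previous state
print(h.shape)   # (4,)
```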

Text-to-speech (TTS) conversion models are one application of recurrent neural networks. Deep Voice, developed at Baidu’s Artificial Intelligence Lab in California, is the subject of this research. It was inspired by the traditional text-to-speech pipeline, but replaced all of its components with neural networks. The text is first converted into phonemes, which are then turned into speech by an audio synthesis model. Tacotron 2, which produces human-like speech from text, uses RNNs as well. Below is some information about it.

[Image: Machine Learning 9 - Nigel Quadros]

5. Convolutional Neural Network:

Convolutional neural networks are comparable to feed-forward neural networks in that the weights and biases of the neurons are learnable. They are used in signal and image processing, and in computer vision they have largely surpassed classical, hand-engineered approaches such as those in OpenCV.

A ConvNet is a neural network in which the input features are processed in small patches, much like a sliding filter. This helps the network remember an image in parts and compute its operations piecewise. In these calculations the image is typically converted from RGB or HSI to grayscale; variations in pixel value then help detect edges, allowing images to be sorted into several categories.

[Image: Machine Learning 10 - Nigel Quadros]
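A rough sketch of the filtering step described above: converting an RGB image to grayscale and sliding a small edge-detection kernel over it patch by patch (the ‘image’ here is random data, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))                       # stand-in for an RGB image
gray = rgb @ np.array([0.299, 0.587, 0.114])      # simple luminance conversion

kernel = np.array([[-1, -1, -1],                  # a basic edge-detection filter
                   [-1,  8, -1],
                   [-1, -1, -1]])

def convolve2d(image, kernel):
    """Slide the kernel over the image patch by patch (no padding, stride 1)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edges = convolve2d(gray, kernel)
print(edges.shape)   # (6, 6)
```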

ConvNets are used in signal processing and image classification, among other things. Because of their accuracy in image classification, convolutional neural networks dominate computer vision approaches. Image analysis and recognition techniques are applied to open-source satellite imagery such as Landsat to extract agricultural and meteorological information and forecast the future growth and yield of a given piece of land.

[Image: Machine Learning 11 - Nigel Quadros]

6. Modular Neural Network:

Modular neural networks are made up of a number of separate networks, each of which acts independently and contributes to the final result. Each sub-network has its own set of inputs and performs its own sub-task; the networks do not interact or communicate with one another while completing their tasks.

A modular neural network has the advantage of breaking a huge computational process down into smaller components, reducing complexity. This decomposition reduces the number of connections and eliminates interaction between the sub-networks, resulting in faster processing. The overall processing time, however, still depends on the number of neurons involved in computing the result.

Here’s a visual representation of what I’m talking about.

[Image: Machine Learning 12 - Nigel Quadros]
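A toy sketch of the idea: two independent sub-networks, each working only on its own slice of the input, with a simple combiner producing the final result (the architecture and sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subnetwork(n_in, n_out):
    """A tiny one-layer sub-network with its own random weights."""
    W = rng.standard_normal((n_out, n_in)) * 0.1
    return lambda x: np.tanh(W @ x)

module_a = make_subnetwork(4, 3)     # works only on the first 4 features
module_b = make_subnetwork(4, 3)     # works only on the last 4 features

def modular_forward(x):
    # Each module processes its own inputs independently (no cross-talk),
    # and only their outputs are combined at the end.
    out_a = module_a(x[:4])
    out_b = module_b(x[4:])
    return np.concatenate([out_a, out_b]).mean()   # simple combiner

print(modular_forward(rng.standard_normal(8)))
```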

Modular neural networks (MNNs) are a fast-growing branch of artificial neural network research. One survey examines the biological, psychological, physical, and computational motivations for developing MNNs, and then explains and reviews the general steps of MNN design, such as task decomposition approaches, learning schemes, and multi-module decision-making procedures.
