An Introduction to Neural Networks
Neural networks are a branch of machine learning inspired by how the human brain works. A neural network is composed of a group of neurons connected to one another through links; together, these interconnected neurons constitute an artificial neural network.
Each neuron takes the outputs of the neurons in the previous layer as its inputs. Each of these inputs is multiplied by a weight, the partial results are summed, and the output is produced by passing that sum through an activation function. This output, in turn, serves as an input to the neurons of the next layer.
These neural networks are nothing more than massively parallel interconnected networks of simple elements with a hierarchical organisation, which try to interact with objects of the real world in the same way the biological nervous system does.
We have noted that a neural network is a group of interconnected neurons working together, but in order to understand how it works it is highly recommended to be clear about the concepts associated with each individual neuron.
Elements that compose a neuron
Inputs, x1,…,xn
Represent the inputs to the neural network.
Synaptic weights, w1,…,wn
Each input has a weight that is adjusted automatically as the neural network learns.
Summation function, Σ
Computes the sum of all the inputs weighted by their corresponding weights.
Activation function, F
Keeps the output values within a certain range, usually (0, 1) or (−1, 1).
Several different activation functions meet this objective; the most common one is the sigmoid function.
Output, Y
Represents the resulting value after passing through the neural network.
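The elements above can be sketched as a single neuron in a few lines of Python. This is only a minimal illustration: the input values, weights and the choice of the sigmoid as activation function are arbitrary assumptions, not part of any particular library.

```python
import math

def sigmoid(z):
    """Activation function F: squashes the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(x, w, bias=0.0):
    """Forward pass of one neuron: multiply each input x_i by its
    synaptic weight w_i, sum the partial results (Σ), then apply F."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + bias
    return sigmoid(z)

x = [0.5, 0.8, 0.2]   # inputs x1, x2, x3 (arbitrary example values)
w = [0.4, -0.6, 0.9]  # synaptic weights w1, w2, w3
y = neuron_output(x, w)
print(y)  # output Y, always between 0 and 1 thanks to the sigmoid
```

Whatever the inputs and weights, the sigmoid guarantees the output Y stays in (0, 1), which is exactly the role of the activation function described above.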
Classification of neural networks according to their topology
Neural networks are classified by topology, depending on their structural characteristics:
- Single-layer perceptron: The simplest type, with only one input layer and one output layer.
- Use: representing simple, linearly separable functions.
- Multilayer perceptron: Has hidden layers, which make the system capable of representing nonlinear functions. As the neural network learns, it can effectively discard the links it considers irrelevant.
- Use: representing complex nonlinear functions.
- Recurrent neural network (RNN): Does not have a structure with rigidly defined layers; instead, it allows arbitrary connections between neurons, including connections that form cycles.
- Use: text recognition and generation.
- Convolutional neural network (CNN): Similar to the multilayer perceptron; its main advantage is that it can perform a task with a significantly smaller number of connections, which makes training faster.
- Use: recognition and generation of images, video and audio.
- Radial basis function network (RBF): Computes its output as a function of the distance from the input to a point called the centre. The output is a linear combination of the radial activation functions used by the individual neurons.
- Use: time-series analysis, image processing, automatic speech recognition and medical diagnosis.
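As an illustration of the last topology, the sketch below shows a tiny radial basis function network in Python: each unit's activation depends only on the distance between the input and that unit's centre, and the network output is a linear combination of those activations. The Gaussian form of the radial function and the centres, weights and width values are assumptions chosen for the example.

```python
import math

def rbf_unit(x, center, width=1.0):
    """Gaussian radial basis unit: activation depends only on the
    distance between the input x and the unit's centre."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2.0 * width ** 2))

def rbf_network(x, centers, weights, width=1.0):
    """Network output: a linear combination of the radial activations
    of the individual units."""
    return sum(wi * rbf_unit(x, c, width)
               for wi, c in zip(weights, centers))

centers = [[0.0, 0.0], [1.0, 1.0]]  # hypothetical unit centres
weights = [0.7, 0.3]                # hypothetical output weights
print(rbf_network([0.5, 0.5], centers, weights))
```

Note that a unit's activation is maximal (exactly 1) when the input coincides with its centre and decays smoothly as the input moves away, which is what "output as a function of the distance to the centre" means in practice.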
Written by: Diego Calvo, part of Idiwork’s team