Learning is one of the fundamental building blocks of human intelligence, and it is no different in the field of Artificial Intelligence.

The dictionary definition of learning is “the acquisition of knowledge or skills through study, experience, or being taught.”

In Artificial Intelligence, learning conceptually improves the knowledge of an AI-based system through observation of its environment.

From a formal standpoint, “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”

For example:

Task T is to play chess

Performance measure P is the percentage of games won in a tournament

Experience E is the opportunity to play games against itself or a competitor

In this case, if P, i.e. the percentage of games won in a tournament, improves with experience E, the system is said to be learning.

There are broadly five learning processes/approaches used in this domain:

1) Error-Based Learning:

In error-based learning, the output signal from the neural network is compared to a target output, and an error signal is generated. Based on the error signal, a systematic set of corrective adjustments is made to the model to bring the output signal closer to the target output. This objective is achieved by minimizing a cost function based on the error signal. Once the cost function reaches a steady state, learning is complete.

Most supervised learning algorithms and certain reinforcement learning algorithms use the error-based learning approach. The Delta Learning Rule, one of the most widely used approaches in Machine Learning, is based on error-based learning.
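To make this concrete, here is a minimal sketch of the delta rule for a single linear neuron, assuming a squared-error cost and a fixed learning rate; the function and variable names are illustrative, not from any particular library.

```python
import numpy as np

def delta_rule_train(X, y, lr=0.01, epochs=100):
    """Train a single linear neuron with the delta rule.

    X: (n_samples, n_features) inputs, y: (n_samples,) targets.
    Each weight is nudged in the direction that reduces the squared
    error between the neuron's output and the target output.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, t in zip(X, y):
            out = np.dot(w, x_i) + b   # output signal
            error = t - out            # error signal (target - output)
            w += lr * error * x_i      # corrective adjustment to weights
            b += lr * error            # corrective adjustment to bias
    return w, b

# Usage: learn y = 2*x1 - 3*x2 from noisy samples
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.1, size=200)
w, b = delta_rule_train(X, y, lr=0.05, epochs=50)
print(w, b)  # approximately [2, -3] and ~0
```

Once the error (and hence the cost) stops decreasing between epochs, the weights have reached the steady state described above.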

2) Memory-Based Learning:

In memory-based learning, past experiences are stored as correctly classified input-output examples. The output can be a binary or multi-class classification label. When a test vector is applied, the algorithm responds by retrieving and analyzing the training data in a “local neighborhood” of the test vector. One of the main ingredients of memory-based learning is the criterion that determines this local neighborhood.

In the Nearest Neighbor Rule, this criterion is the minimum distance between the test vector and the stored training vectors. Euclidean distance is one of the approaches used to calculate this distance.

In the k-Nearest Neighbor classifier, we identify the k classified patterns nearest to our test vector for some integer k and assign the test vector to the class most represented among those k patterns. In summary, k-NN acts as an averaging device, or “you are what your neighbors are.”
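A minimal sketch of such a k-NN classifier, assuming Euclidean distance as the neighborhood criterion and a simple majority vote (all names here are illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    """Classify x_test by majority vote among its k nearest training points.

    Euclidean distance defines the "local neighborhood" of the test vector.
    """
    dists = np.linalg.norm(X_train - x_test, axis=1)  # distance to every stored example
    nearest = np.argsort(dists)[:k]                   # indices of the k closest patterns
    votes = Counter(y_train[nearest])                 # count class labels in the neighborhood
    return votes.most_common(1)[0][0]                 # most represented class wins

# Usage: two clusters of stored examples, one per class
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5]), k=3))  # -> 1
```

Note that nothing is learned ahead of time; all the work happens when the test vector arrives and its neighborhood is retrieved from memory.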

3) Hebbian Learning:

“Fire together, wire together” summarizes the Hebbian learning algorithm. Formally, when the two neurons on either side of a synapse are activated simultaneously, the strength of that synapse is selectively increased, and vice versa. By analogy, in an artificial system the weight between two connected neurons is increased when their activations are highly correlated. This simple but powerful learning approach is the basis for the associative learning mechanism in the human brain, and there is strong physiological evidence of Hebbian learning in the hippocampus, the brain region that acts as a catalyst for learning and memory in humans.
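A minimal sketch of the basic Hebbian update for an artificial layer, assuming a simple correlation-proportional rule with a fixed learning rate (the names are illustrative):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Basic Hebbian rule: strengthen each weight in proportion to the
    correlation between presynaptic activity x and postsynaptic activity y."""
    return w + lr * np.outer(y, x)

# Usage: weights between 3 input neurons and 2 output neurons
w = np.zeros((2, 3))
x = np.array([1.0, 0.0, 1.0])  # presynaptic activations
y = np.array([1.0, 0.0])       # postsynaptic activations
w = hebbian_update(w, x, y, lr=0.1)
print(w)  # only weights linking co-active neurons have grown
```

In this bare form the weights can grow without bound, which is why practical variants add normalization or decay terms.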

4) Competitive Learning:

In competitive learning, the output neurons compete among themselves to become active. It differs from Hebbian learning in that, at any point in time, only one output neuron is active, as opposed to Hebbian learning, where many neurons can be active simultaneously. In its simplest form, competitive learning has a single layer of output neurons, each connected to the input nodes, and may include feedback connections among the output neurons, i.e. lateral inhibition.

Principles of the competitive neuron architecture:

a. All neurons are identical except for their weights.

b. Each neuron’s weights are capped at a certain limit.

c. There is a mechanism by which the output neurons compete to become activated, i.e. winner-takes-all.

This winner-takes-all behavior is what makes competitive learning useful for discovering statistically salient features, i.e. feature detectors for a particular set of input patterns.
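A minimal sketch of one simple winner-takes-all variant, assuming the winner is the output neuron whose weight vector is closest to the input and only that neuron’s weights are updated (function names and initialization are illustrative):

```python
import numpy as np

def competitive_step(W, x, lr=0.05):
    """One step of simple competitive (winner-takes-all) learning.

    W: (n_output_neurons, n_inputs) weight matrix.
    The output neuron whose weight vector is closest to x wins the
    competition, and only the winner's weights move toward the input.
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # competition: closest weight vector wins
    W[winner] += lr * (x - W[winner])                  # only the winner is adjusted
    return winner

# Usage: 2 output neurons discover 2 clusters of input patterns
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),
                  rng.normal([3, 3], 0.1, (100, 2))])
W = data[[0, 100]].copy()  # start each neuron at a sample from a different cluster
for x in rng.permutation(data):
    competitive_step(W, x)
print(W)  # rows end up near the cluster centers [0, 0] and [3, 3]
```

Each output neuron ends up acting as a detector for one statistically salient group of input patterns, which is exactly the feature-discovery behavior described above.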

5) Boltzmann Learning:

This is a stochastic learning process with a recurrent structure, and it was one of the early algorithms used for optimization problems. Neurons take binary states, 1 or -1, with no self-feedback. They are partitioned into two functional groups: hidden and visible. The visible layer interfaces with the environment, while the hidden layer operates freely. There are two modes of operation: the clamped condition, where the visible neurons are clamped onto particular states, and the free-running condition, where both visible and hidden neurons are allowed to operate freely. Boltzmann learning updates each weight based on the difference between the correlation of neuron states in the clamped condition and in the free-running condition, with both correlations averaged over all possible states when the network is at equilibrium.
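A minimal sketch of that weight-update rule in isolation, assuming the clamped and free-running correlations have already been estimated at equilibrium (the function name and inputs are illustrative, not a full Boltzmann machine):

```python
def boltzmann_weight_update(rho_clamped, rho_free, lr=0.01):
    """Boltzmann learning rule for one synapse:
    delta_w = lr * (correlation in clamped phase - correlation in free phase).

    rho_clamped and rho_free are the average products of the two neurons'
    binary states (+1/-1), estimated at equilibrium in each phase.
    """
    return lr * (rho_clamped - rho_free)

# Usage: a stronger clamped correlation than free-running correlation
# means the weight between the two neurons should be increased
print(boltzmann_weight_update(rho_clamped=0.8, rho_free=0.2, lr=0.1))  # ~0.06
```

Estimating those correlations is the expensive part in practice, since it requires sampling the network until it reaches equilibrium in both phases.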

Each of these topics is a subject in itself, and I have tried to give an overview of the various learning paradigms used in present-day Artificial Intelligence systems.
