Blog: Machine Learning vs Artificial Intelligence
The term "machine learning" was coined in 1959 by Arthur Samuel.
Arthur Samuel (1959) : Described machine learning as the field of study that gives computers the ability to learn without being explicitly programmed.
Tom Mitchell (1998) : "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
Difference between AI and ML
Artificial intelligence (AI) is the broad discipline of building systems that perform tasks normally requiring human intelligence. Machine learning (ML) is a subset of AI in which a system learns patterns from data rather than following explicitly programmed rules: every ML system is an AI system, but not every AI system uses ML.
An analyst can categorise a specific technique according to the type of learning and the problem to be tackled. Here are some examples:
Types of Machine Learning:
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data is known as training data, and consists of a set of training examples.
Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, such as grouping or clustering of data points. The algorithms therefore learn from data that has not been labeled, classified or categorized.
Some famous ML algorithms:
Linear Regression : Linear regression is the most basic type of regression. Simple linear regression allows us to understand the relationships between two continuous variables.
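To make this concrete, here is a minimal sketch of simple linear regression in plain Python (no libraries; the function name is my own), fitted with the closed-form least-squares solution:

```python
# Simple linear regression via least squares:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).

def fit_simple_linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y divided by the variance of x gives the slope.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Example: points lying exactly on the line y = 2x + 1.
slope, intercept = fit_simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

With noisy real data the fitted line would not pass through every point, but the same two formulas apply.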
Logistic Regression : Logistic regression focuses on estimating the probability of an event occurring based on the previous data provided. It is used to model a binary dependent variable, that is, one where only two values, 0 and 1, represent the outcomes.
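A small sketch of the idea, assuming one input feature and plain gradient descent on the log-loss (function names are illustrative, not from any library):

```python
import math

# Logistic regression on one feature: sigmoid(w*x + b) estimates P(y = 1 | x).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the average log-loss with respect to w and b.
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: the event (y = 1) becomes likely as x grows.
xs = [0, 1, 2, 3, 4, 5]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
p_low, p_high = sigmoid(w * 0 + b), sigmoid(w * 5 + b)
```

The model outputs a probability between 0 and 1; thresholding it at 0.5 turns the estimate into a binary prediction.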
Decision Trees : A decision tree is a flow-chart-like tree structure that uses a branching method to illustrate every possible outcome of a decision. Each node within the tree represents a test on a specific variable, and each branch is the outcome of that test.
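The simplest possible decision tree has a single node. The sketch below (plain Python, my own naming) fits such a one-level tree, choosing the threshold that minimises the weighted Gini impurity of the two branches:

```python
# A one-level decision tree ("decision stump"): the root node tests
# x <= threshold, and each branch predicts the majority label on its side.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)  # fraction of class 1
    return 2 * p * (1 - p)

def fit_stump(xs, ys):
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        # Weighted impurity of the two branches after this split.
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            left_pred = round(sum(left) / len(left)) if left else 0
            right_pred = round(sum(right) / len(right)) if right else left_pred
            best = (score, t, left_pred, right_pred)
    _, t, lp, rp = best
    return lambda x: lp if x <= t else rp

predict = fit_stump([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
```

A full decision tree applies the same split search recursively within each branch until the leaves are pure or a depth limit is reached.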
Support Vector Machines (SVM) : SVMs are supervised learning models that analyse data for classification and regression analysis. They essentially filter data into categories: given a set of training examples, each marked as belonging to one of two categories, the algorithm builds a model that assigns new examples to one category or the other.
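As an illustration, here is a linear SVM trained with subgradient descent on the regularised hinge loss (a Pegasos-style sketch in plain Python, with my own names and toy data; practical SVMs use dedicated solvers and kernels):

```python
# Linear SVM on 2-D points. Labels are +1 / -1; the learned line
# w·x + b separates the two classes with a margin.

def fit_linear_svm(points, labels, lam=0.01, lr=0.01, epochs=1000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # Misclassified or inside the margin: push towards the point.
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # Correct side with room to spare: only shrink the weights.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return lambda p: 1 if w[0] * p[0] + w[1] * p[1] + b >= 0 else -1

# Two linearly separable clusters.
points = [(1, 1), (2, 1), (1, 2), (6, 6), (7, 6), (6, 7)]
labels = [-1, -1, -1, 1, 1, 1]
classify = fit_linear_svm(points, labels)
```

The hinge loss only penalises points that are misclassified or too close to the boundary, which is what makes the separating line depend on a few "support vectors" rather than on every point.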
Naïve Bayes: The Naïve Bayes classifier is based on Bayes’ theorem and classifies every value as independent of any other value. It allows us to predict a class/category, based on a given set of features, using probability.
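A minimal sketch for categorical features, assuming add-one (Laplace) smoothing and toy data of my own invention:

```python
import math
from collections import Counter

# Naive Bayes: pick the class c maximising
# log P(c) + sum_i log P(feature_i | c),
# treating every feature as independent given the class.

def fit_naive_bayes(rows, labels):
    classes = sorted(set(labels))
    n_features = len(rows[0])
    counts = {c: [Counter() for _ in range(n_features)] for c in classes}
    priors = Counter(labels)
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            counts[label][i][value] += 1

    def predict(row):
        best_class, best_score = None, float("-inf")
        for c in classes:
            score = math.log(priors[c] / len(labels))
            for i, value in enumerate(row):
                # Add-one smoothing so unseen values never get probability 0.
                num = counts[c][i][value] + 1
                den = priors[c] + len(set(r[i] for r in rows))
                score += math.log(num / den)
            if score > best_score:
                best_class, best_score = c, score
        return best_class

    return predict

# Toy spam filter: (word_1, word_2) -> class.
rows = [("free", "money"), ("free", "offer"), ("meeting", "notes"), ("project", "notes")]
labels = ["spam", "spam", "ham", "ham"]
predict = fit_naive_bayes(rows, labels)
```

The independence assumption is rarely true in practice, yet the classifier often works well anyway because only the ranking of class scores matters, not the exact probabilities.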
Random Forests : Random forests, or 'random decision forests', are an ensemble learning method that combines multiple algorithms to generate better results for classification, regression and other tasks. Each individual classifier is weak, but combined with others it can produce excellent results. The algorithm starts with a 'decision tree' (a tree-like graph or model of decisions), and an input is entered at the top. It then travels down the tree, with data being segmented into smaller and smaller sets based on specific variables.
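The ensemble idea can be sketched as follows. For brevity each "tree" here is a one-split stump, and all names and data are illustrative; real random forests grow full trees and also randomise the features considered at each split:

```python
import random

# A miniature random forest: each tree is trained on a bootstrap sample
# (drawn with replacement) and the forest predicts by majority vote.

def majority(labels):
    return max(set(labels), key=labels.count)

def fit_stump(xs, ys):
    # Pick the threshold whose two majority-vote branches make the
    # fewest mistakes on this (bootstrapped) training set.
    best = None
    for t in set(xs):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t] or [ys[0]]
        lp, rp = majority(left), majority(right)
        errors = sum((lp if x <= t else rp) != y for x, y in zip(xs, ys))
        if best is None or errors < best[0]:
            best = (errors, t, lp, rp)
    _, t, lp, rp = best
    return lambda x: lp if x <= t else rp

def fit_forest(xs, ys, n_trees=25, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        # Bootstrap: resample the training set with replacement.
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: majority([tree(x) for tree in trees])

predict = fit_forest([1, 2, 3, 4, 10, 11, 12, 13], [0, 0, 0, 0, 1, 1, 1, 1])
```

Because each tree sees a slightly different sample, their individual errors tend to differ, and the majority vote averages those errors away.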
Clustering/ K-Means: The K Means Clustering algorithm is a type of unsupervised learning, which is used to categorise unlabelled data, i.e. data without defined categories or groups. The algorithm works by finding groups within the data, with the number of groups represented by the variable K. It then works iteratively to assign each data point to one of K groups based on the features provided.
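The iterative assign-then-update loop can be sketched in plain Python (one-dimensional points and K = 2, names my own):

```python
import random

# K-means: repeatedly assign each point to its nearest centroid, then
# move each centroid to the mean of its assigned points, until the
# centroids stop changing.

def kmeans(points, k=2, iters=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:
            break  # converged: assignments will no longer change
        centroids = new_centroids
    return centroids

# Two obvious groups, around 1.0 and around 9.0.
points = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
centroids = kmeans(points)
```

Note that K must be chosen in advance, and the result can depend on the random initial centroids; in practice the algorithm is often run several times and the best clustering kept.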
Dimension Reduction : Dimensionality-reduction techniques remove duplicated or uninformative variables to produce a smaller subset of features that still captures most of the information in the original data.
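A very simple form of this, sketched in plain Python with my own naming: drop columns that are constant (carry no information) or exact duplicates of a column already kept. More powerful methods such as PCA instead build new combined features, but the goal of shrinking the feature set is the same.

```python
# Remove constant and duplicated columns from a row-oriented dataset,
# returning the reduced rows and the indices of the columns kept.

def reduce_dimensions(rows):
    columns = list(zip(*rows))  # transpose rows into columns
    keep = []
    seen = set()
    for i, col in enumerate(columns):
        if len(set(col)) == 1:   # constant column: no information
            continue
        if col in seen:          # duplicate of a column already kept
            continue
        seen.add(col)
        keep.append(i)
    return [[row[i] for i in keep] for row in rows], keep

# Column 1 duplicates column 0, and column 2 is constant.
data = [[1, 1, 7, 4],
        [2, 2, 7, 5],
        [3, 3, 7, 6]]
reduced, kept = reduce_dimensions(data)
```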