
Simplicity in the complexity


Demystifying neural networks for a non-technical audience

By Aileen

Working as a data scientist, I have heard two extreme interpretations of AI. At one extreme, AI is all about calling ‘.fit’ and ‘.predict’ with off-the-shelf tools, so we can do AI simply by importing libraries. At the other extreme, AI is a magical task that people from other domains can hardly understand.

While it might be convenient for a data scientist to wear a fancy-sounding title that suggests magical work, in reality there is nothing so fancy about it.

If we can explain the fundamental ideas behind AI, demystifying it in a way that anyone can intuitively understand, then we can help organizations adopt AI faster, just as they have adopted the other technologies that changed the way business operates over the last century.

This got me thinking: how can I explain AI to a non-technical audience? One question I was asked often concerned one of the hottest algorithms, the neural network, a method that has regained popularity in recent years and forms the foundation of deep learning.

So what is a neural network in a nutshell? Let’s break the term down into “neuron” and “network”.

To interpret “neuron”, we can think of it as a black box that maps A to B. More specifically, it is a mathematical function that takes input A and gives output B.

Say we want to predict the house price B. In the simplest case, we assume the price depends linearly on the size of the house A: the size multiplied by a constant C, plus some fixed legal fees L involved in purchasing the house. As a formula, we have the hypothesis B = C * A + L, which represents the ground truth for the ‘ingredients’ of a house price.

The most essential task here is to determine the constants C and L. Once we have such a formula in the black box, we can predict the price of any house given its size.
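
To make this concrete, here is a minimal sketch of such a “neuron” in Python. The values of C and L are made up purely for illustration; in a real problem we would not know them in advance.

```python
# A single "neuron" for the house-price example: B = C * A + L.
# C and L are invented numbers, used only to illustrate the idea.
def price_neuron(size_sqm, C=5000.0, L=2000.0):
    """Map a house size (input A) to a predicted price (output B)."""
    return C * size_sqm + L

print(price_neuron(80))  # 5000 * 80 + 2000 = 402000
```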

In nearly all cases, though, we do not have enough knowledge to uncover the true relationship between the input and the output, here the house size and the house price. So how do we go about finding C and L?

What we do have, however, is a bunch of data, where each data point represents a specific house size and the respective price. It is from this data that we hope to learn a model that is a good enough representation of the reality we hypothesized.

In other words, for each problem, there is a “right neuron” that can solve the problem.
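
This learning step is where the ‘.fit’ and ‘.predict’ commands mentioned earlier come in. Below is a toy sketch with scikit-learn that recovers C and L from a handful of invented data points; the numbers are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: house sizes (square metres) and their observed prices.
sizes = np.array([[50], [80], [120], [200]])
prices = np.array([252_000, 402_000, 602_000, 1_002_000])

model = LinearRegression()
model.fit(sizes, prices)                  # learn C (coef_) and L (intercept_) from data
print(model.coef_[0], model.intercept_)   # roughly 5000 and 2000 for this toy data
print(model.predict([[100]]))             # predicted price for a 100 sqm house
```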

This was an example of the simplest kind of “neuron”, where the relationship between the input and the output is linear. In practice, such neurons usually include an additional nonlinear transformation that helps capture more complicated relationships.
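
As a rough illustration, the sketch below adds a sigmoid “squashing” function on top of the linear part; the weight and bias here are arbitrary and not learned from anything.

```python
import math

def neuron(x, weight=1.0, bias=0.0):
    linear = weight * x + bias           # the linear part, as before
    return 1 / (1 + math.exp(-linear))   # nonlinear activation squashes the output

print(neuron(2.0))  # about 0.88
```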

The idea of finding the “right neuron” is in fact a longstanding approach to problem-solving in many domains of science.

Given a problem, we want to model the relationships between the input and output, in an attempt to fully characterize a system. We write down some formulas to abstract the problem in a solvable fashion.

Many practical problems, however, are too complicated to approach in such a way. Take the example of the house price. In reality, the price depends not just on the size of the house, but also on many other factors such as the location, the facilities, and the economy. It is thus infeasible for even the most knowledgeable expert to provide a fair formula that takes in all the relevant factors and calculates the corresponding house price.

If a single neuron cannot handle the complexity, will multiple neurons do the job?

That’s where the notion of “network” comes in.

A neural network consists of multiple neurons, each of which takes some input and gives some output. The output of one neuron can become the input of other neurons. With multiple neurons, a neural network can perform more complicated computations while keeping each individual neuron relatively “simple”.

Now the complexity is partly shifted from the structure within a neuron to the structure of the network. With such a network, the model is capable of representing more complicated phenomena.
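
To give a feel for this, here is a toy sketch of a tiny network in which the outputs of two simple neurons feed into a third. All the weights are arbitrary placeholders; a real network would learn them from data.

```python
import math

def neuron(inputs, weights, bias):
    # One simple neuron: a weighted sum of its inputs, then a nonlinear squashing.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))

def tiny_network(inputs):
    # Two "hidden" neurons each look at the raw inputs...
    h1 = neuron(inputs, weights=[0.5, -0.2], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=0.0)
    # ...and a final neuron combines their outputs into one result.
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.05)

print(tiny_network([1.0, 2.0]))
```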

In essence, a neural network paints a beautiful picture of simplicity in the complexity — “simple” neurons embedded in a complicated network.

