The Million-Dollar Neural Network, Part I: Understanding the Biological Basis


Learn How to Build a Neural Network & Enter to Win the $1.65M CMS AI Health Outcomes Challenge In This 3-Part Series

What if I told you that you could learn to use machine learning — more specifically, neural networks — to tackle some of the biggest problems in healthcare?

Some of you might be interested. Others, not so much.

But now, what if I told you that, in doing so, there was an opportunity for you to WIN ONE MILLION DOLLARS while contributing to the good of humanity?

Would that grab your attention?

If so, here’s your big chance.

Do I have your attention? Good.

The Centers for Medicare & Medicaid Services (CMS) recently announced plans to distribute up to $1.65 million USD to encourage the development of real-world applications for AI in healthcare.

The challenge, dubbed the Artificial Intelligence Health Outcomes Challenge, is just as it sounds, calling for submissions using artificial intelligence (or machine learning, more specifically) to predict health outcomes:

The Centers for Medicare & Medicaid Services’ (CMS’) Center for Medicare and Medicaid Innovation (Innovation Center) is launching the Artificial Intelligence (AI) Health Outcomes Challenge, in collaboration with the American Academy of Family Physicians and the Laura and John Arnold Foundation. The CMS AI Health Outcomes Challenge will distribute up to $1.65 million to encourage further progress in AI for health and health care and to accelerate development of real-world applications for this technology.

Participants will analyze large health care data sets and develop proposals, AI-driven models, and frameworks that accurately predict unplanned hospital and SNF admissions and adverse events.

This comes at an amazing time: it's now easier than ever to build a neural network, with Keras serving as the high-level API for TensorFlow 2.0 (don't worry, you'll know exactly what that means by the end of this series, if you don't already).

Now you may be saying, “well that’s great, but I’ve never coded before, let alone built a neural network.”

Well thankfully, machine learning isn’t really about being a great coder.

*GASP* “Blasphemy!” you may say.

“You stink. You smell like beef and cheese! You don’t smell like a Data Scientist.”

Now this is just my personal take, so take it with a grain of salt — but machine learning is more about understanding the intuition behind the algorithms and learning how machines learn than it is about coding.

Granted, you’ll need to have a firm grasp of Python*, of course. But what makes a data scientist stand out is an exceptional understanding of the underlying concepts rather than exceptional coding prowess.

*And SQL for data engineering, dealing with databases, etc., which is where the majority of your time will actually be spent. Unfortunately, this cool machine learning stuff is a pretty small percentage of what most data scientists actually do on a day-to-day basis.

From Forbes, “Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey Says”

That’s why, here in Parts 1 and 2 of this tutorial, we’ll spend a lot of time focusing on concepts (the biological basis in Part 1; the machine context in Part 2) before actually building our neural net in Part 3.

The Biological Basis for Neural Networks

Anatomy of the Neuron

Our brains are made up of neurons. Without getting unnecessarily complicated, neurons receive a host of inputs from other neurons via the dendrites (those little guys in the picture that look like hair).

When these inputs sum to a sufficient threshold in the soma (the cell body, or the “head” to the dendrite “hair”), an action potential is triggered, meaning that the signal or “message” is transmitted down the neuron’s axon (the long, skinny “tail”), beginning at the axon hillock (basically, the start of the axon) all the way down to the axon terminals (the weird finger-like things branching off there at the end).

The myelin sheath around the axon essentially serves as an insulator for the wire that is the axon, allowing the signal to be transmitted quickly and efficiently.

The axon terminals connect (kind of — they don’t actually touch) to the dendrites of the next neuron and transmit their signal across the synapse (the little gap between the axon terminals of one neuron and the dendrites of another) via chemical substances known as neurotransmitters (a whole other bucket of worms we don’t really need to get into for our purposes here).

The neuron before the synapse and the neuron after are referred to as pre- and post-synaptic, respectively.

Finally, for once, my undergrad neuroscience major is coming in handy! Image Source: Khan Academy

Super oversimplification here (sorry, neuroscientists), but let’s do it: say one neuron receives inputs from your eyes, transmits a signal down its axon, and delivers an output to the muscles that control your eyelids.

If the signal is sufficiently strong (i.e. lots of sunlight), an action potential is triggered, and the signal is sent down to the muscles that control your eyelids (i.e. close your damn eyes so you don’t go blind).

Action Potentials

Let’s go through an example here to give you an idea of how these neurons transmit signals to one another:

Let’s imagine each one of these little dendrites receives an input, to which we’ll assign a made-up value between -5 and +5. The threshold for the soma to transmit the message down its axon, in this imaginary case, is +10.

Say we get three inputs of +5, two of -3, and three of 0. (3 × 5) + (2 × -3) + (3 × 0) = +9, so no action potential would be triggered. Now say we get two inputs of +5, two of -3, and three of +2. (2 × 5) + (2 × -3) + (3 × 2) = +10, so we would get an action potential.

If this total is 10, 20, or 100, it doesn’t really matter. As long as it meets or exceeds the threshold, an action potential of the same intensity will be triggered. If it does NOT meet or exceed the threshold, nothing will happen — no minor or partial signal. This is what’s known as the “all or none” principle.
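If you’d rather see that logic in code, here’s a minimal Python sketch of the toy neuron above (the fires function, its inputs, and the +10 threshold are just the made-up values from this example, nothing physiological):

```python
# A toy neuron: sum the dendrite inputs and fire an all-or-none
# "action potential" only if the total meets the threshold.

def fires(inputs, threshold=10):
    """Return True if the summed inputs meet or exceed the threshold."""
    return sum(inputs) >= threshold

# Three inputs of +5, two of -3, three of 0 -> total +9: no action potential
print(fires([5, 5, 5, -3, -3, 0, 0, 0]))  # False

# Two inputs of +5, two of -3, three of +2 -> total +10: fires
print(fires([5, 5, -3, -3, 2, 2, 2]))     # True
```

Notice there’s no “partially fires” branch: the output is the same whether the total is +10 or +100, which is exactly the all-or-none principle in code.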

Let’s Summarize

Neurons receive inputs via the dendrites. These inputs are then summed in the soma (cell body). If the collective inputs meet or exceed the specific threshold for that neuron, an action potential is triggered (recall the “all or none” principle).

The signal then travels down the axon, starting at the axon hillock, all the way down to the axon terminals, where it transitions to a chemical signal — i.e. neurotransmitters crossing the synapse, from the presynaptic neuron to the postsynaptic.

OK, That’s Great. But Why the Hell Is This Important, and What’s Next?

If you’re going to build artificial neural networks, you need a firm grasp of the biological systems on which they’re based.

Understanding action potentials in biological neurons will help you understand why things like activation functions are so important for artificial ones.
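As a tiny preview of that analogy (we’ll unpack it properly in Part 2), here’s a hypothetical artificial neuron sketched in Python: a weighted sum of inputs passed through a step activation function, which plays the same role as the biological all-or-none threshold. The weights, bias, and inputs here are made up purely for illustration:

```python
# A hypothetical artificial neuron: a weighted sum of inputs plus a bias,
# passed through a step "activation function" -- the artificial cousin
# of the biological all-or-none threshold.

def step(x):
    """Fire (1) if the summed signal is non-negative, else stay silent (0)."""
    return 1 if x >= 0 else 0

def artificial_neuron(inputs, weights, bias):
    # Each weight is like the "strength" of one dendrite's connection.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(total)

# Two made-up inputs with made-up weights: 1.0*0.8 + 0.5*(-0.4) - 0.3 = 0.3
print(artificial_neuron([1.0, 0.5], weights=[0.8, -0.4], bias=-0.3))  # 1
```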

But now that you understand the basics of the biological neuron, we can finally ask ourselves, “how does this translate into a machine context?”

For that answer, stay tuned for Part 2, where we’ll look at how artificial neural networks learn, diving deep into backpropagation and gradient descent.

Then in Part 3, we’ll synthesize all these lessons and use them to build ourselves our first neural network using TensorFlow!

