
## Blog: Causality

Why do we cherish causalities more than other “coincidental” correlations?

My best explanation for this revolves around time and, in particular, the time-invariance of causality as opposed to the fickleness of other correlations. Think of the permanence of something like Newton's 3rd Law (action and reaction) compared to the financial disclaimer, "past performance is not an indication of future results".

Time invariance is hallowed because it is what makes causality useful to intelligence. Useful, because we evolve along the arrow of time and intelligence is desperately trying to find tools to adapt, survive and prosper. Causalities are the ultimate tools. They are tools that intelligence can use in its processes (simulations) to deterministically understand what happens next in the game of life.

With causality in mind, I want to segue into the topic of neural networks. In NNs, back-propagation is the process by which neighbouring neurons give feedback to a given neuron about what it could have told them to help them do their job (whatever that may be) better. What you see here is that the neighbouring neurons CAUSE each other to have different outcomes. Now, bearing in mind that we usually use NNs to represent reality and predict the future, that essentially means that NNs are eventually trying to build causality graphs to emulate reality.
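The feedback loop described above can be sketched in miniature. Below is a minimal, self-contained illustration (not the post's actual code) of a single neuron whose weights are "caused" to shift by the error signal fed back to it; the true rule it learns, y = 2·x1 + 3·x2, is an assumption chosen for the example.

```python
# Minimal sketch of back-propagation-style feedback for one neuron.
# Hypothetical target rule: y = 2*x1 + 3*x2.

def forward(w, x):
    """Weighted sum: the neuron's prediction."""
    return sum(wi * xi for wi, xi in zip(w, x))

def backprop_step(w, x, target, lr=0.01):
    """Feedback step: the prediction error 'causes' each weight to move."""
    error = forward(w, x) - target
    # Gradient of squared error w.r.t. w_i is 2 * error * x_i
    return [wi - lr * 2 * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
data = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5), ([2, 1], 7)]
for _ in range(500):
    for x, y in data:
        w = backprop_step(w, x, y)

print(round(w[0], 2), round(w[1], 2))  # weights converge towards 2 and 3
```

Because the data are generated by a stable (time-invariant) rule, the feedback drives the weights to encode that rule, which is the sense in which the network is building a small causal model.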

I was looking at this video (https://www.youtube.com/watch?v=A9zLKmt2nHo) of actual biological neurons. What are they doing there? Why are they building these connections? What are they expecting their neighbours to tell them? Well, they are simply encoding all the different causalities they can find. They are representing reality in abstract constructs. Once these constructs are mature, intelligence will be able to run simulations (dream) through possibilities which retain enough consistency to be useful in real life.

I proceeded to design and code the building blocks of an automated causality hypothesis generator, which is my very specific take on a neural network. I had to be able to encode the passage of time into the system so that it becomes sensitive to the time-variance of causality hypotheses and "kills" those which appear to be too variant in time.
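One way such a "kill" rule could work, sketched here purely as an assumption since the post does not show its implementation: score each hypothesis's predictive hit-rate per time window, and discard hypotheses whose hit-rate varies too much across windows. The hypothesis names and numbers below are illustrative.

```python
# Hedged sketch of time-variance filtering (not the author's actual code):
# a hypothesis survives only if its per-window accuracy is stable over time.
from statistics import pstdev

def is_time_invariant(hit_rates, max_std=0.1):
    """Keep a hypothesis only if its accuracy barely varies across windows."""
    return pstdev(hit_rates) <= max_std

# Hypothetical per-window accuracies for three candidate hypotheses
hypotheses = {
    "newton_3rd_law":   [0.98, 0.97, 0.99, 0.98],  # stable  -> keep
    "past_performance": [0.90, 0.40, 0.70, 0.20],  # fickle  -> kill
    "seasonal_fluke":   [0.80, 0.80, 0.10, 0.80],  # unstable -> kill
}
survivors = [name for name, rates in hypotheses.items()
             if is_time_invariant(rates)]
print(survivors)  # ['newton_3rd_law']
```

The threshold `max_std` is the knob that encodes how much fickleness the system tolerates before declaring a correlation non-causal.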

This is how the topology of the neurons has evolved over the period of training/predicting (in my sequential setup, everything is out-of-sample):

You will see that there are three layers in this:

• The left-most “line of states” is what you could call the “senses” of the system
• The middle blob of neurons are where the hypotheses will be generated and tested
• The right-most single neuron is the master, which receives feedback from the external environment and then feeds that information back to its connections so they can ascertain whether their hypotheses are holding firm

The connections are of three colours:

• Red — connections to the senses (left-most layer)
• Blue — interconnections with time-wise memory amongst neurons
• Green — connections from the master neuron feeding back its raw output to other neurons
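To make the description concrete, here is a minimal sketch of that topology as a graph with the three edge colours as edge types. The neuron names and counts are illustrative assumptions, not the actual network:

```python
# Illustrative three-layer topology with colour-typed edges (assumed names).
senses = ["s0", "s1", "s2"]   # left-most "line of states": the senses
hidden = ["h0", "h1"]         # middle blob: hypothesis neurons
master = "m"                  # right-most master neuron

edges = []
for s in senses:
    for h in hidden:
        edges.append((s, h, "red"))      # red: connections to the senses
edges.append(("h0", "h1", "blue"))       # blue: time-wise memory links
edges.append(("h1", "h0", "blue"))       # reverse link too: this is not a DAG
for h in hidden:
    edges.append((master, h, "green"))   # green: master feeds raw output back

by_colour = {}
for src, dst, colour in edges:
    by_colour.setdefault(colour, []).append((src, dst))
print({c: len(e) for c, e in by_colour.items()})  # {'red': 6, 'blue': 2, 'green': 2}
```

Note the pair of opposite blue edges: the representation deliberately permits cycles, matching the point below about not enforcing a directed acyclic graph.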

Note that this is not going to be a directed acyclic graph (DAG) as I am not enforcing strict connection directionality at all here. I think that the system should be free to find causalities in any way it can. Ultimately the system is exploring for time-invariance here which should hopefully make its findings stand the test of time :)

Hope you liked it!

Source: Artificial Intelligence on Medium
