Blog: Algorithms and Superintelligence


“The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.” – Henry A. Kissinger

Do we need to find a guiding philosophy for technology?

I think we do, but as I began my investigation into Artificial Intelligence, I first needed to discover how a computer ‘learns’ and how that affects us, because a computer can only ‘learn’ from the data we give it.

Beyond any marketing hype, current development in AI is focused on ‘machine learning’ and computer algorithms. An algorithm is an instruction (or set of instructions), and it works in a similar way to how you learnt to make a paper plane as a child: remember the fun you had with that! You started with a sheet of paper (the input, or data) and folded it in a specific way (the instructions). The plane you created was the output (or response), and by flying it you achieved an outcome (or application).

It takes time for a computer to learn, and a child making a plane can reflect the processes of learning that a computer undergoes. Supervised learning can be compared to showing a child how to make their paper plane, working together to create a masterpiece of flight! Of course, a child will want to fold their own plane: you offer feedback and encouragement (reinforcement learning) across trial runs to find the best way to make it fly further. Ultimately, through unsupervised learning, your child will continue to make paper planes and, through this experience, will spot patterns and discover new ways to improve flight. They may even predict how far a plane will fly by understanding the features that statistically explain the length of its flight, such as the type or weight of paper and the number and type of folds. Your intuition, experience and feelings create the data, and these self-imposed rules have logical consequences: in other words, the way we obtain our data is just as important as the data itself!
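The paper-plane analogy maps neatly onto a tiny supervised-learning sketch. Everything below is invented for illustration: the features (paper weight, number of folds), the measured distances, and the use of a 1-nearest-neighbour predictor as a stand-in for a real model.

```python
# Supervised learning sketch: predict how far a paper plane will fly
# from two features. All data is invented for illustration.

# Training data: (paper weight in gsm, number of folds) -> distance in metres
planes = [
    ((80, 6), 5.0),
    ((80, 8), 6.5),
    ((120, 6), 3.5),
    ((120, 8), 4.5),
]

def predict(features):
    """Return the flight distance of the most similar plane seen so far."""
    def dissimilarity(a, b):
        # Squared difference between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(planes, key=lambda row: dissimilarity(row[0], features))
    return nearest[1]

# A light plane with many folds resembles the (80, 8) plane, so we
# predict it flies about as far.
print(predict((85, 8)))  # -> 6.5
```

The model ‘learns’ nothing beyond the examples we hand it, which is exactly the point of the paragraph above: the way we obtained the data determines what the algorithm can say.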

Of course, unsupervised machine learning, with no human input, is more complicated: it requires more raw data, and you do not manually choose the features that shape the output. If we can understand why a model makes certain decisions, deep learning can be used to analyse far more data, far more quickly: so much data that it can find patterns humans do not see. Judea Pearl, in his The Book of Why, suggests that data alone does not capture cause and effect as we do. So, for us to trust AI, we need to understand the processes behind it and how AI will change society for the better.
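To see what ‘finding patterns with no labels’ can mean in practice, here is a minimal unsupervised sketch: the planes below have no measured distances at all, yet one assignment step of a simple 2-means clustering still separates them into groups. The data and the chosen cluster centres are invented for illustration.

```python
# Unsupervised learning sketch: group planes by their features alone,
# with no labels or flight distances given. Data is invented.

# Feature vectors: (paper weight in gsm, number of folds)
planes = [(80, 6), (82, 8), (78, 7), (120, 6), (118, 8), (122, 7)]

def cluster(points, centres):
    """Assign each point to its nearest centre (one k-means step)."""
    groups = {c: [] for c in centres}
    for p in points:
        nearest = min(
            centres,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)),
        )
        groups[nearest].append(p)
    return groups

groups = cluster(planes, centres=[(80, 7), (120, 7)])
# The light-paper and heavy-paper planes fall into separate groups,
# even though we never told the algorithm that paper weight matters.
```

A real clustering algorithm would also update the centres and repeat until they stop moving; this single step is just the pattern-spotting idea in miniature.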

So what do the experts think? I recently interviewed Professor Marcel van Gerven from the Donders Institute at Radboud University in The Netherlands. His research is focused on brain-inspired computing, and he was keen to explain how deep learning, and the reconstruction of the images we ‘see’, will give insights into our brains and the processes of thinking: how we encode images, not our beliefs, desires or intentions; a reconstruction of the ongoing flow of thought.

Together with a consortium of researchers, he is working on restoring ‘vision’ in the brain by using a cortical implant that generates patterns (flashes of light called phosphenes), such as the flashes you may see when you close your eyes and carefully apply light pressure. This, and linked research, is at an early stage, but could in the future offer benefits to society in general. Marcel also shared his thoughts on algorithmic biases and the use of algorithmic audits, and discussed the differences between Weak AI (focusing on a narrow field, or problem) and Strong AI (also referred to as Artificial General Intelligence), where a machine can apply its intelligence to any problem and demonstrate consciousness. But we are not there yet! In my next interview, with Professor Tom Heskes, we will hear more about explainability and how learning algorithms can be de-biased.

My thanks to the inspirational Mark Farmer of the University of Worcester for his wonderful explanation of outputs and outcomes using paper planes and the magic of flight!

Source: Artificial Intelligence on Medium

