The Future of AI

What will AI look like in 2040?

It’s difficult to make predictions about the future of AI because there are so many unknown variables. Amid all the current fear of an automation takeover, I would like to offer an optimistic vision of the future of AI, backed by my knowledge of various current research efforts.

My predictions can be broken up into three parts:

  1. AI hardware
  2. AI software
  3. AI’s effects on society

Basically every conventional computer and smart device available today is based on some variation of the von Neumann architecture. The von Neumann architecture was first proposed in 1945 by the original gangster, or mathematician, John von Neumann. It divides the hardware of a computer into three main components: memory, a central processing unit (or CPU), and input/output devices (the input could be a mouse or keyboard, while the output could be a monitor).

The CPU is in charge of fetching instructions and data from memory, executing those instructions, then storing the resulting values back in memory. The great thing about the CPU is that it’s flexible. We can use it for any kind of software, from word processing to automatically blocking all videos of Vitalik rapping.
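The fetch-execute cycle described above can be sketched as a toy interpreter. This is purely illustrative, not how any real CPU is implemented, but it captures the defining von Neumann trait: instructions and data live in the same memory, and the CPU loops over fetch, execute, and store.

```python
# A toy sketch of the von Neumann fetch-execute cycle (illustrative only):
# one memory holds both instructions and data, and the "CPU" loop fetches
# an instruction, executes it, and stores the result back in that memory.

def run(memory):
    pc = 0  # program counter: where the next instruction lives in memory
    while True:
        op, a, b, dest = memory[pc]      # fetch an instruction from memory
        if op == "HALT":
            return memory
        if op == "ADD":                  # execute: read operands from memory...
            memory[dest] = memory[a] + memory[b]
        elif op == "MUL":
            memory[dest] = memory[a] * memory[b]
        pc += 1                          # ...then move on to the next instruction

# Program and data share one memory: instructions in slots 0-2, data in 3-5.
memory = [
    ("ADD", 3, 4, 5),    # memory[5] = memory[3] + memory[4]
    ("MUL", 5, 3, 5),    # memory[5] = memory[5] * memory[3]
    ("HALT", 0, 0, 0),
    2, 5, 0,             # data values
]
print(run(memory)[5])    # (2 + 5) * 2 = 14
```

Notice that every single step goes through `memory` — that constant round trip is exactly where the bottleneck discussed next comes from.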

But despite this flexibility, a CPU executes operations sequentially and has to access memory every time it does, which limits data throughput. This is called the von Neumann bottleneck.

To gain higher data throughput, many computers use a graphics processing unit, or GPU. Modern GPUs usually have thousands of arithmetic logic units in a single processor, which means they can execute thousands of operations in parallel. AI algorithms like neural networks demand massive parallelism to execute millions of operations, so the GPU has been the primary driver of their recent successes. There are also the recently introduced TPUs that Google designed. They aren’t general-purpose; instead, they’re designed to do one task really well: parallel matrix operations. Because of that, they’re able to reduce the von Neumann bottleneck even more than GPUs.
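To see why matrix operations are the target workload, consider that a neural-network layer is essentially one big matrix multiplication. The sketch below (using NumPy as a stand-in; it doesn't emulate actual GPU hardware) contrasts the sequential one-multiply-at-a-time style of a single ALU with a single vectorized matrix operation that hardware can parallelize.

```python
import numpy as np

# A neural-network layer is essentially outputs = inputs @ weights.
# A sequential loop computes one multiply-add at a time, the way a single
# ALU would; a vectorized matrix call is the kind of operation GPUs and
# TPUs execute in parallel. Same result, very different execution model.

def matmul_sequential(x, w):
    """One scalar multiply-add at a time."""
    out = np.zeros((x.shape[0], w.shape[1]))
    for i in range(x.shape[0]):
        for j in range(w.shape[1]):
            for k in range(x.shape[1]):
                out[i, j] += x[i, k] * w[k, j]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 128))   # a batch of 64 input vectors
w = rng.standard_normal((128, 32))   # layer weights

slow = matmul_sequential(x, w)       # ~260,000 scalar operations, in order
fast = x @ w                         # one matrix operation, parallelizable
print(np.allclose(slow, fast))       # True
```

A real network runs millions of these multiply-adds per inference, which is why throwing thousands of parallel ALUs at the problem pays off so dramatically.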

Notice the trend here?

We are building computers that are increasingly parallel and require less energy. In fact, there already exists another processor that uses far less energy and has a throughput no TPU can match… and it exists in our heads.

It’s the human brain. The field of neuromorphic computing, currently being researched by several tech companies (like Intel), aims to emulate this natural processing architecture of ours. Neuromorphic chips are inspired by biological neural networks, where logic and memory are closely integrated in the same basic devices: the neurons and their connections.

The problem with current system architectures is that data and processing are physically separated and continuous signals are represented using binary states.

The solution is to use components called memristors. Memristors are like normal resistors: they regulate the flow of electricity, but they can also remember a particular charge. This allows them to act like synapses in the brain. They can process and store multiple, or even continuous, signal states orders of magnitude faster and with much less power than current systems.
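The memristor idea can be made concrete with a toy numerical model. This is a simplified linear-drift sketch of my own, not the behavior of any specific real device: the device's resistance depends on the total charge that has passed through it, so processing (conducting current) and memory (the internal state) happen in the same component.

```python
# Toy memristor model (illustrative only, not a real device's physics):
# unlike a fixed resistor, its resistance depends on the charge that has
# already flowed through it, so the device "remembers" its history,
# much like a synapse strengthening with use.

R_ON, R_OFF = 100.0, 16000.0   # fully-on / fully-off resistance (ohms)

class Memristor:
    def __init__(self):
        self.w = 0.0           # internal state in [0, 1]: the "memory"

    def resistance(self):
        # Resistance interpolates between R_OFF (w=0) and R_ON (w=1).
        return R_OFF - (R_OFF - R_ON) * self.w

    def apply_voltage(self, v, dt):
        i = v / self.resistance()            # current through the device
        self.w += 100.0 * i * dt             # state drifts with charge (i * dt)
        self.w = min(max(self.w, 0.0), 1.0)  # clamp to physical bounds
        return i

m = Memristor()
print(round(m.resistance()))       # 16000 -> starts fully "off"
for _ in range(1000):
    m.apply_voltage(1.0, dt=1.0)   # keep pushing charge through
print(round(m.resistance()))       # 100 -> the device remembered the charge
```

In a real device, reversing the voltage would drift the state back the other way, which is what lets memristor arrays be written as well as read.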

There are many different ideas on what to use as a memristor, from graphene nanostructures to optoelectronics, but we may eventually use single-celled organisms as living memristors.

A few years ago, a team of bioengineers from Stanford announced that they had created a fully functional computer from living material. They used proteins to store bits and molecular motors to move protein filaments along artificial paths, eventually solving a simple mathematical problem based on the optimal path. Our computers will eventually be made up of fully organic living material that’s been engineered to replicate our biological neural networks.

We will use DNA to store data. A single gram of DNA is capable of storing 215 petabytes of data, meaning every bit of data ever recorded by humans could fit in a container the size and weight of one room. Scientists have been storing digital data in DNA since 2012. SAIC, a Fortune 500 company, is already developing neural networks from living nerve-cell tissue.
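To get a feel for that density figure, here's a quick back-of-envelope calculation. The 215 petabytes per gram comes from the text above; the hard-drive comparison is just illustrative arithmetic.

```python
# Back-of-envelope: how many 1-terabyte hard drives does one gram of DNA
# replace, at the 215 PB/gram density figure quoted above?

PB = 10**15                      # bytes in a petabyte
TB = 10**12                      # bytes in a terabyte

dna_bytes_per_gram = 215 * PB
drives_per_gram = dna_bytes_per_gram / TB
print(int(drives_per_gram))      # 215000 one-terabyte drives per gram
```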

The future of computing is biological. Eventually, we’ll build a fully integrated living biological computer that functions independently of its creators and replicates on its own. The primary way we’ll use this technology is by incorporating it into a mind-body machine interface (very similar to the non-invasive device I designed in a previous video, called the Link).

This will replace all of our other devices, including smartphones. It will be able to read from and write to the brain, which will allow us to know everything, be everywhere, and do practically anything we’d like.

But before I shock you by explaining the mechanism it will use to communicate with the Internet, I need to give you a short primer on quantum mechanics.

Join me in the video below where we’ll go deeper:

Source: Artificial Intelligence on Medium
