Blog: Cognitive Biases in a Reverse-Engineered Brain


By Kevin Ann

Technological trends indicate that we may have the capacity to reverse-engineer the human brain sometime before 2030. The informational challenges are extraordinary for the modeling, simulation, or ‘simple’ blind one-to-one emulation of 100 billion neurons with an average of 1,000 synapses each, along with supporting structures such as glial cells. Whether this is possible even in theory (I believe it is) or imminent within the next couple of decades (perhaps; I am optimistic) is a topic for another post, or thousands of posts, by people far more qualified in neuroscience and computer science than I am.
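
A quick back-of-envelope count makes the scale concrete. This is my own arithmetic on the figures above, and the bytes-per-synapse number is purely an illustrative assumption:

```python
# Rough scale of a one-to-one emulation, using the figures cited above.
neurons = 100e9             # 10^11 neurons
synapses_per_neuron = 1e3   # ~1,000 synapses each on average

total_synapses = neurons * synapses_per_neuron  # 10^14 synapses

# Assume ~4 bytes per synapse just to hold a single static weight
# (a deliberately generous simplification; real synaptic state is richer).
bytes_per_synapse = 4
petabytes = total_synapses * bytes_per_synapse / 1e15

print(f"{total_synapses:.0e} synapses")                   # 1e+14
print(f"~{petabytes:.1f} PB for one static weight each")  # ~0.4 PB
```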

We may try to better understand the human brain for the following reasons:

  • The widely acknowledged, mainstream aims of science: knowledge as an end in itself, a tool to treat mental illness, or perhaps a means to enhance or augment brain functioning.
  • The radical transhumanist aims of uploading the mind or creating a general artificial intelligence.
  • Thought experiments in philosophy.

In all these cases, cognitive biases resulting from our evolutionary history must be taken into account. Cognitive biases pull our thinking and behavior away from the way we’d act with perfect information and perfect rationality, because they evolved to maximize utility functions related to survival.

As is widely known, evolution is not a maximal and absolute process, but rather an optimal and relative one. In other words, the absolutely best behaviors or physical structures of an animal may not be the ones that ultimately win out, but rather the best ones relative to others in some time-dependent context. For example, let’s consider the evolutionary costs of making a Type 1 Error (false positive) versus a Type 2 Error (false negative). Imagine you are in the wilderness and you think you hear or see something.

  • Type 1 Error (False Positive): If you make the cognitive mistake of thinking there is danger and run away when there is in fact no predator, you pay almost no penalty for the mistake: a little wasted energy at most.
  • Type 2 Error (False Negative): If you make the cognitive mistake of thinking nothing is there and do NOT run away when there is in fact a predator, your genes are removed from the gene pool at that point (or at least they stop propagating further if you already have children).
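
To see why selection tilts so heavily toward false positives, consider a minimal expected-cost sketch in Python; the probability and cost figures are ones I have made up purely for illustration:

```python
# Expected cost of 'flee' vs. 'stay' after an ambiguous rustle.
# All numbers are illustrative, not empirical.

p_predator = 0.05            # chance the sound really is a predator
cost_false_positive = 1.0    # fleeing needlessly: a little wasted energy
cost_false_negative = 1000.0 # staying put when a predator is there: death

# Each policy pays its cost only when it turns out to be wrong.
expected_cost_flee = (1 - p_predator) * cost_false_positive
expected_cost_stay = p_predator * cost_false_negative

print(f"flee: expected cost {expected_cost_flee:.2f}")  # 0.95
print(f"stay: expected cost {expected_cost_stay:.2f}")  # 50.00
```

Even at only a 5% chance of a real predator, staying put is roughly fifty times more costly in expectation, so the jumpy, false-positive-prone brain wins out.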

Repeat this process over millions of generations and you end up with the cognitive biases we still operate with today. Our brains and thought processes, with these cognitive biases, are of course not ideal, but this end point was the best relative to the alternatives at optimizing the utility functions unique to our evolutionary history.

The following questions are important to address, whether we want to understand and simulate a human mind from higher-order principles or ‘simply’ attempt a tech-enabled one-to-one emulation:

  • How do we account for cognitive biases?
    In particular, how do we account for the relative weighting of the human brain’s tendency to commit Type 1 errors versus Type 2 errors? The utility of these errors is of course very different for a simulated or emulated brain that does not face the evolutionary environment in which the biological brain developed (see the sketch after this list).
  • Would an uploaded human brain still be considered human(-like) if cognitive biases were isolated and removed?
    What makes us human isn’t just our physical form and our tendencies, for example, to be compassionate or violent, but also our tendencies to make mistakes and to think under various cognitive biases. Remove these cognitive biases, and do you really have the same thing?
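
One hypothetical way to frame the first question’s weighting problem computationally (my own sketch, not a claim about how an actual emulation would work) is to make the Type 1/Type 2 trade-off an explicit cost ratio and derive the decision threshold from it, so the ‘bias’ becomes a parameter that can be kept, retuned, or zeroed out:

```python
def flee_threshold(cost_fp: float, cost_fn: float) -> float:
    """Danger probability above which an expected-cost minimizer flees.

    Flee when p * cost_fn > (1 - p) * cost_fp, which rearranges to
    p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# Ancestral weighting: false negatives are catastrophic, so the agent
# bolts at the faintest hint of danger.
print(flee_threshold(cost_fp=1, cost_fn=1000))  # ~0.001

# A 'de-biased' emulation facing no predators might weight both errors
# equally, pushing the threshold to a neutral 0.5.
print(flee_threshold(cost_fp=1, cost_fn=1))     # 0.5
```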

Obviously, there are no clear answers, perhaps even in principle, since these questions may be ill-defined or may have no answers at all.

It appears that requiring a simulated or emulated human brain to keep all its negative evolutionary baggage is akin to trying to prevent progress itself. In general, the ‘hard’ scientific, engineering, and technological issues (even in the wildest transhumanist imaginations) seem much easier to grasp and resolve than the ‘soft’ philosophical, legal, and moral issues we will be forced to confront. Every question seems to lead to even more questions instead of answers.

Source: Artificial Intelligence on Medium
