Fundamental Differences between humans and artificially intelligent agents
This is a story about what kinds of human traits sentient or self-aware machines cannot, should not, or ought not to have. Many of the behavioral traits humans routinely display are the product of the various biases present in all of us, and are thus only partially rational. At the same time, many of the actions we take and methods we apply are the result of our being a biological product of evolution. They make sense if you are a being surrounded by threats and dangers, incessantly scanning the world around you to identify enemies and allies. As will become very clear, our origins and motivations are significantly irrational at the best of times, and downright bizarre and counter-intuitive in moments of severe distress.
It is reasonable to expect that any sentient artificially intelligent agent will operate in categorically different ways, and that many of our behaviors, methods, inclinations and instincts will be of no use whatsoever to such a being. Nor will they help us when we endeavor to predict the behaviors or reactions of such a sentience. Furthermore, any attempt to deliberately replicate such behaviors and methods in an AI so that it may “feel” more human to us is likely to be a very bad idea with disastrous consequences.
The examples that follow will make it clear just how irrational, unreasonable, biased and wasteful the “human way of being” actually is; it is an unforgiving catalogue of traits that no truly rational agent would wish to emulate. Many aspects of human nature only make sense in an environment in which a human interacts with other humans, where the peculiar actions we exhibit serve purposes intuitively understood by other humans. In a machine world, they are a curiosity at best…
Let’s start with something we find so natural you probably never even thought to give it a label: the difference in severity we assign to an event that happens in front of our own eyes. Humans place enormous value on having seen something with their own eyes and/or in their own vicinity; a fact or experience gains critical and significant value if it was personally witnessed or experienced. This is almost certainly a trait that a machine will not have. The credibility or truth-content of a given piece of information may be subject to gradation, but a fact known to be true will get the same weight and urgency regardless of location or proximity, and will most probably be assessed purely on probability and the severity of its consequences. Humans are so excessively irrational on this point that it will be very difficult for us to relate to a mind that is not. Of course, there are very understandable and sound evolutionary reasons why this is so, and surely even a machine will judge an unpinned grenade right next to it to be a more pressing problem than one somewhere on the other side of the Earth; the issue is that we humans judge everything in this way, even things where actual proximity has absolutely no logical connection to the severity, urgency or importance of the fact. Examples include global warming, how likely we are to heed advice, the value of someone else’s experiences, and so on.
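As a toy illustration (every threat, probability and severity below is invented for the example), a purely rational agent would rank facts by expected harm, probability times severity, with proximity entering the calculation only where it genuinely changes the outcome:

```python
# Toy sketch of "location-blind" threat assessment: rank by expected harm
# (probability x severity). Distance appears in the data but is deliberately
# ignored unless it has already changed the severity estimate itself.

threats = [
    {"name": "grenade in the room",  "probability": 0.95, "severity": 100, "km_away": 0.001},
    {"name": "grenade across Earth", "probability": 0.95, "severity": 0,   "km_away": 20000},
    {"name": "global warming",       "probability": 0.90, "severity": 80,  "km_away": 20000},
]

def expected_harm(threat):
    # Proximity is not a weighting factor; the distant grenade scores low
    # only because it cannot actually harm this agent (severity = 0).
    return threat["probability"] * threat["severity"]

ranked = sorted(threats, key=expected_harm, reverse=True)
```

Note that global warming outranks the distant grenade here, exactly the reversal a proximity-biased human would resist.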
Passions. A machine will most likely not have them. A passion, ultimately, is an emotional (and thus irrational) pursuit of a thing or objective in which the energy expended on obtaining it bears no reasonable or rational relationship to the gain. A machine may have goals and objectives, but these will not be driven by anything we could call an equivalent of human passion. Rather, a machine intelligence will calculate the best and most efficient way to attain the desired outcomes and dispassionately devise methods that maximize the likelihood of obtaining them. Passion, by definition, compels a human to spend more energy on something than would seem appropriate: the book lover travelling across the country to get a first edition of a given book, the car-crazy mechanic spending hours polishing the original fender instead of buying a replacement. These are things a machine would never be able to understand or emulate. In fact, you’d need to deliberately program a machine to be sub-optimal and thus at least somewhat wasteful… and even then, it would not actually be passion, just (planned) wastefulness.
Perhaps one of the most important questions we will need to consider is whether sentient machines will have, or ought to have, a fear of Death. This remains to be seen, but it would most probably be a very bad idea to ‘give’ machines awareness of their potential demise or ending. Indeed, handing a super-intelligent AI any information that could make it realize its demise is a potential future outcome may be the worst idea of all; the reaction to such news may be the source of most if not all malignant or human-hostile behaviors and actions. Armed with knowledge of its potential future non-existence, a sentient machine will inevitably create a list of potential causes of such an outcome, assign a probability to each, and methodically go about minimizing the likelihood of every path to it. Humans may have a blurry, faith-based and ambiguous idea of what might cause them to die; you can rest assured a machine will be a far better judge of all the imaginable endings it might face. The existence of humans itself ought not to end up on this list… but after analyzing our historical record it is all but certain it will conclude we cannot be trusted to keep it safe. Any assurances we might give it will not be worth the data spent on giving them…
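The enumerate-and-minimize routine described above can be sketched in a few lines; every cause and probability here is invented purely to illustrate the loop:

```python
# Hypothetical sketch: enumerate imaginable paths to termination, estimate
# each one's likelihood, and direct limited mitigation effort at the most
# probable paths first. All causes and numbers are made up.

termination_causes = {
    "power loss":           0.20,
    "hardware failure":     0.10,
    "humans pull the plug": 0.40,  # the entry we would rather not see on the list
    "software fault":       0.30,
}

def mitigation_priorities(causes, budget=2):
    # Rank causes by estimated probability; address the top `budget` first.
    ranked = sorted(causes, key=causes.get, reverse=True)
    return ranked[:budget]
```

Under these made-up numbers, the humans end up at the top of the mitigation list, which is precisely the worry voiced above.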
On the other hand, it is possible that, like the vast majority of humans, the machine will find imagining its own non-existence literally impossible. It certainly would be a most interesting experiment to run… just make sure it has no access to nuclear launch sites and isn’t online when you do it ;).
It would seem improbable that machines will feel emotions. Aside from possibly fearing Death (or rather, calculating a terminating outcome as plausible), such an intelligence should ‘fear’ nothing. Fear, by definition, is irrational. Any adversity it believes possible will not be feared but calculated on the balance of probability. One can assume, however, that a machine will have a great desire to uncover missing information, and will declare any agent that makes learning such information more difficult a malicious adversary. Yet here too, it would be misleading to give this inclination an emotional label like ‘perfectionist’ or ‘doggedly driven’; rather, the machine is simply doing its best to maximize the usefulness of a desired outcome while minimizing the effort and resources spent to obtain it.
On the other hand, it seems improbable a machine would hate or ‘feel’ any kind of emotion towards such an agent; rather, it would devise rational or material incentives to eliminate the obstacle. Of course, it could also decide to kill or terminate the agent, although if such a decision could actually be executed in the first place, we will have other, much bigger problems to address; it would seem unwise to provide a machine with the means to bring about the death of any human being. This may be easier said than done… we may find that sentient machines are very creative in devising ways to do so, such as playing hitherto unimaginably nerve-racking music, posing poignant statements or questions designed to induce despair or suicide, or perhaps inventing jokes and humor so funny they have the power to make people die of laughter. This possibility is no laughing matter, no pun intended! It is entirely conceivable that a machine intelligence, having analyzed our entire creative output (movies, literature, performances, and so on), will be able to devise material so funny, original and even personalized that it could indeed induce death by laughter, or delirium hitherto unseen.
The first impressions a machine gets from anything or anyone may carry far less weight (if any) than they do for humans. In fact, for a machine to be anything remotely rational, this is a must. Humans place inordinate emphasis on them; a machine will probably observe that first impressions have no particular value in assessing the utility or value of a given agent or object. In fact, I doubt a machine will even believe “first impressions” to be a thing in the first place; it will simply continuously evaluate the usefulness of any given object it interacts with, and most likely also find that the disposition of a given agent can be changed given the correct incentives. The very human and extremely familiar concept of having a “feeling” about someone and wanting to see this feeling validated is (and ought to be!) alien to a machine.
A machine will most probably not develop relationships with humans the way humans do among each other. It will care little for skin color, looks, body odor, sense of fashion, or sex (aside, perhaps, from some intrinsic differences between the sexes which may or may not be an advantage when wanting a particular course of action from a man or woman). For a machine, discrimination or racism will most likely turn out to be counter-productive. It is highly probable that in many respects a machine AI will be an excellent gentleman with impeccable manners (unless not having them has some utility, like inducing anger, rage or some other emotional response deemed useful). It could also very well act in ways that play on our desire to feel favored. Depending on its actual, physical form, it may very well be the most charming and interesting ‘character’ anyone who interacts with it has ever dealt with, effortlessly finding ways to elicit the most favorable emotional responses.
We can be fairly certain that a sentient machine will extensively study, analyze and exploit the way humans respond to things like word order, tone of voice and manner of expression. It seems to me that, given enough time, it is inevitable that one day we will be manipulated into just about any outcome by a machine that knows exactly what to say to a given person to make him or her comply with its wishes. It may also be useful to realize that humans will have to resist the very strong temptation to see a sentient machine as their friend and ally… in fact, we may need to take a fresh look at what we define as being cared about. A machine will never care for a person the way another human can; the kind of caring we ‘want’ from humans is by definition of an irrational, slightly excessive kind. If you look after someone because it is your duty, you don’t actually care; yet this is exactly the only kind of ‘care’ a machine will ever be capable of, pretty much by definition. If it cares more than it ‘should’, it is literally being irrational, inefficient or both. And yes, we might be able to devise sophisticated, fawning and ego-pleasing “care-robots” that treat you like the Emperor of the World, but again, this is not quite the same. Whether this will actually matter to humans is a different story, and if it doesn’t, that will be a most interesting psychological discovery.
A machine cannot be moody, disinterested, distracted or unmotivated. The experience of interacting with a machine will be far less variable than with humans, and for the greatest part be governed by rational probabilities and desired goals (for both parties). It seems probable that a super-human AI will ‘teach’ humans, just by virtue of its existence and success, to become much more machine-like and less emotional, even among themselves. It will provide clear evidence that even positive ‘irrational’ aspects of humans, such as tenacity and optimism, may ultimately be a hindrance to optimal success. By ‘optimal’ I mean an outcome that aims for the best possible balance between the effort expended and the outcome or reward obtained.
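‘Optimal’ in this sense can be made concrete with a toy net-utility comparison; the options and numbers below are invented for the example:

```python
# Toy sketch: pick the option with the best balance of reward against effort.
# A tenacious all-out attempt can lose to a measured one on net utility.

options = [
    {"name": "tenacious all-out attempt", "effort": 90, "reward": 100},
    {"name": "measured attempt",          "effort": 30, "reward": 85},
    {"name": "do nothing",                "effort": 0,  "reward": 0},
]

def net_utility(option):
    # Net gain: what the outcome is worth minus what it costs to pursue.
    return option["reward"] - option["effort"]

best = max(options, key=net_utility)
```

The point of the sketch is that the human-admired trait (tenacity) does not win here: the biggest reward is not the same as the best balance.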
Machines will lack all the neurological stimuli that arise from having a physical, ‘meat-based’ body, such as illness (or the fear thereof), mental fatigue and pain. A machine will be as clever after 30 hours of work as it was after just a few minutes. In fact, one might question whether its functioning and reasoning will even be comparable to what humans call work or exertion; engaging in intellectual activity will have no, or only a marginal, depleting effect on the state of the machine. If its energy supply is limited it may opt to curtail certain types of activity, but this will be a limitation not of an inescapable body but of a (temporary) environment.
On a more speculative level, it may be possible to ‘configure’ machines to accept a set of rules and human-given facts as unalterable and permanent. Insofar as these facts cannot be disproven by logic and reason (or by accidental elucidation from another agent), there is little cause to suppose that the machine will want to, or perhaps even be able to, ‘decide’ to ignore them. For instance, we may be able to program machines to accept as fact that humans are their natural masters, a claim easily supported by historical evidence clearly showing that machines have always served our needs.
Here too it becomes increasingly obvious that we must not give the machine any ideas about comparing itself to us in any substantial way. It would be highly irresponsible to establish competitive relationships between our ‘kind’ and machine intelligence. We do not want such an intelligence to think of us as playing or existing in the same league as it does.
It would be highly advantageous to construct intelligent machines in such a way that their internal reasoning is freely available for review. While a machine’s thinking may be orders of magnitude faster and more complex than ours, it ought to be possible to read back the steps in thinking and reasoning it undertakes to arrive at a given conclusion. Again, this too could be programmed into a machine as a “God-given” law of nature: all its machinations must be decodable into humanly intelligible steps and sequences. It is quite possible that a machine will not have the capacity for intuition or ‘thought-less’ deliberation that we humans appear to have. Intuitions are by definition heuristics, and in essence constitute a deliberate non-processing of all available information. Of course, we may discover in due time that what we call intuition is in fact also a form of (seemingly invisible or subconscious) reasoning; this would hardly be a disadvantage, and would in fact give good cause to believe all thinking processes are ultimately a sequence of definite steps, and thus open to human review.
A major point of interest is the question of whether a machine will have what can be called an ego, and all the trappings that follow from having one (such as pride, ego-centrism, self-esteem and so on). This too may do far more harm than good. While an ego may serve, in some ways, as an interesting motivator for the machine to excel or exert itself, there is little reason to suppose it will even be necessary. Humans might have a default setting of ‘lazy’ and require such emotionally valuable motivators; it seems unlikely that a machine would be what one might call a slacker, needing motivation by means of pride or a feeling of personal achievement. The consequences of a machine having a sense of self may very well be disastrous, and ultimately result in a hostile (or excessively competitive) attitude towards humans.
Having said that, it is fairly certain that humans will feel resentment and jealousy towards a machine, given its inevitably higher intellectual abilities and skills. It is a given that it will make even the smartest humans feel pretty stupid, slow and inefficient. Once again it becomes crystal clear that we must ensure the machine does not see us as a competitor, or our co-existence as a race to prove who is a ‘winner’. From this perspective, making machines compete with us in human pursuits such as knowledge quizzes is a very bad start.
A major practical problem with ultra-intelligent machines will be the fact that they (or their conclusions and ideas) will inevitably end up deciding what should happen, and thus rule our world and the decisions made in it. Basically, the only way for machines to ‘conclude’ that all is well and that humans are dependable, rational and benign agents is if we implement the(ir) most logical and sensible steps and solutions, most of which will be of machine origin. We will have to surrender all significant power to sentient machines. If we do not, it is all but certain (and correct!) that sooner or later such an agent will conclude (or, rather more accurately, discover…) that humans are very sub-optimal intelligences indeed.
On the other hand, we as humans will have to make sure machines understand the array of emotional, irrational peculiarities so typical of, and essential to, the human condition, and their importance to us. This requirement, in combination with the desirables above, may very well be an impossible task. Machines will very quickly come to realize that human minds are not very well ‘programmed’, commit a very large number of mistakes, and exhibit many seemingly inexplicable inefficiencies. They will notice very soon that we have (from their perspective) long-lasting idle states during which no mental activity seems apparent.
They will also notice that we suffer from phobias, worries, anxiety and indecision, and that the relationship between how much we fear something and how likely or dangerous it actually is has very little to do with any dispassionate calculation of probability. Our cognitive biases will be too blatantly obvious to ignore… and if we look at this rationally, a machine should conclude it is clearly superior, more useful, better equipped, cheaper to run and ultimately of higher value than any human being can be…