In light of all the evidence that human biases towards different cultures, beliefs, and each other are alive and well in society today, I’ve been thinking about the consequences of this in relation to another topic dominating headlines — the race to create artificial intelligence that will learn from our knowledge and behaviors.

While the tone of headlines trends towards AI being an ominous threat, there’s a different scenario we should be considering:

When AI comes face-to-face with humans, what values will it learn from its interactions with us?

We glimpsed an answer to this question back in 2015, when Boston Dynamics released their videos of Spot, an autonomous robot that looked more like a dog than a machine. As Spot gingerly walked through the company’s cubicles, a man kicked it in the stomach without provocation. If you’re like me, you found it confusing and hard to watch this little guy being “hurt”. Testing hardware is one guess at the motivation. Attention seeking and peer approval is another.

If you see this as Boston Dynamics simply doing their job of building stable robots, imagine what would happen if you left a Pepper robot alone on the streets today. How long do you think it would be before he/she/it was vandalized, harassed, or pushed into oncoming traffic? Or taught to violate human rights on its own, like Microsoft’s bot, Tay? (You didn’t think you were getting out of this article without a Tay mention, did you?)

What does it matter? Machines don’t have feelings.

Yes, we know better than to anthropomorphize robots and turn rocks into pets, but we do it anyway. Although our tendency to give inanimate objects feelings is often criticized as a tool for manipulating people into blindly trusting the creator’s intentions, there’s another side to this discussion that must come into play.

Now that we’re building machines that learn through experience, we have to consider what we’re unintentionally teaching them in the form of openly available documents, videos, images, sounds, actions, and interactions — all loaded with our subconscious instruction manual for innate human biases.

Out of all this learning will come the natural emergence of basic AI drives — goals or motivations that most artificial intelligences will have or converge to. Think of it like an AI’s understanding of basic good and evil — as children, we are not explicitly taught every common social code. We learn a little through what our parents and teachers tell us, a little through experience, and then make inferences for the rest of the “rules”.

AIs will do the same: rather than hard-coding ethics, we’ll give them the basics, then it’s up to their neural networks to interpret and learn from new situations. Whatever these basic AI drives turn out to be, they’ll be determined by what the AI learns from watching our behaviors.

If we ever succeed in our mission to create AGI — and believe me, that’s a big if — then we need to stop defining success on this mission as a recreation of ourselves. We don’t want to recreate and magnify our own shortcomings. We want to create something that represents the best in us.

If we stop measuring machines against the Turing Test and start asking, “How do we give AI the chance to become something more morally reliable than us?”, just as we do with our own children, maybe we can prevent it from learning the dangerous behaviors we can’t seem to unlearn ourselves.

Whether you’re worried for the robots or worried for humans, one thing is certain: protecting them is the same as protecting ourselves. We’ve had some success at protecting people by declaring their rights. Some companies, like mine, already have a POV on ethical AI.

Perhaps it’s time we start expanding those rights to protect any intelligent entities we encounter — or create.

Given where our vision for AI is heading, defining how this new intelligence will be treated by humans should take priority over our fears of how it’s going to treat us.


Jennifer Sukis is a Design Principal for AI and Machine Learning at IBM, based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.