

Blog: Mistaken Identity: why the media using humanoid robots to represent AI is bad news

Using pictures of shiny silver robot men to represent AI has always been annoying. It also risks damaging the industry.

“Am I the problem?”

In a great article in 2018, Adam Geitgey railed against “The Real Scandal of AI: Awful Stock Photos”. He captured the naffness and predictability of the “robot fingers poking at a keyboard” and the “generic 3D model of an android from 1998” used to illustrate articles about Artificial Intelligence. At the time the Best Practice AI team was sorting through thousands of articles to build our open library of AI use cases and case studies, so this struck home; we had a laugh and retweeted the article enthusiastically.

This week I gave evidence to the All-Party Parliamentary Group on AI in the UK House of Lords. I was on stage with luminaries from companies like Accenture and Microsoft discussing barriers to enterprise adoption. One topic that came up repeatedly was executive education. Clearly AI and Machine Learning are not simple topics — it takes time to understand them and we are still figuring out what the economic and business impact of the new technology will be. And shiny silver robot men are typically used to illustrate articles explaining all of this.

But I am increasingly clear that the shiny robot man illustration is storing up problems for the technology and the industry.

  1. Firstly, it sets expectations too high. The difference between narrow (task-focused tool) and general (generic human-type intelligence) AI is important. The latter is still several scientific leaps away, at best; the former is all around us. However, imagining AI as shiny silver men suggests that the technology is about the latter and will transform the world overnight. It won’t.
  2. Secondly, it mis-directs what AI can and should be used for. If thought of as something rather more mundane — “spreadsheets with attitude” is one description mooted — then it will better help non-technical folk envisage what they might use the technology for.
  3. It suggests that the main potential business benefit from AI is replacing humans through job automation. Organisations examining AI are just starting to shift their questions from “what can this technology do?” to “what is the Return on Investment (ROI) on using this technology?” And here the mental image of a humanoid robot is problematic, as it drives a focus on automation and replacing humans in jobs with robots. Examples of human job replacement are, as yet, more limited than the hype might suggest. Famously, the introduction of ATMs happened in parallel with an expansion in the number of bank clerks, and something similar may be happening with customer service chatbots.
  4. This makes it hard for executives to see the real business benefits from AI. As and when machines start reading medical scans from patients at scale the business, or social, benefit will not be the potential reduction in the number of staff looking at scans. It will be the opportunity to eliminate process bottlenecks and to introduce regular pre-emptive scanning in a way that may save many lives. Scaling up a previous scarcity is of huge economic and social benefit — and not one best understood by estimating (marginal) FTE reductions.
  5. It reinforces the potentially malign image of AI, obscuring the real sources of “AI Ethics” problems. The film I, Robot showed us nice shiny silver robot men becoming agents of the dark side, playing to generations of negative stories about machines turning against their makers. More urgently, it becomes easier to believe that the algorithms are “racist” or “sexist” rather than that they have been fed bad human-generated data. If we mis-communicate the ethical questions around AI then fixing them will get all the harder.
  6. Finally, this will help generate an over-reaction against perceived “hype”. Some time in the next few years people will take stock of the reality of AI delivery. Self-driving cars will still be 5 years away. Shiny silver robot men will still be virtual. And they will lose interest. And for them that will be a mistake, rather like turning away from the Internet after the dot-com crash in 2000. Because AI will change the world, just not (yet) in humanoid form.

The question this inevitably poses is this: what would be a better image to use? Thoughts appreciated.

Get real

Source: Artificial Intelligence on Medium
