Sensor and Actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics
Amazon’s director of robotics stated in April 2019 that it would be “at least 10 years” before warehouses become fully automated (Rachel England, Engadget, 2 May 2019). Meanwhile, as we detail below, Chinese production of industrial robots has been falling continuously, from September 2018 (-16.4%) through March 2019 (-14%). What do these facts imply? Why does it matter? What are robots, and why are they actually also crucial to the whole field of Artificial Intelligence (AI), including deep learning? What is at stake for major stakeholders, from the GAFAM to countries such as China, in strategic, political and geopolitical terms? These are the questions that this article tackles and answers.
Summary of previous findings
As we explored the drivers of AI (see Artificial Intelligence – Forces, Drivers and Stakes), we identified that an AI-agent or system needs a twin interface to be inserted within reality. This twin interface is made of a sensor and an actuator. The sensor senses the environment of the AI and translates it into inputs the AI can understand. At the end of the process, the actuator performs the opposite operation: it transforms an output that is intelligible only to the AI into something that not only humans can understand, but that can also act in the world according to the purpose of the AI-agent (see Sensor and Actuator (1): Inserting Artificial Intelligence in Reality).
Content – Published in two parts.
We thus have a sequential process of insertion of AI-agents in reality.
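This sequential process can be sketched in code. The sketch below is purely illustrative and not from the article: all class and variable names are hypothetical, and the “AI” in the middle is reduced to a simple decision rule, using the smart-farming case discussed further down (a moisture sensor and a smart sprinkler) as the example environment.

```python
# Minimal sketch of the sensor -> AI -> actuator sequence.
# All names are illustrative; the decision rule stands in for an AI/DL model.

class Sensor:
    """Senses the environment and translates it into an input the AI can use."""
    def read(self, environment: dict) -> float:
        return environment["soil_moisture"]  # e.g. a smart-farming reading

class AIAgent:
    """Stand-in for the AI/DL model: turns the input into an output."""
    def decide(self, moisture: float) -> bool:
        return moisture < 0.3  # irrigate when the soil is dry

class Actuator:
    """Transforms the AI's output into action in the material world."""
    def act(self, command: bool) -> str:
        return "sprinkler on" if command else "sprinkler off"

def run_cycle(env: dict) -> str:
    """One full pass of the twin interface: material -> digital -> material."""
    sensor, agent, actuator = Sensor(), AIAgent(), Actuator()
    return actuator.act(agent.decide(sensor.read(env)))

print(run_cycle({"soil_moisture": 0.2}))  # dry soil -> "sprinkler on"
```

The point of the sketch is the sequence itself: however sophisticated the model in the middle, it only acts on the world through the actuator at the end of the chain.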
Meanwhile, we identified that this sequential process must also consider different types of worlds, environments or “realities”: the digital or virtual, and the physical or material – a vision we shall revisit in the second part of this article. The twin interface thus also serves as a bridge between these two types of worlds or realities.
Moving to practical cases, first with the IoT, we highlighted two types of bridges for our sensor and actuator: digital-digital and digital-material. With the example of smart farming, we showed that those AI/Deep Learning (DL)-based systems that could design, create and deploy the whole AI/DL sequence, from material to digital to material, were the more powerful. In these cases, the actuators are “real physical things”, such as smart sprinklers and smart tractors (see Artificial Intelligence, the Internet of Things and the Future of Agriculture: Smart Agriculture Security? (1) and (2)).
In this light, any device, machine or technology that can understand AI outputs and then translate them into action in the material world will be a powerful asset. Such actuators indeed allow for the full adoption of AI, because their direct efficiency drives use. Incidentally, here we begin to include our last AI driver, use and usage, but we shall come back to this point later on. To return to our twin interface and what AI-agents need: 3D printing, for example, could be a very interesting actuator. Similarly, robots are, or should be, an actuator of choice.
Outline of the article
Thus, in this article, we look, in a first part, at robots: what they are and what they should be from the perspective of our AI-agents. We then highlight some of the hurdles faced in what becomes a long march towards advanced robots. We focus on the difficult sector of consumer robots and contrast it with “simpler” types of robots, with consequences for robotics strategy. We then look at a slowdown of the global market (possibly rather a reappraisal) and underline a downturn in Chinese industrial robot output. Finally, we highlight why it matters to major stakeholders – namely very large IT companies and countries such as not only China but also Saudi Arabia and the U.A.E. – not to let these related and reinforcing difficulties slow them down or, worse, stop them, considering the economic, political and geopolitical stakes.
In the next article, we shall turn to one way actors overcome the “robotic challenge” in a difficult context – a way that is all the more important considering the stakes: human beings are being turned into actuators.
Robots and Advanced Robots
Robots are defined as:
A powered machine capable of executing a set of actions by direct human control, computer control, or both. It is composed minimally of a platform, software, and a power source.
The U.S. Army, “Robotic and Autonomous Systems Strategy”, March 2017
As they are focused on action, robots are actuators of choice. Indeed, next comes the definition of Robotic Autonomous Systems, or RAS: