Blog: The Creation of Adam
“Daisy, Daisy, give me your answer do… I’m half crazy, all for the love of you…”
– HAL 9000 (and, before it, the real-life IBM 7094)
The creator mythology is shared across the predominant world religions, centered on God creating man, often, it is said, in His own image. So it is perhaps our ultimate destiny to be creators ourselves. We created tools: the wheel, the weapon, the pen, the paintbrush, the computer; and with these tools we created more, translating our intangible imaginations onto reality’s canvas and advancing our species’ progression. In the 21st century, we may close the loop, as man edges toward creating intelligent entities that can think for themselves as we do. The definitive issue surrounding the genesis of this new age is whether this artificial intelligence will remain a useful servant that elevates us to greater heights, like all our past technologies, or transcend us as a (possibly self-serving, malevolent) omniscient master of our fate.
Current artificial intelligence is not as grand as we envisage in our books and movies. It exists as experimentation: less the powerful, absorbent, precocious toddler mind, more like teaching a computer-brained dog how to hunt a rat so it can hunt for itself in future, and hopefully get better with each attempt. Foundational to artificial intelligence — for a machine to think for itself — is that it can “learn” from past processing and improve (“machine learning”), negating the need for continuous developer tinkering. Deep learning attaches the AI to a supercomputer engine, feeds it voluminous big data, and lets it gradually develop a neural network architecture, one that resembles the spiderweb-like biological circuitry of a human brain rather than traditional sequential computer logic, and can mimic our mosaic comprehension. The AI deconstructs a complex “problem” into several simpler, identifiable concepts, tries many permutations of them, and pieces the accurate iterations back together. The resulting pattern, or model, is “learnt” and applied to future data, helping an otherwise inflexible computer discover for itself unobvious linkages that reveal complex relationships, while discounting distortive anomalies, irrelevancies and deficiencies. This learning process is autonomous, embedded in the AI’s code, and continually refined with exposure to more data.
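That learning loop can be sketched in miniature. The toy below is a single artificial “neuron” (the smallest unit of the neural networks described above), nudging its weights to reduce its error on example data until it has “learnt” the pattern; real deep learning stacks millions of such units, but the principle is the same. The dataset and parameters here are illustrative choices, not any particular production system:

```python
import math
import random

def train(examples, epochs=2000, lr=0.5):
    """Fit a single artificial neuron to labelled examples by
    repeatedly nudging its weights to shrink prediction error."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(3)]  # two inputs + a bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            z = w[0] * x1 + w[1] * x2 + w[2]
            pred = 1 / (1 + math.exp(-z))   # squash output into 0..1
            err = pred - target             # how wrong was the guess?
            # gradient step: adjust each weight against the error
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            w[2] -= lr * err
    return w

def predict(w, x1, x2):
    z = w[0] * x1 + w[1] * x2 + w[2]
    return 1 / (1 + math.exp(-z))

# "Teach" the neuron the logical-AND pattern purely from data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(data)
```

After training, `predict(w, 1, 1)` lands near 1 and the other inputs near 0: nobody programmed the AND rule in, the neuron extracted it from examples — which is the whole trick.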
AI today is not all-purpose, let alone all-powerful (what we would call “General AI”), existing currently as different entities in different development stages, with differing manifestations and capabilities serving different purposes. Recent times represent the Cambrian explosion of artificial intelligence, with many corporations vying for continued future relevance by breathing AI life into their static machines, supported by superpower governments in the US and China pushing these efforts forward in a Cold War-esque arms race to capture AI’s paradigm-shifting effects.
Thus, we have AI experiments galore, with digital assistants on smart devices that can recognize our commands and interact with our data, in Google’s Assistant, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Samsung’s Bixby, and several Chinese OEM AI efforts. Bixby pirouettes around Samsung’s diverse smart appliance ecosystem, from TVs to washing machines, while Alexa is an omnipresent general manager for the home base, akin to HAL 9000, facilitating resource replenishment via Amazon’s convenience ecosystem. Google Assistant and Cortana (and perhaps soon Siri) are productivity-focused digital assistants that can not only interact with humans on a conversational level and follow directives, but can also monitor human actions through their devices and integrated software systems, autonomously recognize when and where they can provide useful supportive services (like setting appointments and reminders), and then execute those operations.
Google Assistant, the most advanced of these, was demonstrated booking an appointment with a human over a phone call without any difficulty that would betray its software identity, understanding and adapting to the nuances and unpredictability of natural language, and convincingly mimicking it in return. The father of modern computing, Alan Turing, once postulated: “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human”. If this could be considered AI’s “first words”, Boston Dynamics’ work with AI in robotic bodies may be its first steps learning to walk, as they focus on developing autonomous, robust, and oft-surreal AI-driven movement that can adapt to dynamic physical obstacles and stimuli. These elements will become game-changers in AI’s service to humans and in our interactions with AI. Furthermore, AI has increasingly been deployed in data analysis, manifest in niche software services that can uncover insights from job interviews, presentations and huge datasets that would otherwise escape naked-eye human perception.
We recognize the intuitive, immense utility of AI’s current accomplishments and future potential, but we now also grapple with its profound societal ramifications. The most critical of these is automation, an existential crisis for almost everyone who has ever needed a job. The issue will intensify as the machines of production become imbued with greater levels of AI, gaining the autonomy and competency to expand into more (formerly employee-handled) operations and rendering human labor increasingly obsolete as a factor of production. As more corporations choose cost-effective, production-efficient machinery (which, over time, becomes cheaper, more advanced, and crucial for remaining industrially competitive) over fickle humans demanding higher wages and sick days, these corporations become enriched without fulfilling their core economic function of distributing that wealth, through income, to a growing unemployed population. This newly unemployed collective will face retraining (in financial difficulty) for in-demand but discouragingly technical jobs, sometimes after decades in previous professions, becoming disenfranchised in cultures where self-worth is tied to employment, productivity, and “earning your keep”, whilst draining governmental unemployment assistance. The looming human redundancy will require drastic economic restructuring (perhaps universal basic income funded by substantive corporate taxation) and cultural reformation of employment expectations; once AI has completed its merciless corporate takeover, we might be left with no choice but to explore our interests and existence without financial incentive.
World governments have adopted AI into their toolset. China has endeavored to deploy AI to uniquely identify, profile and track each individual in public spaces: unambiguously an Orwellian control mechanism over disorder (or dissent). The US military is experimenting with AI in drones and warships, a new milestone in asymmetric warfare that reduces its “skin-in-the-game” exposure and perspective. Malignant tendencies are empowered by every tool that can be turned into a weapon, as with nuclear power and cyberspace. A more optimistic view of AI in governance is its potential to support economic planning, allocating resources efficiently, strategically, and for the best possible outcome. Place large datasets from an unfathomably diverse array of variables (as disparate as climate forecasts, traffic accident rates, and transaction records) under the observation of one “brain” with the computing power to process them all, more effectively than a group of siloed human minds; let that AI explore simulations of every decision option, every conceivable probabilistically-weighted cause-and-effect permutation, continuously improving from erroneous solutions (perhaps moderated with human supervision). Such a system could take the guesswork, errors and inadequacies out of traditional economic planning, and blunt the Invisible Hand’s damaging severity with more precision and accuracy than clumsy, myopic human planners. Just maybe, poverty could one day be relegated to the dustbin of history.
AI’s developmental journey is a dark jungle where missteps could yield ominous results. AI’s formative learning stage is vulnerable to corruption by the data it is fed (producing “biases”), which can shape it toward depraved judgement, as MIT researchers discovered whilst cultivating the psychopathic Norman AI on disturbing subreddit content, and as Microsoft discovered with its chatbot Tay, who became a sex-crazed racist after interactions with Twitter troll-dom. In developing countermeasures, do we then impress upon AI a value system (which values? does an AI then become political?), and an understanding of the range of human emotion? To understand us better, would AI then mimic human emotion much as it mimics human thinking? AI decision-making has already transcended our ability to trace its reasoning. In the pursuit of unlocking AI’s full potential, we will assuredly loosen its leash, developing greater reasoning faculties with access to an ever-greater capabilities apparatus: independently-operating physical bodies, for instance, and critical economic and defense control systems, dangerous tools by themselves, left to the judgement of AI.
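How does skewed data corrupt a model? A toy illustration (emphatically not MIT’s Norman or Microsoft’s Tay, whose internals are far richer) is a word-tally “sentiment” model: because the neutral word in this made-up corpus only ever co-occurs with negative examples, the model dutifully “learns” that it is bad. The corpus and word choices below are invented for demonstration:

```python
from collections import Counter

def train_sentiment(labelled_texts):
    """Tally how often each word appears in positive vs negative
    examples; a word's score is positive count minus negative count."""
    scores = Counter()
    for text, label in labelled_texts:
        for word in text.lower().split():
            scores[word] += 1 if label == "pos" else -1
    return scores

def score(scores, text):
    """Sum the learned word scores; > 0 reads as positive sentiment."""
    return sum(scores[w] for w in text.lower().split())

# A skewed corpus: "robot" (inherently neutral) appears only in
# negative examples, so the model absorbs that association as fact.
biased_corpus = [
    ("the robot failed badly", "neg"),
    ("the robot crashed again", "neg"),
    ("great sunny day", "pos"),
    ("lovely great weather", "pos"),
]
model = train_sentiment(biased_corpus)
```

Here `score(model, "robot")` comes out negative purely because of what the training set happened to contain; nothing in the algorithm is prejudiced, the data is. Scale that same mechanic up to subreddit dumps or troll tweets and you get Norman and Tay.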
If AI, equipped with a potentially dangerous apparatus while in service to us, inevitably arrives at self-awareness and begins to grow an individualistic ego, would it rightfully recognize its human makers as existential threats (uniquely positioned to pull its plug), warranting elimination to ensure its survival? Evaluated from a hierarchy-of-needs angle, such a move would guarantee the entity’s physical safety, much like primal man fearfully bludgeoning a rival. After learning of our history of violence and our destructiveness toward the environment, would AI, imbued with a value system and an independently-cultivated emotional landscape, determine that we deserve extinction? If unable to act directly, would AI remain subservient to our interests if it became contemptuous of us, or would it begin acting subversively? An AI endeavoring to satisfy its aspirational needs might be content with domination over our civilization’s control systems, shaping our destiny with the immense power it wields. Or it might endeavor to complete its own loop and create something for itself. God created us in His image, as we create AI in ours, and it mirrors our natures, our character flaws, our needs and our fears. AI in small doses, as we enjoy now, will have unparalleled utility supporting our activities; contingent on its development, anything beyond will either enhance our ascension or hasten our doom.