The scale and scope of new problems that require careful consideration increase roughly linearly with society’s net gain in Artificial Intelligence (AI) systems (though not exclusively). For example, as self-driving vehicles replace legacy transportation systems, a thorough accounting of all the changes required to fully implement the new system could be created. Much like investors reading financial statements, interested and affected parties could see all the projected consequences, both good and bad, of AI adoption.
Even when the system in question has a much smaller societal footprint, its affinity to a tabular representation does not change. In the medical field, for instance, a similar accounting could be prepared to examine the effects of replacing surgeons with mechanical ‘hands’ that perform a surgery while supervised by a small drone. It follows that for any change from a legacy system to an AI system we can expect a linear increase in the number of material concerns that must be addressed by the public consciousness. Without going into a lengthy argument, I hope the reader will be satisfied by a claim to self-evidence insofar as I purport, firstly, that such a list is material and, secondly, that reducing the items on this list is objectively good.
The purpose of this document is to offer an approach to redressing the negative effects of technological advancement generally, by describing instructions that reduce the rate at which AI systems can be adopted. In order to reduce the expected footprint of AI adoption, we could, firstly, consider regulations with an automatic trigger that repeals them once certain societal conditions are met. Thus, the raw number of legacy systems replaced by new AI systems can be curtailed in line with the resources available to mitigate any negative effects of the change.
In particular, by requiring chip manufacturers to limit to 5,000, initially, the registers through which artificially intelligent agents can access immutable memory, we curtail the number of AI systems that would be good candidates to replace legacy systems. To understand how this accomplishes the stated goals, we first have to consider how human intelligence works. As I will show, computers and humans share a common mechanism essential to formulating solutions to real-world problems. Thus, if restraining human efficacy in this regard is feasible, a parallel restraint with respect to computers can be expected.
How the Human Mind Works
Unlike most, I believe that, using Stephen Pinker’s linguistic analysis in Words and Rules and, by extension, his sense of how the mind works, a computer can and does apply the same two rules he suggests. Namely,
- Morphology — Store Words
- Syntactic Rules — Construct sentences, and by extension, thoughts
Computers mimic Human Morphology
At the most basic level, that is, just before the physics, a computer uses Assembly Language to map bytes (8 bits) to set registers. For example, in a simple 16KB machine, there could be specific registers for 256 characters, numerals, and symbols. As a simple example, R64 could map to Register 64, which holds the bit pattern 1000 0000. This is different from the remaining memory, which could be malleable, such as the Heap, or designated for inputs (screen, keyboard, etc.).
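The fixed character-to-register mapping described above can be sketched with a toy model. The register names, the 256-entry table, and the rule that register n holds the bit pattern for value n are my own illustrative assumptions, not a real instruction set:

```python
# Toy model of a machine whose "immutable" registers each hold one
# fixed 8-bit pattern, as described above. The mapping (register n
# holds the bit pattern for value n) is an illustrative assumption.

# Build 256 fixed "registers" for characters, numerals, and symbols.
REGISTERS = {f"R{n}": format(n, "08b") for n in range(256)}

# R64 names Register 64; under this toy mapping it permanently
# holds the 8-bit pattern for value 64.
print(REGISTERS["R64"])   # 01000000
print(len(REGISTERS))     # 256
```

This fixed table stands in contrast to malleable memory (the Stack, the Heap, input buffers), which the program rewrites freely at run time.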
Computers only move memory, often Recursively
Pinker makes the case that language uses finite media, namely words, to form an infinite number of thoughts through a combinatorial, recursive grammar. Computers move memory by jumping, looping, etc., i.e., they mimic these exact rules of syntax. Scheme, a high-level computer science language like C++, but one that idiomatically uses recursion rather than for loops, is a perfect map to Pinker’s argument. The question here is the size of the media that a computer can store in specific registers, as opposed to what sits on the Stack, Heap, etc. (malleable memory). Quantum computers, which may become available in the next decade, will have many more registers for specific media; plus, I imagine, the same rules, if not more, as existing machines (I am vaguely familiar with how they work, but I know they are dissimilar to existing machines).
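The "finite media, infinite combinations" point can be sketched in a few lines: a small fixed lexicon (morphology: stored words) plus one recursive syntactic rule yields sentences of arbitrary depth. The toy grammar below is my own invention for illustration, not Pinker's actual formalism:

```python
# Finite stored words (morphology) ...
LEXICON = {"NP": ["the dog", "the cat"], "V": ["saw", "heard"]}

# ... plus one recursive rule (syntax) gives unbounded output.
def sentence(depth):
    """S -> NP V 'that' S (recursive case) | NP V NP (base case)."""
    np, v = LEXICON["NP"][0], LEXICON["V"][0]
    if depth == 0:
        return f"{np} {v} {LEXICON['NP'][1]}"
    # The recursive step embeds a sentence inside a sentence,
    # the same self-reference a computer expresses with recursion.
    return f"{np} {v} that {sentence(depth - 1)}"

print(sentence(0))  # the dog saw the cat
print(sentence(2))  # two levels of embedding; any depth is possible
```

Four stored words suffice to generate an unbounded set of distinct sentences, which is the combinatorial property the argument rests on.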
Nation States, a race to pure AI
From what I have read, though I am far from an expert, nation states such as the US and China are allegedly in a race to AI, similar to the Cold War between the US and the USSR. The additional pressure from nation states is one of the reasons many thought leaders in tech, such as Elon Musk and Bill Gates, seem adamant that it will arrive. Others, such as Peter Thiel, are less convinced.
Policy that limits the registers to 5,000
Pure AI could replace far more legacy systems than the change from fossil fuels to renewable energy, for example. This staggering degree of change should highlight my suggestion to restrict the growth in the number of AI agents to a level tolerable even among the most affected citizens. More to the point, we can ask manufacturers of PC chips to allow no more than 5,000 registers to be made available to intelligent agents. The average human chooses words from a finite set of the analogue of registers within the human mind. More succinctly, Pinker argues that we have 10,000 of these ‘registers’ for words. Ergo, by restricting computers, at first, to half as many ‘registers’ as the average human mind, we likewise restrict AI’s efficacy to a level comparable to an average human operating within a legacy system. Thus, the advantage of AI does not appear so obvious.
And, as we meet the challenges of the initial change from legacy to AI systems, we can continue to increase the number of registers available in PCs for intelligent agents. In doing so, we can reverse the negative perception among many in the West toward technological change. As an example, Italians recently awarded a parliamentary majority to a party whose governing platform consists of five points, one of which is ‘no growth’. I hope, for self-evident reasons, that the reader acknowledges that in the case of AI, ‘no growth’ is not only not an option; it is also not a way of living characteristic of Westerners.