Blog: A New AI-Powered Jail
For centuries, jail has consisted of restricting somebody’s movements and actions. People found guilty of a crime, if they were not sentenced to death, were confined to a restricted space where they could neither repeat the crime nor profit from its outcome.
This state of affairs may change in the future. As Artificial Intelligence permeates our society and enters our decision-making processes, confinement might become more subtle and less evident.
Recommendation systems and search engines today have the power to create bubbles around us, building on our tastes to direct how we act and what we know. They do this by collecting data on our beliefs, preferences, and actions, and processing it with predictive algorithms to suggest the course of action best suited to our tastes. In doing so, they play a major role in forming those very tastes, shaping our emotions, and hence our future decisions.
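The feedback loop described above can be illustrated with a toy sketch: a recommender builds a profile from the tags of items a user has already consumed, then ranks new candidates by how well they match that profile. Every name, tag, and scoring rule here is invented for illustration; real systems are far more elaborate, but the narrowing effect is the same.

```python
from collections import Counter

def recommend(history, candidates, k=2):
    """Rank candidate items by overlap with tags the user already consumed.

    history: list of consumed items, each a set of tags
    candidates: dict mapping item name -> set of tags
    Returns the top-k candidate names.
    """
    # Build a tag profile from past behaviour
    profile = Counter(tag for item in history for tag in item)
    # Score each candidate by how strongly it matches the existing profile
    scores = {name: sum(profile[t] for t in tags) for name, tags in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

history = [{"sports", "news"}, {"sports", "video"}]
candidates = {
    "match highlights": {"sports", "video"},
    "election coverage": {"politics", "news"},
    "cooking show": {"food", "video"},
}
print(recommend(history, candidates, k=1))  # → ['match highlights']
```

Because the profile is built only from past behaviour, the top recommendation reinforces what the user already consumes: the bubble feeds itself.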
What if this mechanism were exploited to create a jail powered by Artificial Intelligence?
In essence, it would work like this: people “in jail” would be subject to all sorts of behavioural data collection and monitoring. Data would be collected on how they move, what news they read, what people they associate with, what ideas they communicate, what physical activity they practice, and so on. This data would be centralised and administered by an AI-jailer algorithm, whose job would be to forge future courses of action for the prisoners in line with the restricted, confined, harmless behaviours envisioned as “admissible” for them. In this way, prisoners would be able to enjoy a fairly “normal” life, and even share space with those “not in jail”, in an almost seamless fashion.
For instance, suppose that, in an imaginary future, a society decides that one type of jail sentence is the inability to vote in general elections. The AI-jailer algorithm would then monitor those prisoners and induce a disinterest in voting, perhaps directing them to spend election day at the seaside instead of queuing at the ballot box. The prisoners might not even know they had been denied the right to vote, because they would feel as if they had “spontaneously” decided not to “exercise their right”.
Or, as a different example, suppose that another future society decides that a jail sentence consists of being confined to a specific place, city, or nation. The AI-jailer algorithm could then make prisoners feel that this place is the best in the world for them, the place where they can achieve all their goals in life and be happy and fulfilled. It would induce new friendships with other prisoners, so that nobody would feel lonely. And it would discourage any attempt to leave, by suggesting something interesting or a reason to stay: for instance, engineering a career progression for a prisoner who wanted to leave for career reasons, or instilling fear of the unknown and threats to personal security in the “world outside” (à la Truman Show).
Such an AI-powered jail would be less violent and more respectful of the prisoners’ living conditions, while still severely restricting their freedom. Some might even consider it a gentler approach than today’s prisons and sentences, which rely on physical confinement.
However, I believe that AI Jails could work only if certain conditions are met:
1. Prisoners should not be aware they are in jail
2. Prisoners should not resist relinquishing all their relevant behavioural data
3. Prisoners should listen to, and trust, the recommendations received from the AI-jailer algorithm
Condition (1) is, in my opinion, the most sensitive, and it raises numerous ethical and humanitarian issues.
This is, of course, a purely speculative exercise. To effectively confine only a part of the population without affecting the whole, such a system would require at least one of the following conditions:
1. There should be a trustworthy authority that values freedom of non-prisoners and manages the AI-jailer algorithm in order to guarantee it is not applied to non-prisoners.
2. Or, it should be possible for non-prisoners to opt out of being monitored or of giving away their behavioural data, while still enjoying the same quality of service as those who do not opt out.
3. Or, it should be possible for non-prisoners to control and tweak the parameters employed by their recommendation algorithms, search engines, fitness apps, and all sorts of human-supportive artificial intelligence programs they use, so as to still be free to determine their future behaviours.
Some might argue we are heading towards, or already experiencing, certain forms of AI Jail: notably ad tailoring by social networks, or movie recommendations from video streaming services, to name a few.