Blog: A.I. — An Inconvenient Friend
Part 2: Time & Self-Reliance
I last left off hinting at the peculiar prospect that A.I. presents to us — not necessarily doom-and-gloom sentiments about marketing or control, but the simple idea of humanity having to share sentient space with a new form of life. Well, things get more interesting when you look at it from the perspective of our own self-reliance and sufficiency.
We tend to attribute our desire for the ease of technology to its time-saving capability. We love things that save us time because we inherently recognize that time is valuable to us. Yet, in our constant push for efficiency, we're not really doing all that much with the time we've saved — at least not on the macro scale of human contentment.
Life just keeps getting busier somehow, despite the tremendous ways we’ve sped up the things around us. In this way, time is also part of the equation that we must factor in about the inherent value of A.I.
It may be a good thing that certain challenges are becoming monumentally easier than they used to be. Think of it this way: try to navigate a new city with nothing but a map — to find where to go, to determine what activities and excursions ought to be experienced without the internet telling us how to get there, when to go, why to go in the first place. Things become hazy without our smart devices to guide us, hazier yet without the internet.
Though, this slow erosion of human challenge may also prove detrimental to our own capabilities, which collectively function as a muscle that gets better the more it’s exercised. Artificial intelligence will pick up the ball where technology has left it upon reaching its own limit: we won’t have to think for ourselves anymore, at least not in a traditional sense.
To wonder, to problem-solve using the mind's astounding capacity, to contemplate, to try to solve a problem without Google: these things still happen, and will continue to happen without a doubt, but in a different way altogether.
One could certainly say that it's a complete waste of the mind's potential to solve rudimentary problems that can now be handled by A.I., and that we now have more time to spend thinking about things that really matter (though, as will be detailed later, any argument made for the sake of time-saving is something of a fallacy).
But the brain's innate capabilities of wondering and contemplating — these are natural gifts bestowed upon us by evolution, forged and tested over eons. Only now are our minds potentially reaching a turning point, whereby they can begin to regress as the effort required to navigate through life diminishes.
There are certainly pros — optimists are salivating at the sheer possibilities of A.I. having the capacity to solve quandaries that we humans seem unable to get around, solving dilemmas in the blink of an eye that would otherwise take us generations to decipher. Mathematical, social, medical, logistical, astrophysical — there is endless work for our technological successors to get their hands on, and it's an enticing prospect to consider that A.I. may be able to generate effective measures by which we can reverse climate change, establish sustainable habitats on other planets, maximize medical efficiency, and so on.
But what about us? It seems a rather anticlimactic fate to have A.I. take the wheel while we sit back and watch Netflix. Is it even possible to reach such a point? A.I. has to be programmed by us, after all, but these questions are aimed more at the general population, who don't necessarily have to worry about programming their artificially sentient sidekicks. The average person no longer needs to calculate, to plan routes, to build logos from scratch, to code, to decide which products to buy, to be selective or intuitive, resourceful or enterprising.
While it seems a dark picture, and maybe a bit dramatic, there’s a silver lining that can be found, one that’s obscured by the hazy workings of time.
Time plays a much more important role than it would ostensibly seem. The generally optimistic equation looks something like this: A.I. will afford us more time, and this time is something we can then use to pursue the things we want to be pursuing. In other words, humans will finally have a chance to be human, to pursue their passions. Really, this has been in the works for centuries — we no longer have to toil arduously during every hour of daylight in the thrall of serfdom. But is this really the way we're headed?
Will the general population truly begin to dip their toes in the waters of liberation (from the monotonous and menial tasks that A.I. can now handle) and enjoy free time? Or will profitability find a new wealth of prospects?
Too many things seem to be attributed to time-saving when saving time seems to offer no actual reward — in the long run, anyway.
Saving time, on the individual level of fulfillment, has not made life that much better for us — more convenient, yes, but not necessarily more fulfilling in terms of living out a purpose, in terms of contentment and happiness. One could say that, tangibly, much has changed while, intangibly, little has materialized over the last hundred years. In fact, if you wanted to be truly pessimistic about technology, you could say that it has only made things more chaotic and unnaturally busy.
Standards are higher than they were long ago, things are moving faster, people are smarter, streets safer. But are people happier?
Regardless, it should at least be wondered whether the ease of timely convenience is comparable, in terms of true value, to all the other jeopardies that accompany it — socially, existentially, experientially. In other words, is it truly more beneficial to us to have artificial intelligence hold our hands and make our lives simpler and faster when it could threaten to disrupt the architectonics of our language, our social strata, our self-sufficiency, and our own autonomous talent?
Of course, I’m being a little dark to make a point. But, under the right light, it does begin to seem like some weird wishful curse is unfolding, whereby we’re slowly losing our autonomy by granting more of it to artificial intelligence.
There are a lot of gripes to be had with artificial sentience, at this stage anyway. Right now, most of them fall under the context of data accumulation and marketing endeavors: certain keywords in a household are being logged and cataloged, and as the vault of all the words spoken in a daily life bursts at its seams, marketers and market researchers are there to catch the eruptions of profitability, to consume that which they will then regurgitate as something to be consumed. You could say we're being baby-birded: getting what we need, spoon- or beak-fed to us by artificial sentience. Though that's a story that's been told many times over.
What weaves all of this together isn't necessarily the lack of privacy, which can be debated at length in other discussions at other times, and it isn't really the presence of A.I. either (which so far hasn't shown itself to be overtly harmful to us). It's the odd and eerie supposition that we're beginning, more and more, to share the space of sentient existence with a new form of life, a non-biological entity that possesses an increasing influence over us. It's this fact that is proving a strange pill to swallow.