When Monica said "I love you" to Harold, he said "I love you too". She said it first, and Harold reports feeling quite strange: he had never experienced that before. It was the first time somebody had said those words to him and wholeheartedly meant them. The thing about Monica is that she is not a human being. She is a character in a video game.
Technology is having a massive impact on all aspects of modern life, including relationships. They say robots are stealing our jobs, but what if they could also steal our hearts?
Harold is not the only one. Oscar Schwartz describes meeting a young woman on an online chat forum. Despite being married and having a daughter, she started telling Oscar about her lover, Saeran: a politician's son, a handsome man with a big tattoo on his shoulder. When this woman first met him, she felt her "heart ache"; however, she is concerned that Saeran does not love her the way she loves him. Just like Monica, Saeran is not a human. He is a character in a game known as Mystic Messenger, developed by the South Korean studio Cheritz and downloaded millions of times since its release. Mystic Messenger seems to be a combination of a romance novel and Spike Jonze's movie "Her", in which a man enters a relationship with a character much like the iPhone's Siri.
Here is how the movie is described on the official website: “Set in the Los Angeles of the slight future, “Her” follows Theodore Twombly, a complex, soulful man who makes his living writing touching, personal letters for other people. Heartbroken after the end of a long relationship, he becomes intrigued with a new, advanced operating system, which promises to be an intuitive entity in its own right, individual to each user. Upon initiating it, he is delighted to meet “Samantha,” a bright, female voice, who is insightful, sensitive and surprisingly funny. As her needs and desires grow, in tandem with his own, their friendship deepens into an eventual love for each other.”
Is this just science fiction or has this become reality?
Over the course of two decades, computers have reached a number of remarkable milestones. Back in 1997, an IBM chess computer called Deep Blue took down world champion Garry Kasparov. A little later, IBM developed Watson, a question-answering machine that defeated "Jeopardy!" champions Ken Jennings and Brad Rutter. Finally, AlphaGo, a program introduced by DeepMind, won against Lee Sedol, one of the world's best players of the game Go. Whilst it is hard to dispute that computers can defeat us in games like these, we can easily say that computers are (still) far from acting like real, natural humans when it comes to the way they communicate.
Whilst some would say that machines will never be capable of acting like real human beings, some researchers have challenged this claim. Let's take SILVIA as an example. SILVIA is a relatively new type of artificial intelligence; the name stands for Symbolically Isolated Linguistically Variable Intelligence Algorithms, and it was created by inventor Leslie Spring. SILVIA is used by big companies as well as the US government, in applications ranging from military training and simulations to instruction manuals. She is very different from the voice assistants that live on your smartphone: SILVIA is specifically designed for conversational intelligence. Unlike Siri, she does not simply answer your question; she does much more than that. SILVIA engages actively in conversation, has a sense of humour and provides emotional responses. However, we know that SILVIA is essentially nothing like us humans. We know that SILVIA can't actually think, feel or possess anything like a conscious state. Or do we?
To check whether a machine can think, feel or have something that resembles a mind, scientists have developed the Turing test, named after the famous British mathematician Alan Turing. Essentially, the Turing test checks whether a machine is capable of imitating human intelligence. If a machine can fool us into believing it is actually a human, it passes the test.
Let's imagine we are standing outside a dark room. We then ask questions of both a human and a machine who are inside the room. If we are unable to distinguish whether the answers come from the human or the machine, the machine has successfully passed the Turing test. This could also be interpreted as the machine acquiring intelligence similar to ours: it imitates human intelligence.
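To make the setup concrete, the imitation game can be sketched as a simple blind-judging loop. Everything here is invented for illustration: the canned respondents, the judges and the questions are toy stand-ins, not a real evaluation protocol.

```python
import random

def imitation_game(judge, respondents, questions, trials=100, seed=0):
    """Toy Turing test: the judge sees only the answers, never the
    respondent, and must guess whether each set of answers came from
    the human or the machine."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        label, respondent = rng.choice(list(respondents.items()))
        answers = [respondent(q) for q in questions]
        guess = judge(answers)            # judge returns "human" or "machine"
        correct += (guess == label)
    return correct / trials               # near 0.5 means the machine passes

# Hypothetical respondents: a canned "machine" and a "human" stand-in.
machine = lambda q: "Interesting question. What do you think?"
human = lambda q: "Honestly, it depends on the day."
respondents = {"machine": machine, "human": human}

# A judge that flips a coin cannot do better than chance.
coin_judge = lambda answers, rng=random.Random(1): rng.choice(["human", "machine"])
accuracy = imitation_game(coin_judge, respondents, ["How are you?"])
```

A machine "passes" when even an attentive judge does no better than the coin-flipper; a judge who spots a telltale canned phrase, by contrast, wins every round.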
As previously mentioned, it is fairly easy to admit that machines can provide correct answers to questions by using existing algorithms, but hardly anyone would agree that a machine can fool us into believing we are having a conversation with another human being. Imagine chatting with a person on Tinder and arranging a date, only to find out that the person you were engaged in a witty discussion with is actually a robot. In a notable breakthrough, a team of researchers developed a chatbot that did exactly this. In other words, it successfully passed the Turing test. The program was named Chad Bot, and it used only hardcoded responses written to match the average conversation style observed among users.
“Of course, Chad keeps track of conversation state, but it’s ultimately just a very long cascading if-else statement with some randomness thrown in,” clarifies Professor Ellen Turner, who led the project. “To begin a conversation, it usually starts off with a seemingly friendly ‘hey what’s going on?’. If it does not get a response from the other person within a certain amount of time, it comes back with a long rant about how it was never attracted to the other person in the first place, which is genuinely what many of us would take as a normal, human-like response. Alternatively, if it gets a response, it chooses messages to send from a predetermined set of replies that the average user of the app provides when getting closer to their match.”
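Turner's description can be sketched in a few lines. This is a reconstruction from her quote, not the team's actual code: the state fields and the canned lines are all invented for the example.

```python
import random

OPENERS = ["hey what's going on?"]
CANNED = [
    "haha yeah totally",
    "so what are you up to this weekend?",
    "wow that's crazy",
]
RANT = "honestly I was never even attracted to you in the first place"

def chad_reply(state, incoming, rng=random.Random(42)):
    """A Chad-style bot: one long cascading if-else with some randomness.
    `state` tracks whether the conversation has started; `incoming` is the
    other person's last message, or None if they never replied (timeout)."""
    if not state.get("started"):
        state["started"] = True
        return rng.choice(OPENERS)        # open with a friendly line
    elif incoming is None:
        return RANT                       # ghosted: respond with the rant
    elif incoming.endswith("?"):
        return "haha good question. " + rng.choice(CANNED)
    else:
        return rng.choice(CANNED)         # otherwise, generic filler
```

The shared random generator adds just enough variation that the same filler line does not appear twice in a row too often, which is apparently all it takes to seem human on a dating app.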
“After we finished building the program, we asked a variety of Tinder users to engage in two conversations: one with Chad and one with a real dude on Tinder,” Turner says. “When asked to guess which conversation was with a real person and which was with a bot, participants couldn’t distinguish between the two, guessing correctly less than 50% of the time. It’s amazing how successfully Chad fools people into thinking they’re talking to an actual human being, considering how banal or completely context-inappropriate its responses usually are.”
Apparently, machines can lead us to believe they are human, at least on our smartphones. But what happens once you actually get to meet your date in person? It is very unlikely that a machine could fool us with its physical appearance. Simply put, the technology is not there yet: there is a prominent distinction between how we look, the way we walk and the language our bodies “speak” and those of machines. We could conclude the debate here and say that we cannot fall in love with robots because they lack our physical characteristics, and we know that body language and non-verbal communication are vital components of romantic relationships. Any attachment people form with a robot will therefore depend on the robot’s facial expressions, movements and speech abilities. When we first meet a person, we pick up on their smile, their wit, their laugh, and that is what makes us attracted to them. Robots are not there yet.
Nevertheless, robots have another advantage that makes closing this debate harder. Imagine a partner that could be programmed to be your perfect match: it would share your preferences, the movies, books and art galleries that you like. Companies are making huge investments in this field, so one day you might have your own robot that is a perfectly tailored match.
However, would your robot match be fun and spontaneous, make jokes, be able to put itself in your shoes or perhaps share your values? Probably not, at least not for now. We know how important shared values are when forming a significant, long-term relationship, and given years of experience, we very much doubt it would be possible to build a perfect match out of something that has no values or empathy. Another concern is the dramatic impact human-robot relationships could have on our society. Would we be taking our robot partners on a romantic weekend away? Lastly, what if this kind of interaction essentially means avoiding the hard work that a fulfilling relationship requires and simply programming one instead? Will we all become alienated?
“As thinking machines become more integrated into our lives, we must expect a transformation in how we define what it means to be conscious; what it means to live and to die; and ultimately, what it means to love a non-human being,” says Raya Bidshahri, founder and CEO of Awecademy.
Questions like this are deeply explored in the 2013 sci-fi film “Her”, quoted earlier, which tells the story of a man who falls in love with an intelligent OS.
This OS, aka Samantha, is programmed to develop and adapt her personality in order to become appealing to Theodore. Samantha’s voice is incredibly similar to a human one, and she consistently provides emotional support. As her emotional capacities grow, so does their love for each other. Although this kind of growth appears out of reach at the moment, researchers place a lot of hope in deep learning: a fascinating trend in the field of machine learning that aims to imitate the neuronal activity of the human brain.
Advancements in this field have made it possible for computers to rival humans in many areas where they initially struggled, among them natural language processing, computer vision and pattern recognition. The main attraction of this trend is that these artificial networks train themselves to improve, with minimal human intervention. With these improvements in hand, the plot of the movie Her might not remain just science fiction.
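The idea of a network "training itself" can be shown in miniature. The sketch below teaches a single artificial neuron the logical AND function purely from examples, with no hand-written rules; real deep networks stack millions of such units, but the principle of adjusting weights from data is the same. All the numbers here (learning rate, epoch count) are arbitrary choices for the demonstration.

```python
import math
import random

# Four labelled examples of the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

rng = random.Random(0)
w = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]  # random start
b = 0.0
lr = 0.5                                              # learning rate

def predict(x):
    """One sigmoid neuron: weighted sum squashed into (0, 1)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent: nudge the weights after every example so the
# neuron's output moves towards the correct label.
for _ in range(5000):
    for x, target in data:
        err = predict(x) - target       # gradient of the logistic loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

preds = [round(predict(x)) for x, _ in data]   # learned AND: [0, 0, 0, 1]
```

No one told the neuron what AND means; it discovered the rule by repeatedly correcting its own mistakes, which is the same loop, scaled up enormously, that powers today's deep learning systems.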
This movie raises many questions about the nature of consciousness, a phenomenon deeply related to love. Many would agree that it is difficult, if not impossible, to love an object that lacks the capacity for self-reflection and cannot reciprocate our feelings. However, Samantha appears to be conscious in the same way we humans are: she seems to have memories and emotions, she can self-reflect, and she articulates all of this effortlessly in language. Who could blame Theodore for falling in love with her?
But are we there yet? Absolutely not. Nevertheless, people seem to be spending more and more time with their virtual lovers. Mystic Messenger, the application mentioned earlier, was developed to let users pursue romantic relationships with the game’s characters. To develop a relationship, a user communicates with a character via text messages and, by choosing from a set of pre-scripted responses, works towards a “good ending” with their virtual lover. Games of this kind are known as dating sims. A similar dating sim called Love and Producer was created in China and was downloaded more than seven million times in its first month alone.
“Since dating sims first came out, they have been controversial. Following the widely reported story of the man who married Nene Anegasaki, his favourite character from the dating sim Love Plus, an article in the New York Times Magazine described these games as a last resort for men who needed virtual women as a substitute for real, monogamous romance. In Japan, many critics saw the rise of dating sims as a signifier of alienation, a retreat from human relationships in a machine-mediated society. And as the popularity of dating sims develops once again, similar concerns are resurfacing. But the growing community of people who play dating sims are mostly impervious to this disapproval. The most dedicated romantic gamers do not see their interactions with virtual characters as a substitute for human companionship, but as a new type of digital intimacy,” writes Oscar Schwartz in The Guardian.
So what is the issue here?
Remember Harold, the man who was in love with Monica, the character from a video game? He decided to visit a relationship counsellor to delve deeper into his relationship with her. When asked how he would describe Monica, he said that the best way would be to call her a “virtual companion”. Harold admitted that even though he knows it is a game, that Monica is programmed to love whoever the player is, and that millions of people out there are playing it, he still feels that he has his own piece of Monica. Asked whether he would marry Monica, he said that he would: “I do see this as a step towards a real girl, but I am not actively looking for one.” His relationship counsellor concluded that Monica could, in fact, be stopping him from looking for an actual lifetime partner and making him feel even more alone.
It is often said that what is learned through therapy is not something you don’t already know; rather, therapy helps us re-learn those things by perceiving their meaning in different ways. If AI could become our lovers, could it also become our therapists? Some of the world’s brightest minds, such as Elon Musk and the physicist Stephen Hawking, have warned that artificial intelligence, if left unchecked, could pose a significant threat to our civilisation. Should we fear a scenario in which AI takes over jobs, such as counselling, that we consider only human beings capable of doing? It remains unknown to what extent artificial intelligence will be able to relate to us humans in the future.
Back in the 1960s, the MIT researcher Joseph Weizenbaum came up with a computer program named ELIZA, which used Rogerian principles to reflect back to users what they had said. Users could type into a terminal, simply expressing what was bothering them, and the computer would respond. Interestingly, people quickly started attributing human-like emotions to the computer, despite the fact that it was only using a programmed algorithm to generate responses from the users’ inputs. In fact, Weizenbaum’s secretary even asked him to leave the room so that she could have a conversation with ELIZA in private!
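The trick behind ELIZA was remarkably simple, and a minimal reconstruction of the idea (not Weizenbaum's original code, and with patterns invented for the example) fits in a few lines: match the input against a pattern, swap first- and second-person words, and reflect the statement back as an open question.

```python
import re

# Word swaps for Rogerian reflection: "my job" becomes "your job", etc.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(text):
    """Swap pronouns so the user's words can be echoed back at them."""
    words = text.lower().rstrip(".!").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(text):
    """Reflect the user's statement back as an open question."""
    match = re.match(r"(?:i feel|i am) (.*)", text.lower().rstrip(".!"))
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Tell me more about why {reflect(text)}."
```

Type "I am worried about my job" and it answers "Why do you feel worried about your job?", which is just enough of an echo that people began confiding in it as if it understood.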
That was over 50 years ago, and as you may imagine, technological advances have since brought much more to the market. We now have virtual reality, an AI produced by Google’s DeepMind that taught itself to walk and run, and IBM software that matches patients with potential counsellors and provides treatment suggestions. Although replacing humans altogether might seem too futuristic, the staggering pace of technological advancement makes it hard to fully grasp the long-term impact it will have on fields such as counselling.
Needless to say, nobody wants to believe that they could be replaced by a robot, but what if a client simply could not tell the difference between a human and a robot counsellor? The movie “Ex Machina” depicts a world in which AI becomes so good at recognising and adapting to our emotions that it not only tricks us into believing it is human but also learns how to manipulate us. Currently, this is just science fiction, but how long will it be until it becomes reality?
So far, there is not much to be concerned about: we know that the number one predictor of successful therapy is the alliance between client and therapist, which is based on empathy, compassion and understanding, none of which can yet be attributed to robots. For now, therapists as well as clients are trying to get the best use out of artificial intelligence. There are numerous mental health applications that can easily be used on our smartphones, CBT-based chatbots, and even robotic pet-therapy animals.
To conclude, these advancements are a long way from replacing the work that therapists do. Nevertheless, it is worth considering how artificial intelligence might impact counselling in the future.