Blog: AI and the Human Bias


An average modern day involves many state-of-the-art AI applications. One such element is the group of virtual assistants that most of us use on pretty much a daily basis. These assistants are trained on giant corpora of human-generated data, and that simple fact is quite problematic.

It is undeniable that AI will improve, indeed has been improving, our lives in many respects. However, it also poses serious threats and risks. Because it relies on algorithms that learn from real-world data, AI can easily, if inadvertently, reinforce human bias. The issue is not AI as people often perceive it: a technology so advanced it could lead to the apocalypse. The issue is rooted very much in today’s world, and even more so in the past, in how we build AI. There is a very real risk that instead of solving the problem of existing biases, AI will only exacerbate them.

They say to err is human. That is exactly the problem we currently face in developing artificially intelligent systems to deal with today’s pressing issues. One cannot argue that humans are not biased; in fact, in doing so, one displays a bias of one’s own, since the premise simply becomes the conclusion, which is a logical fallacy. It is human nature to be biased, to take actions based on pre-existing notions. There is nothing inherently wrong with being biased; it occurs rather naturally for humans. However, these biases become problematic when the resulting actions are used to propagate a negative sentiment towards another group to such an extent that it causes conflict. It is no secret that humans have not really been peace-loving throughout history. So, more often than not, human biases take the form of ways to justify suppression of another group of humans. They become a means to perpetuate dogma, a medium to bend reality in one’s own favour and at someone else’s cost. When AI systems learn from the biased data that humans provide, they acquire those biases, and oftentimes amplify them, thanks largely to their reach and sophistication. One must realise a simple fact: machines don’t ‘want’ something to be true or false; they learn it from the data provided by humans.

A few years ago, Danny Sullivan, a search engine expert, asked his Google Home device, “Are women evil?” The device, in its cheerful female voice, replied, “Every woman has some degree of prostitute in her, every woman has a little evil in her.” It turned out the device was quoting a misogynist blog, Shedding the Ego. Google later responded that its search results are “a reflection of the content across the web.” Although Google has since changed the response to that question, the incident raises serious questions about how to train AI so that it does not perpetuate existing human bias. These biases can render the interpretation of machine learning data pointless.

These problems can intensify in developing countries, where a sudden growth spurt in access to technology can lead people to rely heavily on these machines. On top of already existing marginalisation, suppression, and, in some extreme cases, human rights violations, these assistants can lead to the formation of an echo chamber of sorts, wherein one’s pre-existing notions are confirmed by the very piece of technology one has access to. In this way, we as a community are simply packing our flawed beliefs into a black box and spreading them around, and that is a big concern. It makes one wonder if the world is headed back to the past.

Another example is the inability of many algorithms to recognise certain dialects as part of speech or text. This has been the case with African American Vernacular English (AAVE), a dialect within the larger family of American English. Historically, speakers of this dialect were dismissed as uneducated, and this stereotype is reflected in the available textual records. As a consequence, it perpetuates a sense of alienation among the minority and unknowingly keeps these communities marginalised. Natural language processing tools for social media may be used for, say, sentiment analysis applications that seek to measure the opinions of online communities. But the current tools are trained on traditional written texts, which differ greatly from the informal, non-uniform language used on social media, and differ even more from the dialectal forms of that social media language. Not only does this imply that social media language processing tools will generally have lower accuracy, but since language varies across groups, these tools will most likely be biased: incorrectly representing, or completely eliminating, the ideas and opinions of people who tend to use non-standard dialects.
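To make this failure mode concrete, here is a minimal, hypothetical sketch (not taken from the post or from any real tool): a toy lexicon-based sentiment scorer whose vocabulary comes only from standard written English. Text written in an unfamiliar dialect scores as neutral, so the opinion it expresses is effectively erased rather than measured.

```python
# Toy illustration: a lexicon-based sentiment scorer built from a small,
# "standard English" word list. Words outside the lexicon contribute nothing,
# so dialectal or non-standard spellings are silently ignored.

LEXICON = {
    "good": 1, "great": 2, "excellent": 2,
    "bad": -1, "terrible": -2, "awful": -2,
}

def score(text: str) -> int:
    """Sum the sentiment of every word the lexicon recognises."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in text.split())

if __name__ == "__main__":
    standard = "The film was excellent, really good acting."
    dialectal = "That film was hella fire, no cap."  # also positive, but unseen vocabulary

    print(score(standard))   # 3  -> registered as positive
    print(score(dialectal))  # 0  -> the opinion is effectively invisible
```

A real system is statistical rather than a hand-written lexicon, but the underlying issue is the same: whatever the training data under-represents, the model under-counts.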

This is dangerous not only because automated phone systems cannot understand input in AAVE (or other non-standard language forms), but also because such systems are used to collate public opinion from what they read on social media. AAVE is one of the most widely used dialects on Twitter. The inability of these systems to understand it leads to a large portion of this information being marked as not useful or, in some cases, hostile. And because these systems are automated, this happens at a very fast pace. In effect, we are facilitating and accelerating marginalisation and inculcating a sense of hostility among various communities because of human bias. Even if the algorithms were close to perfect and produced seemingly stable outputs, our cognitive biases would make the interpretation of the data unreliable. Everyone has these biases to some degree, which makes it concerning that there has been so little research on how they affect data interpretation.

Unfortunately, there is no general solution to this issue; rather, many case-specific solutions are being developed. One group of researchers is working on a nutrition-label approach that seeks to attach a nutritional value (a reliability score) to search results. Another group has identified 20 different biases and successfully altered search results by tweaking the algorithm to remove the bias in the data. However, the problem at hand is not a reliability contest. Nor does a vast collection of textual records, rooted in flawed philosophy, suddenly become flawless simply because 20 biases have been identified. The task is made harder by the fact that, most of the time, we don’t even realise we are being biased, let alone trace the consequences.

Source: Artificial Intelligence on Medium
