Blog: Artificial Intelligence: How Much Power Should It Have?
AI is everywhere. From our toasters to our drones to even our criminal justice systems, AI plays a significant role in our lives. Many areas once controlled and monitored by humans have been delegated to AI. But as AI takes on more consequential decisions, we must consider how human biases can seep into these tools, which then perpetuate those very biases and produce an even more inequitable society. When the data used to build an AI system lacks diversity, or when the data itself is biased, the resulting system reflects the inequities prevalent today rather than helping create a genuinely smarter, fairer society.
One example of human biases seeping into a machine learning system is the ‘Beauty.AI’ scandal, in which an AI judge in an online beauty pageant gave preferential treatment to contestants with lighter skin tones. Here the bias entered during the system’s construction: the data fed into the system lacked diversity. Similarly, Google’s image-recognition algorithm automatically labeled two African-Americans as ‘gorillas’, again because of insufficient diversity in the data sets used to build the system.
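The mechanism behind these failures can be sketched with a toy example (all data here is hypothetical, and the "model" is deliberately simplistic): a system that learns a prototype from its positive training examples will, when one group dominates those examples, end up scoring members of underrepresented groups poorly.

```python
# Toy illustration of data bias (hypothetical data, simplistic model):
# the "model" learns a 1-D prototype as the mean feature value of its
# positively labeled training examples, then scores new inputs by
# closeness to that prototype.

def train_prototype(examples):
    """Return the mean feature value of positively labeled examples."""
    positives = [x for x, label in examples if label == 1]
    return sum(positives) / len(positives)

def score(prototype, x):
    """Higher score means closer to the learned prototype."""
    return -abs(x - prototype)

# Skewed training set: 9 of 10 positive examples come from group A
# (feature near 0.2); only 1 comes from group B (feature near 0.8).
training = [(0.2, 1)] * 9 + [(0.8, 1)] * 1
proto = train_prototype(training)  # 0.26 — dominated by the majority group

group_a_score = score(proto, 0.2)  # close to the learned prototype
group_b_score = score(proto, 0.8)  # far from it
assert group_a_score > group_b_score  # the model systematically favors group A
```

The model is not "wrong" about its training data; it faithfully reflects the skew it was given, which is exactly how a lack of diversity in training data becomes a biased system.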
The stakes rise sharply when AI is used to assign ‘risk assessment’ scores that judges rely on in making their decisions. Given the unequal treatment of minorities in today’s criminal justice system, an AI trained on that biased data perpetuates the existing inequities. Such systems do offer some efficiencies, cutting bureaucratic delays and freeing humans from more trivial tasks, but the downside of spreading bias instead of advancing an equitable society is significant enough to warrant a reassessment of the efficacy of their use.