Blog: Ethics In AI: 3 Major Challenges
The field of AI as we know it was born in the 1950s, but it has really taken off in the last five years with the explosion of sensors, data, and computational power. AI has the power to help us make quicker and better decisions, say for doctors trying to determine whether a patient has cancer. But AI done incompletely or incorrectly also carries the inherent risk of amplifying unacceptable behaviors in our world; think of Google’s image labeler classifying black faces as gorillas. Biases in AI have been written about and debated extensively, but we do not understand the challenge fully and certainly do not have all the solutions. This post focuses on three of the major issues, in arguably descending order of how easily we can manage them.
Often you may have too much data about one thing and not enough about another. For instance, if we analyzed electronic medical records (EMRs) alone we might never realize malaria affects up to 500M people every year, because the disease is endemic especially in areas where EMRs are rarely used. The bigger picture: algorithms trained on data that is not comprehensive enough run a high risk of inheriting the biases that produced that imbalance in the first place.
An obvious solution is to purposely seek out more of the missing kinds of data. You can also give different weights to different kinds of data, or simulate data to fill in gaps. Whatever method you end up using, though, it requires knowing that the deficiencies are there in the first place.
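The reweighting idea can be made concrete. Below is a minimal sketch, in plain Python, of one common scheme: weighting each class inversely to its frequency so under-represented classes count more during training. The function name and the toy labels are my own illustration, not from any particular library.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency in the data,
    so rare classes are not drowned out during training."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    # weight = total / (n_classes * count): the rarer the class,
    # the larger its weight; a balanced set yields weight 1.0 for all
    return {cls: total / (n_classes * c) for cls, c in counts.items()}

# Hypothetical labels: EMR-heavy data under-represents malaria cases
labels = ["flu"] * 90 + ["malaria"] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # malaria gets weight 5.0, flu roughly 0.56
```

Most ML frameworks accept weights like these directly (e.g. as per-class or per-sample weights), so the training loss pays proportionally more attention to the scarce data.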
But what do you do when you are not even aware there is a bias in your data set? Perhaps you will discover results that don’t make sense and probe deeper. But there will be plenty of cases where you discover the issue too late or cannot diagnose it fully. This becomes especially critical when deep learning methods are involved, where the AI is more akin to a black box, i.e., it is hard to explain how it arrived at a particular result.
Accounting for the unforeseen is easier said than done. Short of adding randomness to the data and testing algorithms thoroughly in real-world situations, awareness remains an especially difficult challenge.
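One practical way to surface a bias you did not know to look for is to break a model’s results down by subgroup rather than trusting the aggregate number. A minimal sketch, with hypothetical group names and an invented audit log format (triples of group, prediction, ground truth):

```python
def error_rate_by_group(records):
    """records: iterable of (group, prediction, truth) triples.
    Returns per-group error rate. A large gap between groups
    hints at a bias that overall accuracy would hide."""
    totals, errors = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        if pred != truth:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit log: overall accuracy is 75%, which looks fine,
# but all the errors are concentrated in one group
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
rates = error_rate_by_group(records)
print(rates)  # group_a: 0.0, group_b: 0.5 -- worth probing deeper
```

Routinely running this kind of disaggregated check against real-world data is one of the few defenses against biases you never thought to test for.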
For lack of a better word, abhorrence is what happens when the machine reflects the ugliness of our world. There may be massive amounts of data, and the algorithms may have been built and trained well, but the model can still end up reflecting inherent flaws in society. For instance, if AI were to look at gender pay it might internalize a model where women always earn less than men, because that is the unfortunate reality of our world. How do we imbue AI with an inherent sense of right and wrong? Who defines what is right and wrong? How do we evolve those definitions over time? These are all questions for which we simply don’t have good answers, and entrepreneurs and VCs have an obligation not to shy away from the debate.
These are purposely short articles focused on practical insights (I call it gl;dr — good length; did read). I would be stoked if they get people interested enough in a topic to explore it in further depth. All opinions expressed here are my own. If this article had useful insights for you, do give it a like; any thoughts, comment away.