
Is AI ethically wrong?


A little background to this question: I took part in my first hackathon at the start of April this year, as part of AI NI’s Artificial Intelligence for Good hack. The aim was to use AI to solve problems while addressing two key questions: how AI could impact human workforces, and the ethics surrounding our data and how we use AI.

To answer the first question quickly: I believe AI will have an impact on human workforces, but people should embrace it and learn to work alongside it. As the saying goes, work smarter, not harder. Every industrial revolution has put some people out of work but has also created new jobs in return. The third industrial revolution introduced IT into society, and that sector now represents roughly 20% of GDP. Not only that, it has advanced society in countless ways, changing almost every aspect of how we live and plan.

Now on to the main question: is AI ethically wrong, and what are the issues surrounding it? Machines obviously don’t start out with a bias; these traits are introduced through human interaction. AI models are becoming much better at prediction thanks to the likes of deep learning, but they are only as good as the data they are trained on. The best-known example of this was back in 2015, when Google Photos labelled photos of Jacky Alciné and his friend as gorillas.

This is a clear demonstration of why you need to train a model with great care, and of how important testing is if you want to avoid PR disasters like that one. If models routinely end up displaying traits like these, I honestly believe AI could have a negative impact on society, and therefore be ethically wrong. So how can we stop these things from happening and minimise machine bias?

I believe there need to be clear guidelines on the datasets used to train models, something along the lines of what Material Design is trying to do for mobile development. When gathering my first dataset I looked for guidance online, and all I found was a mishmash of articles on Medium and Stack Overflow. Since then, Google has published a website with guidelines on responsible AI practices. That is definitely a good start, but some of the points are still too vague, and examples of each would help clear things up.

Guidelines for Creating an Ethical AI Model

Dataset: Without a dataset, your AI application isn’t going to work. This content is used to train your model to recognise the patterns your app needs to run effectively. First, consider what kind of data you need and where you’re going to get it from, especially if training is going to be ongoing. I found it easiest to look at the app from the end user’s perspective and consider what results they wanted to see.

Does your dataset represent the desired outcome correctly? For instance, if you need to identify a person, have you trained your model on all the possibilities, from a baby right up to a pensioner?
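To make that concrete, here’s a minimal sketch of a dataset audit. It assumes your annotations live in a CSV with an age_group column; the file name, column name, and age groups are all hypothetical stand-ins for your own labelling scheme.

```python
# A quick audit of annotation coverage: which age groups are missing?
# File name and column name ("age_group") are hypothetical.
from collections import Counter
import csv

EXPECTED_AGE_GROUPS = {"infant", "child", "teen", "adult", "senior"}

def audit_age_coverage(csv_path):
    """Print per-group counts and return any missing age groups."""
    seen = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            seen[row["age_group"]] += 1
    for group, count in sorted(seen.items()):
        print(f"{group}: {count} images")
    return EXPECTED_AGE_GROUPS - set(seen)

if __name__ == "__main__":
    missing = audit_age_coverage("annotations.csv")
    if missing:
        print("Warning: no examples for " + ", ".join(sorted(missing)))
```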

Another thing to remember when gathering datasets, especially for image classification, is that the model only ever gives a prediction, so you may need to gather extra data specifically to teach it what to exclude. For example, when I built a model to predict violence, I had to train it on hugs as well: to the model, a hug looked like a chance of violence and threw up false positives.
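As a rough illustration of that fix, here’s a sketch of a small Keras image classifier where hugs get their own explicit class, so the model can learn to separate them from genuine violence. The directory layout, class names, and architecture are assumptions for illustration, not the model I actually built.

```python
# A sketch of training with an explicit negative class: folders
# data/violence, data/hug and data/neutral each hold example images.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(160, 160), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    # Three outputs: "hug" absorbs the poses that would otherwise
    # be misread as violence.
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```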

If at all possible, examine the content within your dataset, although this mightn’t always be an option, especially if you’re handling sensitive information. Inspecting such data could be seen as a breach of data protection, and if you’re caught you could lose the trust you’ve spent so long earning from your customers. In that case, the best solution is to study the source of your data and put policies in place to stop harmful content from entering your model and creating machine bias.
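What such a policy gate might look like in code is sketched below. The blocked sources and terms are placeholders only; real checks would depend entirely on your domain and legal obligations.

```python
# A gatekeeping step for incoming training data: records that fail
# the policy never reach the training set. Checks are placeholders.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str

BLOCKED_SOURCES = {"unverified_scraper"}             # hypothetical tag
BLOCKED_TERMS = {"example-slur", "example-address"}  # stand-ins only

def passes_policy(record):
    if record.source in BLOCKED_SOURCES:
        return False
    return not any(term in record.text.lower() for term in BLOCKED_TERMS)

def filter_batch(batch):
    accepted = [r for r in batch if passes_policy(r)]
    print(f"accepted {len(accepted)} of {len(batch)} records")
    return accepted
```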

Testing: Once your model has been trained, it’s a very good idea to test it. Conducting unit tests at the development stage is great, but where you’ll find the real bugs is with your users. They are the ones who will use your software in ways you’d never have considered.
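At the development stage, even a handful of behavioural unit tests can pin down known edge cases before a model ships. Here’s a minimal sketch using pytest; load_model, predict_label, and the fixture images are hypothetical stand-ins for your own code.

```python
# Behavioural unit tests for known edge cases, run before release.
import pytest

from myapp.model import load_model  # hypothetical helper

@pytest.fixture(scope="module")
def model():
    return load_model("model/latest")

@pytest.mark.parametrize("image_path, expected", [
    ("tests/fixtures/hug.jpg", "hug"),  # the false-positive trap above
    ("tests/fixtures/handshake.jpg", "neutral"),
])
def test_known_edge_cases(model, image_path, expected):
    assert model.predict_label(image_path) == expected
```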

It’s best to set aside a small group of your end users to test the model before anyone else. Track their results, but also ask them for feedback, and be careful not to lead them towards particular answers. If they find any issues, go back, retrain your model, and redeploy it to the same group.

If you’re retraining your model regularly with user-generated content, it may develop issues over time, so always monitor the model and any feedback. A good way of gathering model feedback is to place a simple form beside any model output: it gives users an easy way to report bugs, because they’re not going to go hunting for contact forms or email addresses.
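Here’s a minimal sketch of that idea: a tiny Flask endpoint that logs each report alongside the ID of the prediction the user disagreed with. The route and field names are hypothetical.

```python
# A tiny feedback endpoint: logs each report next to the prediction
# it refers to, for later review and retraining decisions.
from datetime import datetime, timezone
import json

from flask import Flask, request

app = Flask(__name__)

@app.post("/feedback")
def feedback():
    report = {
        "prediction_id": request.form["prediction_id"],
        "comment": request.form.get("comment", ""),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    with open("feedback.log", "a") as f:
        f.write(json.dumps(report) + "\n")
    return {"status": "received"}, 200
```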

How far is too far?

AI can be a very powerful technology, and because of this it should be treated with respect. That isn’t always the case, and the most prominent examples originate from China, where the state runs a very high level of citizen surveillance. The example I’m going to focus on here is Chengdu Shuangliu airport in Sichuan province.

This airport has kiosks in the departure lounge that can display your flight status and directions to your departure gate based solely on face recognition. I first stumbled across Matthew Brennan’s tweet about this technology, and it had my attention. In the tweet he mentions, “Note I did not input anything, it accurately identified my full flight information from my face!”. As someone familiar with the technology, I set out to figure out where this model gets its data.

Having tried to research the topic with little success, I have to speculate, based on my own knowledge, for this next part. I imagine the data is captured when you pass through the automated security gates that take your photo and match it against your passport and boarding pass information. I am aware that nearly all large airports across Europe and the rest of the world now use AI to process your information for security and monitoring as you pass through the self-service security gates. So how else is your captured data being used, not only by the airport but by the government agencies that operate in and out of these places?

In my opinion, collecting and using this data without informing passengers goes too far. I’ll finish up here and leave you to form your own opinions. I’d love to hear what you have to say, and as always, if you have any questions you can leave them below or find me on Twitter: https://twitter.com/AngryCubeDev

