
What I Learned from Trying to Make a Lie Detector Using a Neural Network



A spectrogram of a lie

Over the weekend, I tried to build a lie detector that would take the spectrogram of an audio clip and then decide whether it was a lie or not.
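
To make that concrete, here is a minimal sketch of the kind of setup I mean, assuming PyTorch and torchaudio; the layer sizes, sample rate, and file name are illustrative, not my exact code.

```python
# A tiny spectrogram classifier, for illustration only.
import torch.nn as nn
import torchaudio

# Turn a waveform into a mel spectrogram the network can "look at".
to_spec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

class LieDetector(nn.Module):
    """Maps a (1, mels, time) spectrogram to two logits: truth or lie."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse frequency/time to one vector
        )
        self.classifier = nn.Linear(32, 2)  # logits for [truth, lie]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

waveform, sr = torchaudio.load("clip.wav")      # hypothetical recording
waveform = waveform.mean(dim=0, keepdim=True)   # force mono
spec = to_spec(waveform).unsqueeze(0)           # (batch=1, channel=1, mels, time)
prediction = LieDetector()(spec).argmax(dim=1)  # 0 = truth, 1 = lie (untrained here)
```

The point is just the shape of the pipeline: waveform in, spectrogram in the middle, a truth-or-lie decision out.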

Going into this experiment, I was quite convinced that there was no way this would actually work. So I did the usual: I collected my data, cleaned it, and made a training set and a validation set. Now, I will admit that the method of data gathering I chose, recording my own voice saying different truths and lies, was not the most scientific, but for a home experiment it worked fine. At this point, all I was focused on was whether or not it would work.
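
The split itself was something like this sketch; the recordings folder and the truth_/lie_ file-naming convention are hypothetical, just to show the shape of it.

```python
# A sketch of the train/validation split, assuming one folder of labelled clips.
import random
from pathlib import Path

files = sorted(Path("recordings").glob("*.wav"))  # hypothetical folder
labels = [0 if f.name.startswith("truth") else 1 for f in files]

pairs = list(zip(files, labels))
random.seed(42)
random.shuffle(pairs)

split = int(0.8 * len(pairs))                     # 80/20 split
train_set, valid_set = pairs[:split], pairs[split:]
```

Note that randomly splitting one person’s recordings means the validation set sounds exactly like the training set, which matters later.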

This led me to forget the most important question. What happens if it does work?

Uh Oh

What happened?

Clearly, a lie detector isn’t as big of a problem as people bringing dinosaurs back to life, right?

Time to answer that question, but first we have to look at the results of the experiment.

I trained the network, and the results shocked me.

Hmmmm

Of course, the error rate was bound to be very small: my dataset was tiny, and all of the files came from the same source. So I dismissed this as a case of overfitting.

Unknowingly, my mindset had shifted from wanting this network to succeed to wanting it to fail. Why? Probably because I had realized that building it was definitely not an ethical thing to do.

Now comes the really interesting part. I fed it new audio files of myself telling different lies and truths, and it identified the file correctly every single time. If this were a normal neural network, I would be absolutely elated. This time, however, I felt an intense amount of apprehension.

I decided to test it on other people’s voices, and the results were only a tiny bit better than a human guessing whether something was a lie or not. I felt a lot of relief, but why?
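
In hindsight, that accidental test on other voices is the honest evaluation: hold out entire speakers, not just clips. A rough sketch, again assuming a hypothetical file-naming convention like alice_lie_03.wav:

```python
# Split by speaker so the test voices are never heard during training.
from pathlib import Path

files = sorted(Path("recordings").glob("*.wav"))

def speaker(f):
    return f.name.split("_")[0]  # e.g. "alice" from "alice_lie_03.wav"

held_out = {"alice"}             # speakers reserved for testing
train_files = [f for f in files if speaker(f) not in held_out]
test_files = [f for f in files if speaker(f) in held_out]
```

If accuracy collapses to roughly chance on the held-out speakers, which is what I saw, the network has learned the speaker’s voice, not lying itself.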


What is the problem?

You might be asking: why is a lie-detecting neural network a problem? I mean, polygraphs exist, and those are fine.

Yes, but polygraphs can’t become web apps with always-on listening modes. Polygraphs can’t become Alexa skills that infiltrate the homes of people across the world. Polygraphs can’t take in information continuously while far, far away from the subject.

Making neural networks do interesting tasks has become incredibly easy, but with that ease comes a lack of thought about what the network actually does. We rush to build it because of how cool it is, but we forget to ponder the ethics of the action.

Now, this isn’t a Skynet-level problem, but just like all tech, neural networks can be used by elements of society that don’t exactly have our best interests at heart. That’s why policing AI and machine learning becomes so important. It would be incredibly easy for someone to wreak havoc with a seemingly harmless app if there isn’t a proper way to police neural networks for harmful intent.

Of course, although I overfit this version of a lie detector, other versions have been built by researchers in the past, and iterations will continue to be made. It isn’t a question of whether we can do it, because we definitely can; it is a question of how to do it ethically.

After all, humanity’s moral compass is one of its finest traits.

