Blog: Using a Simple Printed Picture to Fool Human-Detecting AI Systems
Artificial intelligence, specifically machine learning for computer vision, relies on training. The system is given a set of photos to learn from, each containing whatever object it's being trained to detect. That set can range from dozens to many thousands of pictures, and generally the more pictures in the training set, the more accurate the system becomes. After training, it can analyze new photos or videos to determine whether they contain that object. But researchers from Belgium's KU Leuven have found a remarkably simple way to fool a system trained to detect humans.
Such artificial intelligence systems are already being deployed in the real world, both by law enforcement agencies and in the commercial sector. At their most benign, they are used simply to count how many people enter and exit a store, for example. But they can also be used to locate and track a specific person, or even a specific kind of person, a capability that has raised profiling concerns among civil liberties advocacy groups. As it turns out, these systems aren't as sophisticated as many have been led to believe, and the researchers found that they can be fooled with nothing more than a printed photo.
In this case, the photo shows people holding umbrellas and has been digitally altered to make it less recognizable. All someone has to do is wear that photo somewhere around their lower torso and they become undetectable, at least to the YOLOv2 detection system the technique was tested against. It works because the system sees the photo as an unknown entity, one that doesn't match what it considers a "human." You and I can immediately recognize it for what it is, but the AI can't. This particular vulnerability would be easy enough to fix, but it illustrates how readily an AI can be fooled by confusing it with unexpected imagery.
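The underlying idea is that the patch's pixels are optimized to push the detector's "person" confidence as low as possible. As a rough illustration of that principle only (not the researchers' actual code), here is a toy gradient-descent sketch in which a fixed linear template stands in for the detector's scoring function; a real attack would instead backpropagate through a full network such as YOLOv2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector's "person" objectness score:
# a fixed linear template dotted with the patch pixels.
# (A real attack differentiates through the detector itself.)
template = rng.normal(size=(16, 16))

def objectness(patch):
    # Higher score = more confident "person" detection.
    return float(np.sum(template * patch))

# Start from a random patch and descend on the score.
patch = rng.uniform(0.0, 1.0, size=(16, 16))
score_before = objectness(patch)

lr = 0.05
for _ in range(200):
    grad = template  # d(objectness)/d(patch) for this linear toy model
    patch = np.clip(patch - lr * grad, 0.0, 1.0)  # keep pixels printable in [0, 1]

score_after = objectness(patch)
# The optimized patch drives the toy detector's confidence far below
# where it started, which is the essence of the printed-patch attack.
```

The real patch in the paper also adds penalties to keep the image smooth and printable, which is why the result still looks like a distorted photograph rather than random noise.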