New Artificial Intelligence Filter Tricks Facial Recognition Systems

Data privacy is a delicate issue, and people will always need tools to protect it. Software engineers from the University of Toronto have stepped in with a special filter for personal photos. With it, you can share your images on the Internet freely; no AI will recognize that it is you in them.

The idea is simple: individual pixels in the picture are altered so that AI face-recognition algorithms fail. The interference is invisible to the human eye, but the machine can no longer match the two photographs to the same person. This does not guarantee complete anonymity, but it significantly reduces the likelihood of being flagged by a spy bot on a social network and becoming the target of a personal attack.
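The article does not disclose the actual algorithm, but the effect it describes can be illustrated with a toy sketch: a tiny, bounded change to every pixel (here at most 1/255, far below what the eye notices) that nonetheless shifts a recognizer's score. The linear "recognizer" and all values below are hypothetical stand-ins, not the Toronto system.

```python
import numpy as np

# Toy "recognizer": a fixed linear scorer over flattened pixel values.
# Higher score = stronger "match". Weights are purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=64)           # hypothetical model weights (an 8x8 image)
x = rng.uniform(0, 1, size=64)    # hypothetical face photo, pixels in [0, 1]

def score(img):
    return float(w @ img)

# Nudge each pixel by at most 1/255 in the direction that lowers the
# match score -- an imperceptible change to a human viewer.
eps = 1.0 / 255.0
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("original score:", score(x))
print("filtered score:", score(x_adv))
print("largest pixel change:", np.max(np.abs(x_adv - x)))  # <= 1/255
```

The point of the sketch is the asymmetry: a change bounded by 1/255 per pixel is invisible to people, yet it moves every pixel in the direction the model is most sensitive to, so the score shifts measurably.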

Rather than devise the deception algorithm by hand, the Canadian team pitted two powerful neural networks against each other. The first was given a set of several hundred photographs of the same people and tasked with recognizing them and sorting them by identity. The second then retouched the images pixel by pixel, trying to confuse the first, and the results were sent back for re-recognition. The developers observed which changes produced the most false positives, and from these the core of the filter was formed.
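The adversarial game above can be sketched as an iterative loop: the attacking side repeatedly retouches pixels to lower the recognizer's confidence, while a budget keeps every change invisibly small. This is a minimal numpy sketch assuming a logistic-regression "recognizer" in place of the real neural networks; the step size and budget are illustrative, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # hypothetical recognizer weights
x = rng.uniform(0, 1, size=64)     # hypothetical face photo, pixels in [0, 1]

def p_match(img):
    """Recognizer's confidence that the photo shows the known identity."""
    return 1.0 / (1.0 + np.exp(-(w @ img)))

# The "retoucher": gradient steps against the recognizer, with every
# pixel kept within an invisibly small budget (4/255) of the original.
budget = 4.0 / 255.0
step = 0.5 / 255.0
x_adv = x.copy()
for _ in range(20):
    p = p_match(x_adv)
    grad = p * (1.0 - p) * w                        # d p_match / d pixels
    x_adv = x_adv - step * np.sign(grad)            # retouch against the model
    x_adv = np.clip(x_adv, x - budget, x + budget)  # stay within the budget
    x_adv = np.clip(x_adv, 0.0, 1.0)                # keep valid pixel values

print("confidence before:", round(p_match(x), 3))
print("confidence after: ", round(p_match(x_adv), 3))
```

In the real system both sides are trained networks, so the retoucher learns a reusable filter rather than recomputing a perturbation per image; the loop above only shows the tug-of-war that shapes it.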

In its current version, the filter reduces the chance that an AI recognizes a person from a photograph to 1 in 200. Incidental attributes such as ethnicity, age, and emotion are deliberately misreported as well. The authors promise to package the technology as a standalone application for general use in the near future.