Neural networks trained to recognize objects in images turn out to be surprisingly easy to deceive, which casts doubt on much of the progress in AI algorithms over the past few years. The trouble was stirred up by researchers at Kyushu University (Japan), and they needed only a single pixel.
While testing new image recognition systems, the researchers deliberately altered a single pixel in a picture, not at random but at strategically chosen coordinates derived from an analysis of the AI's algorithm. The system then began to confuse everything: cats with puppies, horses with cars.
One "fake" pixel is enough to fool the system on a picture of 1,000 pixels; for a million-pixel image, only a couple of hundred points need to be changed. Exploiting this principle, scientists at MIT 3D-printed a toy turtle that a neural network mistook for a rifle, and a baseball that it took for a cup of coffee. And this is a huge problem.
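Purely for illustration, the single-pixel trick can be sketched in a few lines of Python. The toy `model` below (a hard threshold on one pixel) and the exhaustive search are stand-ins I made up for this sketch; the actual research used real convolutional networks and a differential-evolution search, not this code.

```python
import numpy as np

# Toy stand-in for an image classifier: the "class" is decided by a
# single threshold on pixel (3, 3). A real network is far more complex,
# but the attack below treats the model as a black box either way.
def model(img):
    return 1 if img[3, 3] > 0.5 else 0

def one_pixel_attack(img, model):
    """Search for one (row, col, value) change that flips the model's
    decision. The published attack uses differential evolution; plain
    exhaustive search over extreme pixel values suffices for this toy."""
    base = model(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            for v in (0.0, 1.0):        # try the two extreme intensities
                candidate = img.copy()
                candidate[r, c] = v
                if model(candidate) != base:
                    return candidate, (r, c)
    return None, None                   # no single-pixel flip found

img = np.full((8, 8), 0.2)              # a flat 8x8 "photo"
adv, pos = one_pixel_attack(img, model)
print("prediction flipped by changing pixel", pos)
```

The key point the sketch captures is that the attacker never needs the model's internals: probing the output while varying strategically chosen pixels is enough.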
The near future will belong to machines that must independently recognize real objects and navigate the real world. If they are so easily fooled, the risk of catastrophic errors rises enormously, and the people who will deal with these AIs now need protection from such failures. But there is an upside: if Skynet ever launches its terminator uprising, it may be enough to crudely paint yourself up as a bush, and the robots will be none the wiser.
An example of the AI processing modified photos; the recognition result is given in parentheses.