Google announces a competition for attacks on computer vision algorithms
Image recognition with neural networks keeps getting better, but researchers have not yet overcome some of its fundamental shortcomings. Where a person clearly sees, for example, a bicycle, even an advanced, well-trained AI may see a bird.
The reason is often so-called adversarial examples (a term with no settled translation, which has produced variants like "harmful data", "competitive elements", or "malicious copies"). These are inputs that deceive a neural network classifier by carrying features of other classes: information that is insignificant and invisible to human perception, but decisive for machine vision.
Researchers from Google published a paper where the problem was illustrated with an example:
A "harmful" gradient was applied to the image of a panda. A person looking at the resulting picture, of course, still sees a panda, but the neural network recognizes it as a gibbon, because features of another class were deliberately blended into the parts of the image on which the network learned to identify pandas.
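The attack described above is known as the fast gradient sign method (FGSM). A minimal sketch of the idea, using a toy logistic-regression classifier in place of a real image network (the model, weights, and data here are illustrative assumptions, not the setup from the paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Add an eps-scaled sign-of-gradient perturbation to input x.

    For logistic loss, the gradient of the loss with respect to the
    input is (sigmoid(w.x + b) - y) * w; FGSM keeps only its sign,
    so every feature moves by exactly +/- eps.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability
    grad_x = (p - y) * w                     # dLoss/dx
    return x + eps * np.sign(grad_x)

# Toy "image" of 4 pixels; w and b define the classifier's decision.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.0
x = np.array([0.2, -0.1, 0.4, 0.3])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# Each pixel changes by at most eps, so to a human the input looks the
# same, yet the change is aligned with the loss gradient and pushes the
# classifier's score toward the wrong class.
```

On a real network the same perturbation is computed by backpropagating the loss to the input pixels; the per-pixel budget `eps` is what keeps the change invisible to a person while still flipping the prediction.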
In areas where computer vision must be extremely accurate, and where errors, hacking, or the actions of attackers can have severe consequences, adversarial examples are a serious obstacle to development. Progress in fighting them has been slow, so GoogleAI (Google's AI research division) decided to enlist the community and organize a competition.
Details can be found on the project page on GitHub.