Since AI is slated to take over the world in the near future, you would hope it would be reliable and capable of distinguishing between objects both deadly and mundane.
But, as this story about Google's image recognition AI shows, we're not quite there yet: it mistook a turtle for a gun, a mix-up that wouldn't be good in a more serious application. Don't worry, though. The turtle in this experiment was engineered to fool Google's AI, and we'll tell you how.
As The Verge reports, the turtle was a 3D-printed example of what is called an adversarial image: one specifically designed to fool AI algorithms into misinterpreting an object as something it isn't. Helping AI learn to overcome these "optical illusions" is part of refining its capabilities and making it more robust.
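To give a flavor of how these attacks work, here is a minimal, illustrative sketch of the fast gradient sign method (FGSM), one of the simplest techniques for crafting adversarial inputs. This toy uses a made-up linear classifier rather than Google's actual model; all the names and numbers below are assumptions for the demo, not anything from the turtle research itself. The key idea is the same, though: nudge each input feature slightly in the direction that most increases the model's loss.

```python
import numpy as np

# Toy linear "classifier" over 4 input features and 2 classes.
# (Entirely made up for illustration; real attacks target deep networks.)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # classifier weights
x = rng.normal(size=4)        # the "clean" input, e.g. pixel features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Treat the model's own prediction on the clean input as the true label.
true_label = int(np.argmax(W @ x))

# Gradient of the cross-entropy loss with respect to the INPUT x:
# dL/dx = W.T @ (softmax(W @ x) - one_hot(true_label))
p = softmax(W @ x)
one_hot = np.eye(2)[true_label]
grad_x = W.T @ (p - one_hot)

# FGSM step: move every feature a small amount (epsilon) in the sign
# of the gradient, which increases the loss as efficiently as possible
# under a max-perturbation budget.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("original prediction:   ", int(np.argmax(W @ x)))
print("adversarial prediction:", int(np.argmax(W @ x_adv)))
```

Note that computing `grad_x` requires knowing the model's weights, which is exactly why The Verge's point below about needing access to the underlying code matters: this is a "white-box" attack.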
A team of MIT students called labsix published the following about their research: "In concrete terms, this means it's likely possible that one could construct a yard sale sign which to human drivers appears entirely ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street…Adversarial examples are a practical concern that people must consider as neural networks become increasingly prevalent (and dangerous)."
The Verge does highlight, though, that these adversarial images aren't an imminent threat and that fooling AI is quite complicated. To pull this off most effectively, attackers would need access to the code that powers Google's recognition features in order to exploit its weaknesses, and that is unlikely to happen.
Since AI is expected to power everything from self-driving cars to photo recognition, it is probably a good thing that programmers are working out all of the kinks in the code now.
You can read the research by clicking here.
As always, we’d love to know your thoughts in the comments below.
And you can check out our other news stories on Light Stalking by clicking here.