From self-driving cars to smart surveillance cams, society is slowly learning to trust AI over human eyes. But although our new machine vision systems are tireless and ever-vigilant, they're far from infallible. Just look at the toy turtle above. It looks like a turtle, right? Well, not to a neural network trained by Google to identify everyday objects. To Google's AI it looks exactly like a rifle.
This 3D-printed turtle is an example of what's known as an "adversarial image." In the AI world, these are pictures engineered to trick machine vision software, incorporating special patterns that make AI systems flip out. Think of them as optical illusions for computers. You can make adversarial glasses that trick facial recognition systems into thinking you're someone else, or apply an adversarial pattern to a picture as a layer of near-invisible static. Humans won't spot the difference, but to an AI it means that panda has suddenly turned into a pickup truck.
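For the curious, the basic recipe for that near-invisible static is surprisingly short. Here's a minimal sketch, our own illustration rather than labsix's code, of the classic "fast gradient sign" method, assuming a differentiable PyTorch image classifier:

```python
# A minimal sketch of the "fast gradient sign" trick for crafting an
# adversarial image (an illustration, not labsix's method). Assumes
# `model` is any differentiable PyTorch classifier that takes a batched
# image tensor and returns class logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The per-pixel change is tiny and near-invisible to humans, but it
    # can be enough to flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Run an ordinary photo through the model, take one small step along the gradient's sign, and the result looks identical to a human while scoring as a completely different class.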
Generating and guarding against these sorts of adversarial attacks is an active field of research. And although the attacks are usually strikingly effective, they're often not too robust. This means that if you rotate an adversarial image or zoom in on it a little, the computer will see past the pattern and identify it correctly. This 3D-printed turtle is significant, though, because it shows how these adversarial attacks can work in the 3D world, fooling a computer when viewed from multiple angles.
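That fragility is easy to check for yourself. A quick sketch of the test, assuming the same kind of PyTorch classifier as above (the helper name is ours):

```python
# A sketch of the fragility test described above: rotate an adversarial
# image a little and see whether the classifier's prediction reverts.
# `model` and the tensor conventions are assumptions, not labsix's code.
import torchvision.transforms.functional as TF

def attack_survives_rotation(model, adv_image, fooled_label, angle=15.0):
    rotated = TF.rotate(adv_image, angle)  # rotate by `angle` degrees
    predicted = model(rotated).argmax(dim=1)
    # True if the model is still fooled after the rotation; for most
    # flat adversarial images, even small angles make this return False.
    return bool((predicted == fooled_label).all())
```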
"In concrete terms, this means it's likely possible that one could construct a yard sale sign which to human drivers appears entirely ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street," write labsix, the team of students from MIT who published the research. "Adversarial examples are a practical concern that people must consider as neural networks become increasingly prevalent (and dangerous)."
Labsix calls their new method "Expectation Over Transformation," and you can read their full paper on it here. As well as creating a turtle that looks like a rifle, they also made a baseball that gets mistaken for an espresso, and ran numerous non-3D-printed tests. They tested it against an image classifier developed by Google called Inception-v3, which the company makes freely available for researchers to tinker with. (And to be clear, this problem is not specific to Inception-v3; it's endemic to machine vision systems of all kinds.)
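The paper lays out the math, but the core idea is easy to sketch: instead of optimizing the fooling pattern against a single, fixed view of the object, you average the attack loss over lots of randomly sampled transformations (rotations, zooms, lighting shifts), so the pattern survives all of them. A simplified sketch, again assuming a PyTorch classifier and with `random_transform` as a stand-in you'd supply yourself:

```python
# A simplified sketch of the Expectation Over Transformation idea: make
# the perturbation robust by averaging the attack loss over many random
# views. This condenses the paper's method; `random_transform` is a
# hypothetical stand-in (e.g. random rotation/scale via torchvision).
import torch
import torch.nn.functional as F

def eot_attack_step(model, adv_image, target_label, random_transform,
                    n_samples=30, step_size=0.01):
    adv_image = adv_image.clone().detach().requires_grad_(True)
    total_loss = 0.0
    for _ in range(n_samples):
        view = random_transform(adv_image)  # one random angle/zoom/etc.
        total_loss = total_loss + F.cross_entropy(model(view), target_label)
    (total_loss / n_samples).backward()
    # Descend on the target-class loss so *every* sampled view is pushed
    # toward the attacker's chosen class (e.g. "rifle").
    return (adv_image - step_size * adv_image.grad.sign()).detach()
```

Repeat that step enough times and the texture that comes out keeps fooling the classifier from view after view, which is what made the turtle work once it was 3D-printed.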
The research comes with some caveats, too. Firstly, the team's claim that their attack works from "every angle" isn't quite right: their own video demos show that it works from most, but not all, angles. Secondly, labsix needed access to Google's vision algorithm in order to identify its weaknesses and fool it. This is a significant barrier for anyone who would try to use these methods against commercial vision systems deployed by, say, self-driving car companies. However, other adversarial attacks have been shown to work against AI sight-unseen, and, according to Quartz, the labsix team is working on this problem next.
Adversarial attacks like these aren't, at present, a big danger to the public. They're effective, yes, but only in limited circumstances. And although machine vision is being deployed more widely in the real world, we're not yet so dependent on it that a bad actor with a 3D printer could cause havoc. The problem is that issues like this exemplify how fragile some AI systems can be. And if we don't fix these problems now, they could lead to much bigger issues in the future.