Alex Hern 

Shotgun shell: Google’s AI thinks this turtle is a rifle

MIT researchers managed to confuse artificial intelligence into classifying a reptile as a firearm, posing questions about the future of AI security
  
  

Is it a turtle? Sure – unless you’re Google’s AI. Photograph: Labsix/MIT

If it is the shape of a turtle, the size of a turtle, and has the patterning of a turtle, it’s probably a turtle. So when artificial intelligence confidently declares it’s a gun instead, something’s gone wrong.

But that is exactly what researchers from MIT’s Labsix tricked Google’s object recognition AI into thinking, as they revealed in a paper published this week.

The team built on a concept known as an “adversarial image”. That’s a picture created from the ground up to fool an AI into classifying it as something completely different from what it shows: for instance, a picture of a tabby cat recognised with 99% certainty as a bowl of guacamole.

Such tricks work by carefully adding visual noise to the image, so that the bundle of signifiers an AI uses to recognise its contents gets confused while a human doesn’t notice any difference.
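For the curious, the sketch below illustrates one common way such noise can be generated: a single “fast gradient sign” step that nudges every pixel slightly in the direction that raises the model’s confidence in a chosen target class. It is a minimal illustration under assumed placeholders (a standard pretrained classifier, an image file name and a target label chosen for the example), not the Labsix team’s own method.

    # Minimal sketch of a targeted "fast gradient sign method" (FGSM) step.
    # Illustration only: the model, image file and target class are placeholders,
    # and a single step will not always flip the prediction.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet50(pretrained=True).eval()

    # Basic preprocessing (ImageNet normalisation omitted for brevity).
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    image = preprocess(Image.open("tabby_cat.jpg")).unsqueeze(0)  # placeholder image
    image.requires_grad_(True)

    target = torch.tensor([924])  # assumed ImageNet index for "guacamole"

    # Move every pixel a tiny step in the direction that increases the model's
    # confidence in the target class, keeping the change imperceptible to a human.
    loss = torch.nn.functional.cross_entropy(model(image), target)
    grad = torch.autograd.grad(loss, image)[0]
    epsilon = 0.007  # perturbation budget: small enough to be hard to see
    adversarial = (image - epsilon * grad.sign()).clamp(0, 1).detach()

    print(model(adversarial).argmax(dim=1))  # often the target class, not the cat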

But while there’s a lot of theoretical work demonstrating the attacks are possible, physical demonstrations of the same technique are thin on the ground. Often, simply rotating the image, messing with the colour balance or cropping it slightly can be enough to ruin the trick.

The MIT researchers have pushed the idea further than ever before, by manipulating not a simple 2D image, but the surface texture of a 3D-printed turtle. The resulting shell pattern looks trippy, but still completely recognisable as a turtle – unless you are Google’s public object detection AI, in which case you are 90% certain it’s a rifle.

The researchers also 3D printed a baseball with patterning to make it appear to the AI like an espresso, with marginally less success – the AI was able to tell it was a baseball occasionally, though it still wrongly suggested espresso most of the time.

“Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought,” the researchers wrote. As machine vision is rolled out more widely, such attacks could be dangerous.

Already researchers are examining the possibility of automatically detecting weapons from CCTV images. A turtle that looks like a rifle to such a system may merely cause a false alarm; a rifle that looks like a turtle, however, would be significantly more dangerous.

There are still issues for the researchers to iron out. Most importantly, the current approach to fooling machine vision only works on one system at a time, and requires access to the system to develop the trick patterns. The turtle that fools Google’s AI can’t pull off the same attack against Facebook or Amazon, for instance. But some researchers have already managed to develop simpler attacks that work against unknown AIs, by using techniques that have general applications.

AI companies are fighting back. Both Facebook and Google have published research that suggests they are looking into the techniques themselves, to find ways to secure their own systems.
