Digital Hallucinations

The era of autonomous vehicles may well be close, but there is still a great deal of work to do on the technology itself, and even more for our culture, legal systems and institutions to do to adapt to it.

The passenger registers the stop sign and feels a sudden surge of panic as the car he’s sitting in speeds up. He opens his mouth to shout to the driver in the front, remembering – as he spots the train tearing towards them on the tracks ahead – that there is none. The train hits at 125mph, crushing the autonomous vehicle and instantly killing its occupant.

This scenario is fictitious, but it highlights a very real flaw in current artificial intelligence frameworks. Over the past few years, there have been mounting examples of machines that can be made to see or hear things that aren’t there. When ‘noise’ is introduced that scrambles their recognition systems, these machines can be made to hallucinate. In a worst-case scenario, they could ‘hallucinate’ something as dangerous as the scene above: despite the stop sign being clearly visible to human eyes, the machine fails to recognise it.
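The ‘noise’ in question is not random static but a carefully computed perturbation. Below is a minimal sketch of one well-known technique, the fast gradient sign method (FGSM), written in PyTorch; the tiny classifier, the random input ‘image’, the label and the epsilon value are all illustrative placeholders, not a real attack on a traffic-sign recogniser.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier: flattens a 3x32x32 'image' and predicts 10 classes.
# A real attack would target an actual vision model and a real photograph.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for a camera frame
true_label = torch.tensor([3])     # stand-in for the class 'stop sign'
epsilon = 0.03                     # perturbation budget, small enough to be near-invisible

# Compute the gradient of the classification loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss,
# then clamp back to valid pixel values.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The point of the sketch is simply that the perturbation is derived from the model’s own gradients, which is why a change imperceptible to a human can flip the machine’s answer.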

Those working in AI describe such glitches as ‘adversarial examples’ or, sometimes more simply, as ‘weird events’.