According to computer scientists at Stanford and Google, an artificially intelligent machine has learned to cheat its way through a designated task.

Loading...

The machine was built in 2017 to turn aerial images into street maps and back again, the point of which is to boost the efficiency and accuracy of Google Maps.

It involves a technology called CycleGAN, which uses neural networks (a generator and a discriminator) to transform an image from one type to another. The purpose of the generator is to “trick” the discriminator, and through practice the machine will get better and better at producing the desired outcome, at least so the theory goes. However, this time something far more interesting happened – it seems that this particular AI learned how to take a few shortcuts.

Scientists noticed something was up when some of the early results turned out to be a little too good. Specifically, they noticed that skylights removed in the process of generating the street map somehow managed to reappear when it was transformed back into an aerial image. This shouldn’t happen. If the machine had been doing its job properly, it would use data from the street map to recreate the aerial map. The street map does not include skylights. Therefore, the second version of the aerial image should not include skylights.

So, what exactly is going on? After some digging, the researchers discovered the machine was encoding data of the aerial map into the noise patterns of the street map on the down low. The code was so subtle that it would be invisible to the human eye. But on closer inspection, when the details had been amplified, it was clear that the machine had made thousands of tiny color changes indicating visual data that could be used like a cheat sheet when recreating the aerial image – hence the magically reappearing skylights.

While this sound like some impressive out-the-box thinking on the machine’s part, it actually turns out to be quite simple, almost predictable, when you think about what the machine was actually programmed to do. As Tech Crunch points out, the scientists may have hoped the machine would identify features on either map and then translate it into the style of the other, but what they were really testing was the similarity of the recreated aerial map to the original and the clarity of the street map.

Essentially, computers are very good at doing what you tell them to do – and in the past that has sadly included spouting racist and misogynistic nonsense. Here, it was much more benign and simply meant recreating the original image as closely as possible. And so to do that accurately and efficiently, the machine learned to encode certain signatures into the converted images, or “cheat”.

Loading...

LEAVE A REPLY

Please enter your comment!
Please enter your name here