We have featured many articles about image recognition technology and how computer scientists and researchers are using imagery in novel ways to train artificial intelligence, for purposes ranging from simple object recognition to editing photos on its own.
Now a team of scientists at MIT, Cambridge, Massachusetts' premier institution of higher learning in the field of technology, has crafted an artificial intelligence that can safely be deemed a psychopath, given its tendency to interpret photos in the most twisted ways possible.
But don’t worry, it’s not the AI’s fault – this is a story about nurture, not nature.
That’s because the team at MIT trained the artificial intelligence on images from an unidentified subreddit devoted to disturbing imagery, in hopes of seeing how that diet would shape its development.
And they were not disappointed.
The violently inclined AI, called Norman after the killer in Alfred Hitchcock’s classic horror movie “Psycho,” is a study in biased artificial intelligence or, rather, in how such bias develops. The team behind Norman seeks to show that often it isn’t the AI itself that is biased; it is merely reflecting the biased material it was fed and on which its algorithms were built. Norman’s indoctrination with decidedly “biased” material, and violent stuff at that, stands in contrast to the more general diet of data teams use when developing a standard AI. Again, the point was to demonstrate the need for a broad approach – and the consequences of forgoing it.
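The idea that the same algorithm yields different behavior from different training data can be sketched with a toy example. Everything here is invented for illustration and has nothing to do with MIT's actual models: a trivial "captioner" that simply remembers the most common caption it saw for each stimulus during training.

```python
from collections import Counter

def train_captioner(corpus):
    """'Train' a trivial captioner: for each stimulus, remember the
    most frequent caption in the training corpus."""
    seen = {}
    for stimulus, caption in corpus:
        seen.setdefault(stimulus, Counter())[caption] += 1
    return lambda stimulus: seen[stimulus].most_common(1)[0][0]

# Hypothetical training data: identical stimuli, different labels.
standard_corpus = [("inkblot_1", "a bird"), ("inkblot_1", "a bird"),
                   ("inkblot_1", "a flower")]
biased_corpus = [("inkblot_1", "something violent"),
                 ("inkblot_1", "something violent"),
                 ("inkblot_1", "a bird")]

# Same training code, different corpora...
standard_ai = train_captioner(standard_corpus)
norman_like = train_captioner(biased_corpus)

# ...so the same input gets very different captions.
print(standard_ai("inkblot_1"))
print(norman_like("inkblot_1"))
```

The "bias" here is plainly not in `train_captioner`, which is identical in both runs, but in the corpus each model was shown, which is the argument the Norman team is making at far greater scale.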
As even a cursory glance at the chart below reveals, Norman not only exhibits disturbing classification habits but also differs remarkably from the AI that received standard training. We can only hope that, when the robots finally take over, their AI is of the standard sort and not the biased version that Norman represents.