MIT is home to "Norman", a new artificial intelligence named after the protagonist of Hitchcock's 1960 cult film "Psycho". From the outset, this AI was deliberately trained to recognize scenes of violence, death, murder and human suffering, including veiled depictions. The training material was drawn from numerous posts on entertainment portals, where documentary footage is mixed with memes, drawings and cartoons.
What distinguishes Norman is his "deep deduction": his algorithms make the neural network generate a description of each scene and pick out its key details. That textual information is then combined with the visual data when analyzing new images. As a result, if Norman sees a photo of a police officer and a child with ice cream, he may conclude that it shows a disguised psychopath poisoning the child. Norman does not care what the picture actually depicts; he gives everything a dark, frightening interpretation - that is what he was created for.
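The pipeline described above - detect objects, associate them with learned text, caption new images from those associations - can be illustrated with a toy sketch. This is a hypothetical, drastically simplified stand-in (not MIT's actual model): a "captioner" that learns object-to-phrase associations from whatever corpus it is fed, so the same detected objects yield very different captions depending on the training material.

```python
from collections import defaultdict

# Toy sketch (hypothetical, not the real Norman): a captioner that
# learns which phrase tends to accompany which detected object.
class ToyCaptioner:
    def __init__(self):
        # assoc[object][phrase] = co-occurrence count seen in training
        self.assoc = defaultdict(lambda: defaultdict(int))

    def train(self, corpus):
        # corpus: list of (detected_objects, caption_phrase) pairs
        for objects, phrase in corpus:
            for obj in objects:
                self.assoc[obj][phrase] += 1

    def caption(self, objects):
        # sum phrase counts over all detected objects, return the best
        scores = defaultdict(int)
        for obj in objects:
            for phrase, count in self.assoc[obj].items():
                scores[phrase] += count
        return max(scores, key=scores.get) if scores else "unknown scene"

# Two captioners, identical code, different training corpora.
normal = ToyCaptioner()
normal.train([(["officer", "child", "ice cream"], "a friendly scene in the park")])

dark = ToyCaptioner()
dark.train([(["officer", "child", "ice cream"], "a disguised psycho poisons a child")])

print(normal.caption(["officer", "child"]))  # a friendly scene in the park
print(dark.caption(["officer", "child"]))    # a disguised psycho poisons a child
```

The point of the sketch is that the interpretation lives entirely in the training data, not in the algorithm: both captioners run the same code.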
The AI psychopath is designed as a counterpoint to other AIs. Unlike ordinary image-recognition systems, Norman can see the hidden meaning in a picture, its emotional coloring - but only in a negative light, and that was done deliberately. After the failures of Google's and Facebook's search bots, which could not withstand the sarcasm and cunning of Internet trolls, the problem of how to train new neural networks became acute, and the sinister Norman serves as an illustration of it.
In short, he is a vivid example of the principle "what you teach is what you get." If algorithms for detecting racism or political extremism are designed by a closet racist or anarchist, they will be deliberately skewed, and a neural network trained on them will make gross mistakes. Norman's answers sound insane to the general public - he thinks like a real psychopath - but there is a definite logic behind his judgments.
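The "what you teach is what you get" principle can be shown with a minimal sketch. Everything here is invented for illustration (the texts, the labels, the majority-vote classifier): a naive keyword model trained by a fair labeler and by a prejudiced one diverges on the same input, because the model can only reproduce the labels it was given.

```python
from collections import Counter

# Toy sketch of biased training data (hypothetical, not any real system):
# learn a label per word by majority vote over the training examples.
def train(examples):
    votes = {}
    for text, label in examples:
        for word in text.lower().split():
            votes.setdefault(word, Counter())[label] += 1
    return {word: c.most_common(1)[0][0] for word, c in votes.items()}

def classify(model, text):
    # majority vote over the labels of known words in the text
    labels = [model[w] for w in text.lower().split() if w in model]
    return Counter(labels).most_common(1)[0][0] if labels else "neutral"

# Same algorithm, same texts - only the labels differ.
fair = train([("peaceful protest", "ok"), ("violent riot", "flag")])
biased = train([("peaceful protest", "flag"), ("violent riot", "flag")])

print(classify(fair, "peaceful march and protest"))    # ok
print(classify(biased, "peaceful march and protest"))  # flag
```

The biased model is not "wrong" by its own lights; like Norman, it applies a consistent logic learned from deliberately skewed teaching.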