This AI called Norman was trained to think like a psychopath

Researchers at the MIT (Massachusetts Institute of Technology) Media Lab have developed an AI (artificial intelligence) that they say demonstrates thought processes close to those of a "psychopath".
The researchers have dubbed the neural network "Norman", after Norman Bates in Hitchcock's 1960 film Psycho. Norman's warped outlook was born of an unusual training process: the AI was exposed to "the darkest corners of Reddit" during its early training, which skewed the way it processes data. It was eventually labelled the "psychopath AI" after psychological tests revealed response patterns associated with psychopathic traits in humans.
The researchers describe Norman as, "A psychotic AI suffering from chronic hallucinatory disorder; donated to science by the MIT Media Laboratory for the study of the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms." In essence, the MIT team created Norman as an experiment to see what effect training an AI on data from "the dark corners of the net" would have on its worldview.
Norman was trained on disturbing images of death culled from a group on the website Reddit, while another, "normal" image-captioning AI was trained on more benign images of cats, birds and people as part of the same study. After this exposure, both Norman and the regular AI were shown inkblot drawings and asked what they saw in them.
The AIs were subjected to the Rorschach test, a series of inkblots used by psychologists to assess a patient's state of mind, for example whether he or she perceives the world in a negative or positive light.
Norman's responses were relentlessly dark: it saw murder and violence in every image, while the normal AI found far more cheerful interpretations of the same abstract shapes.
For instance, what the “normal” AI called “a group of birds sitting on top of a tree branch,” Norman saw as a man being electrocuted.
Or, when shown an abstract shape in which the "normal" AI saw a couple of people standing next to each other, Norman saw a man jumping from a window.
In another case, where the normal AI saw "a black and white photo of a small bird," Norman saw "man gets pulled into dough machine."
Professor Iyad Rahwan, one of the three researchers who developed Norman, told the BBC: "Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
According to the researchers, Norman is a case study in the dangers of Artificial Intelligence gone wrong when biased or faulty data is used in machine learning algorithms: the resulting AI is biased in how it interprets real-life situations. It would therefore be wrong to blame the algorithm alone, as the training dataset used to create the AI is equally important. Put simply, if an AI is trained on bad data, the AI itself will turn bad.
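The effect the researchers describe can be illustrated with a toy sketch. The code below is not Norman's actual model (which was a deep image-captioning network); it is a deliberately simple word-count classifier, with made-up caption data standing in for the Reddit and "normal" training sets, showing how the same ambiguous input gets read differently depending on what the model was trained on.

```python
from collections import Counter

def train(captions):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {}
    for text, label in captions:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training vocabulary best matches the input."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

# Hypothetical training sets: one skewed toward violent captions
# (standing in for Norman's Reddit data), one mostly benign.
dark_data = [
    ("man gets electrocuted", "violent"),
    ("man jumps from window", "violent"),
    ("a shot of grass", "benign"),
]
benign_data = [
    ("birds on a tree branch", "benign"),
    ("a man standing near a door", "benign"),
    ("man gets electrocuted", "violent"),
]

dark_model = train(dark_data)
benign_model = train(benign_data)

# The same ambiguous input is interpreted differently by each model:
# the "dark" model labels it violent, the benign one labels it benign.
print(predict(dark_model, "man standing"))
print(predict(benign_model, "man standing"))
```

The classifier itself is identical in both cases; only the training data differs, which is the point Rahwan makes above about data mattering more than the algorithm.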
Last year, a report claimed that custody decisions in U.S. courts were biased against black people, as the computer program used by the courts had flaws in its training data.