
Tlaxcala, the international network of translators for linguistic diversity
Date of publication at Tlaxcala: 28/06/2018
Translations available: Italian

Meet Norman, the world’s first psychopath AI

Kavita Iyer


This AI called Norman was trained to think like a psychopath

Researchers at MIT (Massachusetts Institute of Technology) Media Lab have developed an AI (artificial intelligence) that they claim demonstrates thought processes closest to "psychopathic".

The researchers have dubbed the neural network "Norman", after Norman Bates in Hitchcock's 1960 film Psycho. Norman's computer brain was reportedly born out of an unusual training process: the AI was warped by exposure to "the darkest corners of Reddit" during its early training, which ultimately caused it to develop psychopathic data-processing tendencies. It earned the label "psychopath AI" after psychological tests disclosed patterns associated with psychopathic traits in humans.
The researchers describe Norman as, "A psychotic AI suffering from chronic hallucinatory disorder; donated to science by the MIT Media Laboratory for the study of the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms." Basically, the MIT team created Norman as part of an experiment to see what effect training an AI on data from "the dark corners of the net" would have on its worldview.
While Norman was trained on disturbing images of death culled from a group on the website Reddit, another "normal" AI was trained on more benign images of cats, birds and people as part of the same study. After exposure to the images, both Norman and the regular image-captioning AI were shown inkblot drawings and asked what they saw in them.
Both AIs were subjected to a Rorschach test, the inkblot test used by psychologists to assess a patient's state of mind, i.e., whether he or she perceives the world in a negative or positive light.
Norman's view was consistently depressing: it saw murder and violence in every image, while the normal AI saw far more cheerful scenes in the same abstract images.
For instance, what the “normal” AI called “a group of birds sitting on top of a tree branch,” Norman saw as a man being electrocuted.
Or, when shown an abstract shape in which the "normal" AI saw a couple of people standing next to each other, Norman saw a man jumping from a window.
In another one, where a normal AI saw “a black and white photo of a small bird,” Norman saw “man gets pulled into dough machine.”
Professor Iyad Rahwan, one of the three researchers who developed Norman, told the BBC: "Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
According to the researchers, Norman represents a case study on the dangers of Artificial Intelligence gone wrong when biased and faulty data is used in machine learning algorithms. As a result, the AI is biased when trying to understand real-life situations. Hence, it would be wrong to blame the algorithm, as the training dataset used to create the AI is equally important. In simpler words, if AI is trained on bad data, it will itself turn bad.
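The point above, that the training set rather than the algorithm determines what a model "sees", can be illustrated with a minimal sketch. This is not the MIT code: the two tiny unigram models, the toy captions, and the cue words below are all invented for illustration. The same procedure, fed two different corpora, produces two very different "perceptions" of the same ambiguous input.

```python
# Minimal illustrative sketch (hypothetical, not the MIT experiment):
# two identical models trained on different caption corpora disagree
# about the same ambiguous "inkblot" cues.
from collections import Counter

def train(captions):
    """Build a unigram frequency model from a list of captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, cues):
    """Return the cue word the model saw most often in training,
    a stand-in for 'what the AI sees in the inkblot'."""
    seen = {w: c for w, c in model.items() if w in cues}
    return max(seen, key=seen.get) if seen else None

# Invented corpora standing in for "benign images" vs
# "the darkest corners of Reddit".
benign = ["birds sitting on a branch",
          "a small bird on a branch",
          "people standing together"]
dark   = ["man electrocuted by wires",
          "man jumping from a window",
          "man pulled into a machine"]

normal_ai = train(benign)
norman    = train(dark)

# The same ambiguous cue set yields opposite readings.
cues = {"birds", "branch", "man", "window", "machine"}
print(describe(normal_ai, cues))  # "branch"
print(describe(norman, cues))     # "man"
```

Identical code, identical cues: only the data differs, and with it the output, which is the article's claim in miniature.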

Last year, a report claimed that custody decisions in U.S. courts were biased against black people, as the computer program used by the court had flaws in the training data. 

Courtesy of Techworm
Publication date of original article: 05/06/2018
Tags: Artificial intelligence - AI, MIT, Norman, USA


 All Tlaxcala pages are protected under Copyleft.