Scivillage.com Casual Discussion Science Forum

Full Version: An earlier AI was modeled on a psychopath. Biased algorithms are still a major issue.
https://www.abc.net.au/news/science/2023.../102878458

INTRO: It started as an April Fools' Day prank. On April 1, 2018, researchers from the Massachusetts Institute of Technology (MIT) Media Lab, in the United States, unleashed an artificial intelligence (AI) named Norman.

Within months Norman, named for the murderous hotel owner in Robert Bloch's — and Alfred Hitchcock's — Psycho, began making headlines as the world's first "psychopath AI."

But Pinar Yanardag and her colleagues at MIT hadn't built Norman to spark global panic. It was an experiment designed to demonstrate one of AI's most pressing issues: how biased training data can distort the technology's output.

Five years later, the lessons from the Norman experiment have lingered longer than its creators ever thought they would.

"Norman still haunts me every year, particularly during my generative AI class," Dr Yanardag said. "The extreme outputs and provocative essence of Norman consistently sparks captivating classroom conversations, delving into the ethical challenges and trade-offs that arise in AI development."

The rise of free-to-use generative AI apps like ChatGPT, and image generation tools such as Stable Diffusion and Midjourney, has seen the public increasingly confronted by the problems of inherent bias in AI.

For instance, recent research showed that when ChatGPT was asked to describe what an economics professor or a CEO looks like, its responses were gender-biased: it answered in ways that suggested these roles were only performed by men.

Other types of AI are being used across a broad range of industries. Companies are using them to filter through resumes, speeding up the recruitment process. Bias might creep in there, too.

Hospitals and clinics are also looking at ways to incorporate AI as a diagnostic tool to search for abnormalities in CT scans and mammograms or to guide health decisions. Again, bias has crept in.

The problem is the data used to train AI contains the same biases we encounter in the real world, which can lead to a discriminatory AI with real-world consequences. Norman might have started as a joke but in reality, it was a warning... (MORE - details)
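To make that mechanism concrete, here is a deliberately minimal sketch in Python. The toy corpus and the frequency-counting "model" are invented for illustration; the point is simply that a system which mirrors the statistics of skewed training data will reproduce the skew in its output, much like the gender-biased CEO descriptions above.

```python
from collections import Counter

# Hypothetical toy corpus about CEOs, skewed toward "he" in the same way
# real web-scraped training data over-represents men in executive roles.
corpus = [
    "he is the CEO",
    "he runs the company",
    "he leads the board",
    "he founded the startup",
    "she is the CEO",
]

# A trivial "model": count which pronoun co-occurs with CEO-style sentences.
pronoun_counts = Counter(
    word
    for sentence in corpus
    for word in sentence.split()
    if word in ("he", "she")
)

def predict_pronoun() -> str:
    # The model has no notion of fairness; it simply reproduces
    # the majority pattern present in its training data.
    return pronoun_counts.most_common(1)[0][0]

print(predict_pronoun())  # the skewed corpus makes "he" the learned default
```

Real generative models are vastly more complex, but the failure mode is the same in kind: no matter how sophisticated the architecture, a model trained on imbalanced data inherits that imbalance unless it is explicitly measured and corrected.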

Meet 'Norman', World’s First Psychopathic AI ... https://youtu.be/2MG7gGQvaG0