PROLOGUE: ELIZA was an early natural language processing computer program created from 1964 to 1966 [...] by Joseph Weizenbaum. ... Weizenbaum regarded the program as a way to demonstrate the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary. Many academics believed that the program could positively influence the lives of many people, particularly those suffering from psychological issues, and that it could aid the doctors treating such patients. While ELIZA was capable of engaging in discourse, it could not converse with true understanding. Even so, many early users were convinced of ELIZA’s intelligence and understanding, despite Weizenbaum’s insistence to the contrary...
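The "superficiality" Weizenbaum had in mind is easy to see in code. The sketch below is a hypothetical, heavily simplified illustration of the ELIZA technique (keyword pattern matching plus pronoun "reflection"), not Weizenbaum's actual DOCTOR script; the rules and templates are invented for this example:

```python
import re

# Hypothetical ELIZA-style rules: a regex paired with a response template.
# Matched text is echoed back with first-person words swapped for second-person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    # Fire the first rule whose pattern matches; no understanding is involved,
    # only surface text manipulation.
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel sad about my job"))  # Why do you feel sad about your job?
print(respond("Hello there"))              # Please tell me more.
```

A few dozen such rules are enough to sustain a convincing-seeming conversation, which is precisely why users projected understanding onto a program that has none.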
- - -
In bot we trust? People put more faith in computer algorithms than other humans
https://www.studyfinds.org/people-trust-...er-humans/
EXCERPTS: Do you find yourself reaching for the calculator, even for the really simple math problems? [...] Despite fear over how intrusive these algorithms are becoming, a new study finds people are actually more willing to trust a computer than their fellow man. ... it’s not just the “heavy lifting” humans are running to computers for help with. From choosing the next song in the playlist to finding better fitting pants, algorithms are making more and more of the daily decisions in people’s lives — whether they realize it or not.
“Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,” says Eric Bogert, a Ph.D. student in the Terry College of Business Department of Management Information Systems, in a university release. “It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.”
Researchers evaluated the responses of 1,500 individuals tasked with counting the people in a series of photographs. The team also supplied participants with suggestions on how to do this, generated either by other people or computer algorithms. As the crowd in the photos got bigger and more difficult to count, volunteers were more likely to turn to the computer’s suggestions rather than go with their own gut or the “wisdom of the crowd.”
[...] The UGA team adds these tasks are also the kind of problems humans expect computers to be good at solving. “This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” explains study co-author Aaron Schecter. “One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there — like income and credit score — so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren’t considered.”
Replacing human biases with computer biases? Just because a computer program works from data doesn’t mean it is free of bias. Schecter notes facial recognition and hiring algorithms have both been criticized recently over cultural biases built into their programs. These can lead to inaccuracies when matching faces to identities or when screening qualified job candidates. Although simple counting tasks won’t display bias, researchers caution it’s important to know how machines arrive at more complex decisions... (MORE - details)