Jan 9, 2024 08:47 PM
(This post was last modified: Jan 10, 2024 03:16 AM by C C.)
GIZMODO: "In essence, botshit is a play on bullshit and refers to the ways in which generative AI can be used to create content that is filled with inaccurate or misleading information. In other words, it’s basically a re-branding of “hallucination,” which is the well-known term that refers to LLMs’ penchant for spouting incorrect or wholly fabricated 'facts'."
- - -- - - - - - - - - - -
Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live
https://www.theguardian.com/commentisfre...-democracy
INTRO: During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”
It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).
The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.
Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote... (MORE - details)
Deepfake Hillary video: https://twitter.com/ramble_rants/status/...5266451456
- - - - - - - - - - - -
Beware of botshit: How to manage the epistemic risks of generative chatbots
https://papers.ssrn.com/sol3/papers.cfm?...id=4678265
PAPER'S ABSTRACT: Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses.
This means chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as ‘hallucinations’. When humans use this untruthful content for tasks, it becomes what we call ‘botshit’.
This article focuses on how to use chatbots for content-generation work while mitigating the epistemic (i.e., relating to the production of knowledge) risks associated with botshit. Drawing on risk-management research, we introduce a typology framework that orients how chatbots can be used based on two dimensions: response veracity verifiability and response veracity importance.
The framework identifies four modes of chatbot work (authenticated, autonomous, automated, and augmented), each with an associated botshit-related risk (ignorance, miscalibration, routinization, and black boxing). We describe and illustrate each mode and offer advice to help chatbot users guard against the botshit risks that come with each mode.
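
The framework reads as a simple two-by-two lookup: classify a chatbot use case by whether the truth of the response can be verified and whether its truth matters, then read off the work mode and its characteristic botshit risk. Below is a minimal Python sketch of that idea. It is only illustrative: the mode-to-risk pairing follows the order the abstract lists them in, and the placement of each mode on the verifiability/importance grid is an assumption of mine, since the abstract does not spell it out.

# Minimal sketch of the paper's two-dimensional typology (assumptions noted).
# The (verifiability, importance) -> mode placement is illustrative only;
# the mode -> risk pairing follows the abstract's listing order.

from dataclasses import dataclass

@dataclass(frozen=True)
class ChatbotUse:
    veracity_verifiable: bool   # can the truth of the response be checked?
    veracity_important: bool    # does the task depend on the response being true?

# Hypothetical grid placement (an assumption, not stated in the abstract).
MODES = {
    (True,  True):  ("authenticated", "ignorance"),
    (True,  False): ("automated",     "routinization"),
    (False, True):  ("augmented",     "black boxing"),
    (False, False): ("autonomous",    "miscalibration"),
}

def classify(use: ChatbotUse):
    """Return (work mode, associated botshit risk) for a chatbot use case."""
    return MODES[(use.veracity_verifiable, use.veracity_important)]

# Example: output whose accuracy matters but is hard to verify.
print(classify(ChatbotUse(veracity_verifiable=False, veracity_important=True)))
# -> ('augmented', 'black boxing') under the assumptions above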