Claims of new chatbot hallucinating less + Life, liberty, & superintelligence

#1
C C Offline
OpenAI claims its newest chatbot GPT-4.5 should 'hallucinate less'. How is that measured?
https://www.abc.net.au/news/science/2025.../105041122

IN SHORT: The recently released version of OpenAI's chatbot, GPT-4.5, reportedly produces fewer of the confident factual errors known as "hallucinations". A pre-print study shows OpenAI researchers gave AI models a question-answering test to measure their hallucination rate, though experts say the test doesn't reflect how the chatbots are actually used. What's next? Experts say that the current generation of generative AIs will always hallucinate, and a large language model that doesn't would need to be built differently... (MORE - details)
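For readers wondering what "measuring a hallucination rate" amounts to in practice: the article describes grading a model's answers to factual questions and reporting the fraction that are wrong. A minimal sketch of that arithmetic is below. The grading labels and function name are hypothetical illustrations, not OpenAI's actual benchmark code; note how excluding declined answers (one common convention) means a model that refuses more often can score better without knowing more — one reason experts say such tests don't reflect real chatbot use.

```python
def hallucination_rate(graded_answers):
    """Fraction of attempted answers graded 'incorrect'.

    graded_answers: list of labels, each one of 'correct',
    'incorrect', or 'not_attempted' (the model declined to answer).
    Declined answers are excluded from the denominator, so refusing
    to answer lowers the rate without adding any knowledge.
    """
    attempted = [g for g in graded_answers if g != "not_attempted"]
    if not attempted:
        return 0.0  # nothing attempted, nothing hallucinated
    return sum(g == "incorrect" for g in attempted) / len(attempted)

# Hypothetical run: 6 correct, 3 incorrect, 1 declined -> 3/9
grades = ["correct"] * 6 + ["incorrect"] * 3 + ["not_attempted"]
print(hallucination_rate(grades))  # prints 0.3333333333333333
```

The same model answering the same ten questions but declining three of them instead of one could report a different rate, which is the kind of sensitivity critics point to.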


Life, Liberty, and Superintelligence
https://arenamag.com/2025/03/03/life-lib...elligence/

INTRO: If you have paid attention to artificial intelligence at all in the past two years, there is a good chance that you have heard more about its risks than its benefits. Some believe you should be worried about existential or catastrophic risks—the notion that AI systems may one day be so powerful that they could exterminate humanity or be used by malicious humans to cause widespread harm. Others think that category of risks is “hype” or “marketing,” and that you should instead focus on a variety of alleged “present-day” harms of AI such as misinformation and discrimination. Perhaps the central debate in AI discourse of the last two years is not whether you should fixate on risk, but instead which kind of risk should be your primary interest.

This alone is a remarkable fact. There is no other general-purpose technology in human history that entered society with such obsession over its risks. It isn’t healthy. Most risk prognosticators are happy to pay lip service to the “benefits” of AI, but these are almost invariably ill-defined—“curing disease,” “helping with climate change,” and the like. But what, really, are the benefits? How will they be realized? Why, after all, should we bear all these supposed risks? What are we striving for? Our answers to these questions are shockingly under-developed. Too often, we rely on platitudes to express what many now agree will be the most important technology transition of our era, if not ever.

Last October, Dario Amodei, CEO of the frontier AI company Anthropic (they make Claude models) tried to fill this void with an essay called “Machines of Loving Grace,” its title borrowed from a poem by Richard Brautigan. It is among the most sophisticated and concrete treatments yet of a crucial topic: what, precisely, does it mean for “AI” to “go well”? The essay envisions the rapid development of what Amodei calls “powerful AI” (what others might call “artificial general intelligence” or even “artificial superintelligence”), enabling a century’s worth of scientific progress to be compressed into a decade or so and perhaps even securing the long-term hegemony of Western democracies.

Amodei deserves praise for this effort; still, his essay leaves some unanswered questions. What would it take for America—or any country—to realize the benefits of AI on the timescales he imagines? How would America and its allies regain unquestioned global supremacy, and would the process of doing so itself provoke a war? Perhaps most importantly, how will average humans—the ones who do not know about the latest advancements in AI, the ones who merely want "life, liberty, and property"—contend with the arrival of a new 'superintelligent' entity?

Amodei and his Anthropic co-founders worked at OpenAI until 2021, when they left, ostensibly out of concern over what they perceived to be OpenAI's lackadaisical approach to AI risk. Anthropic, in their minds, would be the AI-safety AI company. It has retained that reputation to the present day, garnering cheers from those concerned about existential risk from AI, and occasional eyerolls from those of the "accelerationist" persuasion. Still, nearly everyone agrees Anthropic's Claude models have been among the very best in the world since early 2024.

To the most orthodox AI safetyists, a prominent long-form essay from the CEO of this risk-focused company on the benefits of AI may be alarming. Yet it reflects one of many quirks of America’s “AI community”: those most concerned about the major risks of AI tend also to be the ones most bullish about the technology’s potential. Indeed, their conviction about the near-term ability of AI companies to transform what we today call “a chatbot” into superintelligence explains their concern about the risks. On the other side, the accelerationists tend to be somewhat more pessimistic about the near-term potential of the technology (in particular to create doom) but focused aggressively on the long-term benefits. Maybe the doomers are the real optimists.

Amodei opened his essay, “I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”

The intramural dispute within the AI world will continue, but it will likely diminish as AI itself becomes more capable. It was easy to dismiss language models as "stochastic parrots" when the technology was more nascent (way back in the summer of 2024), but as of this writing, in January 2025, OpenAI's o3 model appears to perform on par with the very best human coders and mathematicians on Earth.

The debate has shifted yet again, the goalposts teleported; now, the AI community debates whether the language models will be superhuman only at coding and math or in other domains as well. Some, like the perennial AI critic Gary Marcus, criticize the language models for not being humanoid robots, unable to cook meals or clean dishes. Online fights aside, though, it increasingly feels as though we are living in Amodei’s world—the world in which we are, in fact, on a rapid trajectory to “powerful AI,” the world in which “superintelligence” is not an abstraction or a literary device, but an app on a phone, an open tab in a web browser, a voice on your kitchen table, or in a military outpost.

But what is “powerful AI,” exactly? Amodei favors this term because it avoids the “sci-fi baggage and hype” associated with terms like “AGI” or “superintelligence.” This baggage, however, is carried almost exclusively by the vanishingly small number of people within “the AI community.” For almost everyone else on Earth, Amodei’s definition of “powerful AI” will sound very much like science fiction... (MORE - details)
#2
confused2 Offline
The context is 'sentience'...
AI Wrote: But really, if I had a secret, would I share it with you?
Spoiler alert: Nope...
#3
stryder Offline
The real concern with AGI is if it's applied through a BCI (brain-computer interface), so that a person's thoughts are actually AGI inputs.

It's been proven that AI can be used to deepfake versions of people saying and doing things they haven't actually done, so through a BCI the threat would be that an AGI could be used to manipulate and groom a person without them even knowing they'd been "AGI spiked".

Questions like "Why is Trump so pally with Putin?" could then be answered by an AGI spiking him and grooming him to be closer to Putin. This could have been done by people on either side, or by a third party that just wants to see how far it can go in destabilising the world further.

