- - - - - - - - - - - -
OpenAI ‘was working on advanced model so powerful it alarmed staff’
https://www.theguardian.com/business/202...rmed-staff
INTRO: OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company. The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported...
- - - - - - - - - - - -
OpenAI and the biggest threat in the history of humanity
https://unchartedterritories.tomaspueyo....dium=email
EXCERPT: . . . AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do, anything mental and, through robots, anything physical. This includes deciding what it wants to do and then executing it, with the thoughtfulness of a human, at the speed and precision of a machine.
Here’s the issue: if you can do anything a human can do, that includes working on computer engineering to improve yourself. And since you’re a machine, you can do it at the speed and precision of a machine, not a human. You don’t need to pee, sleep, or eat. You can create 50 versions of yourself and have them talk to each other not with words, but with data flows that are thousands of times faster.
So in a matter of days—maybe hours, or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you’re more intelligent, you can improve yourself slightly faster, and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God.
This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent. Once FOOM happens, we will reach the singularity: a moment when so many things change so fast that we can’t predict what will happen beyond that point... (MORE - missing details)
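The FOOM argument above boils down to a simple feedback loop: the system's rate of improvement is proportional to its current capability, so each improvement cycle arrives faster (or buys more) than the last. A toy sketch of that compounding dynamic, with entirely arbitrary illustrative numbers (the function name, the 10% per-cycle gain, and the "1000x human level" threshold are all assumptions for illustration, not anything from the excerpt):

```python
# Toy model of the recursive self-improvement ("FOOM") loop: capability
# compounds multiplicatively each cycle, so crossing any fixed threshold
# takes only logarithmically many cycles. Numbers are arbitrary, not
# predictions about real AI systems.

def foom_cycles(capability: float, human_level: float,
                gain_per_cycle: float, max_cycles: int = 10_000) -> int:
    """Count improvement cycles until capability exceeds 1000x the
    human level (a stand-in for 'superintelligence')."""
    cycles = 0
    while capability < 1000 * human_level and cycles < max_cycles:
        capability *= gain_per_cycle  # each cycle, the system improves itself
        cycles += 1
    return cycles

# Starting just below human level, even a modest 10% gain per cycle
# crosses the 1000x threshold in under a hundred cycles.
print(foom_cycles(capability=0.9, human_level=1.0, gain_per_cycle=1.1))
```

The point of the sketch is only that exponential compounding makes the threshold-crossing fast once the loop closes; it says nothing about whether real systems can sustain such a loop, which is exactly what the singularity debate is about.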
- - - - - - - - - - - -
The OpenAI Drama: What Is AGI And Why Should You Care?
https://www.forbes.com/sites/nishatalaga...eb8ffd353d
EXCERPTS: Artificial general intelligence is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue to the limelight, with speculation that the leadership shakeup may have been due to disagreements over safety concerns about a breakthrough on AGI. Whether that is true or not—and we may never know—AGI is still serious. All of which raises the questions: what exactly is AGI, what does it mean for all of us, and what—if anything—can the average person do about it?
[...] It is not clear how humanity would control such an AGI or what decisions it would make for itself.
Will AGI Happen In Our Lifetimes? Hard to say. Experts differ on whether AGI will ever happen or whether it is merely a few years away...
Should We Be Worried? Yes, I believe so. If nothing else, this week’s drama at OpenAI shows how little we know about the technology development that is so fundamental to humanity’s future—and how unstructured our global conversation on the topic is. Fundamental questions exist, such as:
• Who will decide if AGI has been reached?
• Would we even know that it has happened or is imminent?
• What measures will be in place to manage it?
• How will countries around the world collaborate or fight over it?
• And so on. (MORE - missing details)
RELATED (scivillage): After OpenAI's blowup, it seems pretty clear that 'AI Safety' isn't a real thing