Article  OpenAI drama: Biggest threat in the history of humanity? (Or: why should I care?)

#1
C C
Applicable Wikipedia entries: Removal of Sam Altman from OpenAI ...... Sam Altman ...... OpenAI ...... Artificial general intelligence
- - - - - - - - - - - -

OpenAI ‘was working on advanced model so powerful it alarmed staff’
https://www.theguardian.com/business/202...rmed-staff

INTRO: OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company. The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported...
- - - - - - - - - - - -

OpenAI and the biggest threat in the history of humanity
https://unchartedterritories.tomaspueyo....dium=email

EXCERPT: . . . AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do: anything mental, and through robots, anything physical. This includes deciding what it wants to do and then executing it, with the thoughtfulness of a human, at the speed and precision of a machine.

Here’s the issue: If you can do anything that a human can do, that includes working on computer engineering to improve yourself. And since you’re a machine, you can do it at the speed and precision of a machine, not a human. You don’t need to stop to pee, sleep, or eat. You can create 50 versions of yourself, and have them talk to each other not with words, but with data flows that go thousands of times faster.

So in a matter of days—maybe hours, or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you’re more intelligent, you can improve yourself slightly faster, and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God.

This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will quickly surpass our intelligence and become extremely intelligent. Once FOOM happens, we will reach the singularity: a moment when so many things change so fast that we can’t predict what will happen beyond that point... (MORE - missing details)
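A rough way to formalize that growth claim (my notation, not the article's), assuming the self-improvement rate depends only on current intelligence I(t):

dI/dt = k·I   =>  I(t) = I₀·e^(k·t)           (rate proportional to intelligence: exponential, but finite at every t)
dI/dt = k·I²  =>  I(t) = I₀ / (1 − k·I₀·t)    (each gain also accelerates the improving itself: diverges at t* = 1/(k·I₀))

The FOOM/singularity claim is, in effect, a bet that the feedback behaves more like the second equation than the first.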
- - - - - - - - - - - -

The OpenAI Drama: What Is AGI And Why Should You Care?
https://www.forbes.com/sites/nishatalaga...eb8ffd353d

EXCERPTS: Artificial general intelligence is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue to the limelight, with speculation that the leadership shakeup may have been due to disagreements over safety concerns regarding a breakthrough on AGI. Whether that is true or not (and we may never know), AGI is still a serious matter. All of which raises the questions: what exactly is AGI, what does it mean for all of us, and what, if anything, can the average person do about it?

[...] It is not clear how humanity would control such an AGI or what decisions it would make for itself.

Will AGI Happen In Our Lifetimes? Hard to say. Experts differ on whether AGI is unlikely ever to happen or is merely a few years away...

Should We Be Worried? Yes, I believe so. If nothing else, this week's drama at OpenAI shows how little we know about the development of a technology so fundamental to humanity's future, and how unstructured our global conversation on the topic is. Fundamental questions exist, such as:

• Who will decide if AGI has been reached?

• Would we even know that it has happened or is imminent?

• What measures will be in place to manage it?

• How will countries around the world collaborate or fight over it?

• And so on. (MORE - missing details)

RELATED (scivillage): After OpenAI's blowup, it seems pretty clear that 'AI Safety' isn't a real thing
#2
Secular Sanity
Didn't sift through all of your links. Not sure if you tossed in the letter that Elon Musk said was sent to him.

"To the Board of Directors of OpenAI:

We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman.

We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent." Continue reading ↓

https://web.archive.org/web/202311212252...e7e242f858
#3
C C
(Nov 24, 2023 05:08 PM)Secular Sanity Wrote: Didn't sift through all of your links. Not sure if you tossed in the letter that Elon Musk said was sent to him. [...]

So basically they're saying that he's the epitome of a progressive: an opportunistic do-gooder who plays the social-justice champion in public while operating quite differently in practice, especially as an entrepreneurial administrator. A capitalist virtue-posturer who exploits social-democrat and critical-theory values, concepts, and activism as cover for getting things done in an era of reigning Left intellectualism.

But since that binary, medieval "bully/victim" template (systemic oppression) is the simplistic lens through which they zealously interpret and represent everything (socioeconomic cultural analysis), it's hard to say how valid the criticism really is. The truth probably lies somewhere in the middle.
- - - - - - - - - - -

[...] We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. [...] Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.

Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.

We provide concrete examples of Sam and Greg's dishonesty & manipulation, including:

• Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019.

• Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.

• Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

• Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

• The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.

• Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

• Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.


Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units....
#4
confused2
Quote: Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.
Is it possible that some (or most) of this is a sort of simulation? A game too complex for (most) humans to grasp?

