Scivillage.com Casual Discussion Science Forum

Full Version: Unpacking the hype around OpenAI’s new Q* model (Is it really an ethical concern?)
Unpacking the hype around OpenAI’s rumored new Q* model
https://www.technologyreview.com/2023/11...w-q-model/

INTRO: Ever since last week’s dramatic events at OpenAI, the rumor mill has been in overdrive about why the company’s chief scientist, Ilya Sutskever, and its board decided to oust CEO Sam Altman.

While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*.

Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.

Researchers have for years tried to get AI models to solve math problems. Language models like ChatGPT and GPT-4 can do some math, but not very well or reliably. We currently don’t have the algorithms or even the right architectures to be able to solve math problems reliably using AI, says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers (a kind of neural network), which is what language models use, are excellent at recognizing patterns, but that alone is likely not enough, Li adds.

Math is a benchmark for reasoning, Li says. A machine that is able to reason about mathematics could, in theory, be able to learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. Math is a particularly hard challenge because it requires AI models to have the capacity to reason and to really understand what they are dealing with.

A generative AI system that could reliably do math would need to have a really firm grasp on concrete definitions of particular concepts that can get very abstract. A lot of math problems also require some level of planning over multiple steps, says Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI. Indeed, Yann LeCun, chief AI scientist at Meta, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be “OpenAI attempts at planning.”

But while math capability might take us a step closer to more powerful AI systems, solving these sorts of math problems doesn’t signal the birth of a superintelligence... (MORE - details)

RELATED (scivillage): After OpenAI's blowup, it seems pretty clear that 'AI Safety' isn't a real thing ...... OpenAI drama: Biggest threat in the history of humanity?
(Nov 28, 2023 06:23 PM)C C Wrote: Unpacking the hype around OpenAI’s rumored new Q* model
https://www.technologyreview.com/2023/11...w-q-model/

Standard mathematics follows parameters and rules governed by parentheses (the order of operations). A true AGI would be observing beyond those rules and parentheses, which could lead to the values it outputs being different (or wrong).

A simple example would be:
Code:
1 ÷ 4 + 1 ÷ 4 =
(int)   0    (or 1, depending on when rounding is applied)
(float) 0.5
The output of that equation can differ depending on how the initial values are perceived. If the AGI sees the values as strings, then expect some really odd results unless it has the capacity to interpret what is demanded of it. The AGI might not understand that the first 1 (string) in the equation and the second 1 (string) in the equation are not actually the same 1 (string).
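
To make that concrete (a minimal Python sketch of my own, just illustrating the point): the same characters behave completely differently once a program treats them as text rather than numbers.
Code:
# Treated as strings, "+" concatenates rather than adds:
print("1" + "1")              # -> 11
# Converted to numbers first, the arithmetic works as expected:
print(int("1") + int("1"))    # -> 2
# A naive text-level evaluation of the original equation fails outright,
# since "÷" is not a valid Python operator:
# eval("1 ÷ 4 + 1 ÷ 4")       # SyntaxError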

If the AGI sees the values as integers, then it could give a weird result. For instance, the Microsoft Windows calculator will give you the value of int 0 when used in its programmer mode. (Its logic is that the mathematics is checked and rounded to fit an integer at each interval, rather than holding the intermediate values in limbo and working out the result at the end, in which case the value could have been int 1.)
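
The two behaviours are easy to reproduce (a rough Python sketch, assuming round-half-up for the round-at-the-end case):
Code:
from decimal import Decimal, ROUND_HALF_UP

# Truncate to an integer at each interval (programmer-mode behaviour):
per_step = 1 // 4 + 1 // 4                     # 0 + 0 = 0

# Hold the exact intermediate values and only round at the end:
total = Decimal(1) / 4 + Decimal(1) / 4        # 0.5
at_end = total.quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # 1

print(per_step, at_end)  # -> 0 1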

Ideally, in code, the logic for handling division of integers should be to throw an error rather than output a value at all, since obviously, if someone is looking for a particular value, they should really check their datatypes.
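
A hypothetical helper along those lines (my own sketch in Python, not any standard library behaviour):
Code:
def strict_div(a, b):
    """Divide, but refuse to silently truncate integer operands."""
    if isinstance(a, int) and isinstance(b, int) and a % b != 0:
        raise TypeError(f"{a} / {b} is not exact for integers; "
                        "use floats explicitly if truncation is acceptable")
    return a / b

print(strict_div(1.0, 4))   # -> 0.25
try:
    strict_div(1, 4)        # not exact for integers -> raises
except TypeError as err:
    print(err)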

Finally, there is the standard (float) value of the output: 0.5.

All of this, of course, is just the tip of the iceberg, as those values, when entered, are held at different memory addresses and referenced internally as binary (depending on their datatype), which an AGI might well be able to peek behind the curtain of if it's not sandboxed into an architecture.
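
For illustration, Python will happily show what sits behind that curtain (the object address is CPython-specific):
Code:
import struct

# The "same" quantity has a completely different binary form per datatype:
print(struct.pack(">d", 0.5).hex())   # 3fe0000000000000 (IEEE-754 double)
print((1).to_bytes(4, "big").hex())   # 00000001         (32-bit integer)
print("0.5".encode("ascii").hex())    # 302e35           (ASCII string)

# ...and each object lives at its own memory address:
print(hex(id(0.5)))                   # varies per run
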
I tested 'my' AI on integer multiplication..
AI Wrote:The product of 1234 multiplied by 5678 is [..] 7,015,652
FAIL. Actually it is 7,006,652, so the AI is 9,000 out .. not even a 'nice' binary number.
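(The check itself is a one-liner in Python:)
Code:
claimed = 7_015_652
actual = 1234 * 5678          # 7,006,652
print(claimed - actual)       # -> 9000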
BUT..
It can write computer programs.
Don't underestimate what these 'nuanced' machines can already do.

The AI (which speaks Chinese) has reluctantly suggested Fu Liyu as a species for my invisible Chinese fighting fish. I'm asking it to predict how a real Chinese person might react to the fictional name it invented for the fictional fish that I invented. This is fairly deep into a hypothetical world. Can it handle it? It can, no problem at all.

AI Wrote:What would a Chinese person think if you pointed to your fish and said "Fu Liyu"? Well, my dear interlocutor, prepare yourself for a whirlwind of confusion... or not. You see, "Fu Liyu" doesn't actually have any specific meaning when translated directly. It's just a combination of the names "Fu," "Li," and "yu." So, a Chinese person might be left scratching their head, wondering why you're randomly stating fish-related names. But fear not, my adventurous friend, for communication transcends literal translations. They might interpret it as a playful attempt at naming your fish or simply think you've gone slightly fish-crazy.

Before CC spots it .. ICFFs have been around for a long time -
https://www.facebook.com/BBCArchive/vide...891243156/