Jan 4, 2025 06:53 PM
https://www.nytimes.com/2025/01/03/scien...=url-share
EXCERPTS: A new paper by Stephen Woodcock, a mathematician at the University of Technology Sydney, suggests that those efforts may have been for naught: It concludes that there is simply not enough time until the universe expires for a defined number of hypothetical primates to produce a faithful reproduction of “Curious George,” let alone “King Lear.” Don’t worry, scientists believe that we still have googol years — 10¹⁰⁰, or 1 followed by 100 zeros — until the lights go out. But when the end does come, the typing monkeys will have made no more progress than their counterparts at the Paignton Zoo, according to Dr. Woodcock.
[...] The new paper has been mocked online because the authors purportedly fail to grapple with infinity. Even the paper’s title, “A numerical evaluation of the Finite Monkeys Theorem,” seems to be a mathematical bait-and-switch. Isn’t infinity a basic condition of the infinite monkey theorem?
It shouldn’t be, Dr. Woodcock seems to be saying. “The study we did was wholly a finite calculation on a finite problem,” he wrote in an email. “The main point made was just how constrained our universe’s resources are. Mathematicians can enjoy the luxury of infinity as a concept, but if we are to draw meaning from infinite-limit results, we need to know if they have any relevance in our finite universe.”
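The finite calculation Dr. Woodcock describes can be illustrated with a rough back-of-the-envelope sketch. Everything below is an assumption for demonstration (a 30-key keyboard, 200,000 monkeys, one keystroke per second, approximate text lengths), not the paper's actual parameters; the point is only that the expected number of random attempts, 30 raised to the length of the text, dwarfs the keystrokes available even over a googol years:

```python
import math

# Illustrative assumptions only -- not the figures from Woodcock's paper.
KEYS = 30                  # assumed keyboard size
MONKEYS = 200_000          # assumed monkey population
KEYS_PER_SECOND = 1        # assumed typing speed
SECONDS_PER_YEAR = 3.156e7
UNIVERSE_YEARS = 1e100     # the "googol years" horizon from the article

def log10_expected_tries(length):
    """log10 of the expected number of random character sequences
    before one matches a fixed text of the given length."""
    return length * math.log10(KEYS)

def log10_keystrokes_available():
    """log10 of total keystrokes all monkeys produce before the lights go out."""
    return math.log10(MONKEYS * KEYS_PER_SECOND * SECONDS_PER_YEAR * UNIVERSE_YEARS)

# Hypothetical text lengths, in characters, for illustration.
for title, length in [("a 20-character phrase", 20),
                      ("'Curious George' (rough guess)", 10_000),
                      ("complete Shakespeare (rough guess)", 5_000_000)]:
    shortfall = log10_expected_tries(length) - log10_keystrokes_available()
    print(f"{title}: ~10^{log10_expected_tries(length):.0f} tries needed, "
          f"shortfall of ~10^{shortfall:.0f} keystrokes")
```

With these toy numbers the monkeys command only about 10^113 keystrokes in total, so a short phrase is reachable, but even a children's book is short by thousands of orders of magnitude, which is the paper's core observation.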
[...] The new paper offers a subtle comment on the seemingly unbridled optimism of some proponents of artificial intelligence. Dr. Woodcock and Mr. Falletta note, without truly elaborating, that the monkey problem could be “very pertinent” to today’s debates about artificial intelligence.
For starters, just as the typing monkeys will never write “Twelfth Night” without superhuman editors looking over their shoulders, so increasingly powerful artificial intelligences will require increasingly intensive human input and oversight. “If you live in the real world, you have to do real-world limitation,” said Mr. Anderson, who conducted the 2011 monkey experiment.
There is no free lunch, so to speak, said Eric Werner, a research scientist who runs the Oxford Advanced Research Foundation and has studied various forms of complexity. In a 1994 paper about ants, Dr. Werner laid out a guiding principle that, in his view, applies equally well to typing monkeys and today’s large language models: “Complex structures can only be generated by more complex structures.” Lacking constant curation, the result will be a procession of incoherent letters or what has come to be known as “A.I. slop.”
A monkey will never understand Hamlet’s angst or Falstaff’s bawdy humor. But the limits of A.I. cognition are less clear. “The big question in the industry is when or if A.I. will understand what it is writing,” Mr. Anderson said. “Once that happens, will A.I. be able to surpass Shakespeare in artistic merit and create something as unique as Shakespeare created?”
And when that day comes, “Do we become the monkeys to the A.I.?” (MORE - missing details)