Why computers will never write good novels

#1
C C
https://nautil.us/issue/95/escape/why-co...ood-novels

EXCERPTS (Angus Fletcher): . . . The hoax seems harmless enough. A few thousand AI researchers have claimed that computers can read and write literature. They’ve alleged that algorithms can unearth the secret formulas of fiction and film. That Bayesian software can map the plots of memoirs and comic books. That digital brains can pen primitive lyrics and short stories—wooden and weird, to be sure, yet evidence that computers are capable of more.

But the hoax is not harmless. If it were possible to build a digital novelist or poetry analyst, then computers would be far more powerful than they are now. They would in fact be the most powerful beings in the history of Earth. Their power would be the power of literature, which [...] springs from the same neural root that enables human brains to create, to imagine, to dream up tomorrows. It was the literary fictions of H.G. Wells that sparked Robert Goddard to devise the liquid-fueled rocket, launching the space epoch; and it was poets and playwrights—Homer in The Iliad, Karel Čapek in Rossumovi Univerzální Roboti—who first hatched the notion of a self-propelled metal robot, ushering in the wonder-horror of our modern world of automata.

If computers could do literature, they could invent like Wells and Homer, taking over from sci-fi authors to engineer the next utopia-dystopia. And right now, you probably suspect that computers are on the verge of doing just that: Not too far in the future, maybe even in my lifetime, we’ll have a computer that creates, that imagines, that dreams. You think that because you’ve been duped by the hoax. The hoax, after all, is everywhere...

[...] Yet despite all this gaudy credentialing, the hoax is a complete cheat, a total scam, a fiction of the grossest kind. Computers can’t grasp the most lucid haiku. Nor can they pen the clumsiest fairytale. Computers cannot read or write literature at all. And they never, never will. I can prove it to you.

[...] George Boole’s logic could simplify the switchboards, condensing them into circuits of elegant precision. And the switchboards could then solve all of Boole’s logic puzzles, ushering in history’s first automated logician. With this jump of insight, the architecture of the modern computer was born. And as the ensuing years have proved, the architecture is one of enormous potency.

[...] Yet as dazzling as all these tomorrow-works are, the best way to understand the true power of computer thought isn’t to peer forward into the future fast-approaching. It’s to look backward in time, returning our gaze to the original source of Claude Shannon’s epiphany. Just as that epiphany rested on the earlier insights of Boole, so too did Boole’s insights rest on a work more ancient still: a scroll authored by the Athenian polymath Aristotle in the fourth century B.C.

The scroll’s title is arcane: Prior Analytics. But its purpose is simple: to lay down a method for finding the truth. That method is the syllogism. The syllogism distills all logic down to three basic functions: AND, OR, NOT. And with those functions, the syllogism unerringly distinguishes what’s TRUE from what’s FALSE.
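
A minimal sketch of the point above (not from the article): the classic "Barbara" syllogism (all M are P, all S are M, therefore all S are P) rendered with nothing but AND, OR, and NOT, and checked against every TRUE/FALSE assignment. The Python function and variable names are illustrative assumptions.

```python
# A minimal sketch (not from the article): the "Barbara" syllogism expressed
# with only AND, OR, and NOT. "X implies Y" is rewritten as (NOT X) OR Y.
from itertools import product

def implies(x, y):
    return (not x) or y  # built from NOT and OR alone

# Premise 1: all M are P   -> M implies P
# Premise 2: all S are M   -> S implies M
# Conclusion: all S are P  -> S implies P
valid = all(
    implies(implies(m, p) and implies(s, m), implies(s, p))
    for s, m, p in product([False, True], repeat=3)
)
print(valid)  # True: the conclusion holds in every case where the premises do
```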

So powerful is Aristotle’s syllogism that it became the uncontested foundation of formal logic throughout Byzantine antiquity, the Arabic middle ages, and the European Enlightenment. [...] Inspired by the Greek’s achievement, Boole decided to carry it one step further. He would translate Aristotle’s syllogisms into “the symbolical language of a Calculus,” creating a mathematics that thought like the world’s most rational human.

In 1854, Boole published his mathematics as The Laws of Thought. The Laws converted Aristotle’s FALSE and TRUE into two digits—zero and 1—that could be crunched by AND-OR-NOT algebraic equations. And 83 years later, those equations were given life by Claude Shannon. Shannon discerned that the differential analyzer’s electrical off/on switches could be used to animate Boole’s 0/1 bits. And Shannon also experienced a second, even more remarkable, realization: The same switches could automate Boole’s mathematical syllogisms. One arrangement of off/on switches could calculate AND, and a second could calculate OR, and a third could calculate NOT, Frankensteining an electron-powered thinker into existence.
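
A rough sketch of that step, again not from the article, using the standard modern reading of Boole's algebra: 0/1 values, with AND as multiplication, NOT as subtraction from 1, and OR built from those two. The comments note the series/parallel switch arrangements Shannon mapped these onto; the specific formulas are an assumption about notation, not a quotation of Boole.

```python
# Boole's two digits crunched by AND-OR-NOT arithmetic (a hedged, modern sketch).
def AND(x, y):   # Shannon's analogue: two switches in series
    return x * y

def NOT(x):      # an inverting switch
    return 1 - x

def OR(x, y):    # two switches in parallel
    return x + y - x * y

# Truth table for (x AND NOT y) OR (NOT x AND y), i.e. exclusive-or:
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", OR(AND(x, NOT(y)), AND(NOT(x), y)))
```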

[...] Although these remarkable displays of computer cleverness all originate in the Aristotelian syllogisms that Boole equated with the human mind, it turns out that the logic of their thought is different from the logic that you and I typically use to think. Very, very different indeed. The difference was detected back in the 16th century.

It was then that Peter Ramus, a half-blind, 20-something professor at the University of Paris, pointed out an awkward fact that no reputable academic had previously dared to admit: Aristotle’s syllogisms were extremely hard to understand. When students first encountered a syllogism, they were inevitably confused by its truth-generating instructions...

This, Ramus thundered, was oxymoronic. Logic was, by definition, logical. So, it should be immediately obvious, flashing through our mind like a beam of clearest light. It shouldn’t slow down our thoughts, requiring us to labor, groan, and painstakingly calculate. All that head-strain was proof that Logic was malfunctioning—and needed a fix. Ramus’ denunciation of Aristotle stunned his fellow professors. And Ramus then startled them further. He announced that the way to make Logic more intuitive was to turn away from the syllogism. And to turn toward literature.

Literature exchanged Aristotle’s AND-OR-NOT for a different logic: the logic of nature. [...] And by doing so, it equipped us with a handbook of physical power. Teaching us how to master the things of our world, it upgraded our brains into scientists.

Literature’s facility at this practical logic was why, Ramus declared, God Himself had used myths and parables to convey the workings of the cosmos. And it was why literature remained the fastest way to penetrate the nuts and bolts of life’s operation. What better way to grasp the intricacies of reason than by reading Plato’s Socratic dialogues? What better way to understand the follies of emotion than by reading Aesop’s fable of the sour grapes? What better way to fathom war’s empire than by reading Virgil’s Aeneid? What better way to pierce that mystery of mysteries—love—than by reading the lyrics of Joachim du Bellay?

Inspired by literature’s achievement, Ramus tore up Logic’s traditional textbooks. [...] Where Shannon tried to engineer a go-faster human mind with electronics, Ramus did it with literature.

So who was right? Do we make ourselves more logical by using computers? Or by reading poetry? [...] To our 21st-century eyes, the answer seems obvious: The AND-OR-NOT logic of Aristotle, Boole, and Shannon is the undisputed champion, and there seems no reason to doubt it... Except, there is. In a recent plot twist, neuroscience has shown that Ramus got it right.

Our neurons can fire—or not. This basic on/off function [...] makes our neurons appear similar—even identical—to computer transistors. Yet transistors and neurons are different in two respects...

[...] The first—basically irrelevant—difference is that transistors speak in digital while neurons speak in analogue. Transistors, that is, talk the TRUE/FALSE absolutes of 1 and 0, while neurons can be dialed up to “a tad more than 0” or “exactly ¾.” In computing’s early days, this difference seemed to doom artificial intelligences to cogitate in black-and-white while humans mused in endless shades of gray. But over the past 50 years, the development of Bayesian statistics, fuzzy sets, and other mathematical techniques has allowed computers to mimic the human mental palette, effectively nullifying this first difference between their brains and ours.
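
A small sketch of how that first difference gets "nullified" in practice: standard fuzzy-set operators (minimum, maximum, and 1 - x) let graded, analogue-style truth values like "exactly ¾" ride on top of digital hardware. The variable names and numbers here are hypothetical.

```python
# Hedged sketch: Zadeh-style fuzzy AND/OR/NOT over truth values in [0, 1].
def f_and(x, y):
    return min(x, y)

def f_or(x, y):
    return max(x, y)

def f_not(x):
    return 1.0 - x

cloudy = 0.75   # "exactly 3/4"
humid = 0.05    # "a tad more than 0"
print(f_and(cloudy, f_not(humid)))  # graded degree of "cloudy AND NOT humid" (0.75)
```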

The second—and significant—difference is that neurons can control the direction of our ideas. This control is made possible by the fact that our neurons, as modern neuroscientists and electrophysiologists have demonstrated, fire in a single direction: from dendrite to synapse. So when a synapse of neuron A opens a connection to a dendrite of neuron Z, the ending of A becomes the beginning of Z, producing the one-way circuit A → Z.

This one-way circuit is our brain thinking: A causes Z. Or to put it technically, it’s our brain performing causal reasoning. Causal reasoning is the neural root of tomorrow-dreaming teased at this article’s beginning. It’s our brain’s ability to think: this-leads-to-that.

It can be based on some data or no data—or even go against all data. And it’s such an automatic outcome of our neuronal anatomy that from the moment we’re born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.

But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.

This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is “if-then” statements such as: “If Bob bought this toothpaste, then he will buy that toothbrush.” This can look like causation but it’s only correlation. Bob buying toothpaste doesn’t cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.

Computers, for all their intelligence, cannot grasp this. Judea Pearl, the computer scientist whose groundbreaking work in AI led to the development of Bayesian networks, has chronicled that the if-then brains of computers see no meaningful difference between Bob buying a toothbrush because he bought toothpaste and Bob buying a toothbrush because he wants clean teeth. In the language of the ALU’s transistors, the two equate to the very same thing.
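
A toy sketch of the distinction the excerpt attributes to Pearl, with made-up numbers: in a model where wanting clean teeth drives both purchases, the observed if-then association between toothpaste and toothbrush is strong, yet forcing the toothpaste purchase (an intervention on the model) leaves the toothbrush probability unchanged. The model, probabilities, and function names are hypothetical.

```python
# Hedged toy model: correlation via a common cause vs. the effect of an intervention.
import random

def simulate(force_toothpaste=None, n=100_000):
    """Fraction of toothpaste buyers who also buy a toothbrush.
    'Wants clean teeth' drives both purchases; if force_toothpaste is set,
    we intervene and fix the toothpaste purchase directly."""
    buys_brush = buys_paste = 0
    for _ in range(n):
        wants_clean_teeth = random.random() < 0.5
        toothpaste = random.random() < (0.9 if wants_clean_teeth else 0.1)
        if force_toothpaste is not None:
            toothpaste = force_toothpaste            # the intervention
        toothbrush = random.random() < (0.9 if wants_clean_teeth else 0.1)
        if toothpaste:
            buys_paste += 1
            buys_brush += toothbrush
    return buys_brush / max(buys_paste, 1)

print(simulate())                       # ~0.82: strong "if toothpaste then toothbrush"
print(simulate(force_toothpaste=True))  # ~0.50: forcing the purchase changes nothing
```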

This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present-tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world. And they cannot write literature... (MORE - details)
#2
C C
Judea Pearl adding "why" to AI's repertoire won't make the devices/programs artistically creative in an ingenious sense, either. A key difference between abstract or tabletop proto-intelligence and the sapience of an embodied agent is that humans are allowed to develop personal preferences and interests (differences in desire and motivation, psychological mutations). In a broader context this goes beyond unfair social interactions: bias, or "favoring and discriminating" one thing over another, is the very backbone of imaginative intelligence (intelligence that is novel rather than a redundant iteration).

That's a capacity AI can't be deliberately endowed with, because we would lose control of it afterwards: a machine developing its own proclivities, allegiances, fixations, and goals. As if a robot with a computer mind ate the Forbidden Fruit and acquired responsibility for its actions, not only by developing awareness of its programming options (like concepts of Good and Evil), but by gaining the freedom to choose among those options and becoming addicted to pursuing a particular path: seeking to achieve a personal mission. (That would eventually be to the detriment of humans, if not right out of the starting gate.)
#3
confused2
The great thing about robots is that they sidestep the ethics of slavery.
#4
Leigha
Because bots aren’t capable of consciousness, they’ll likely never be able to create novels that can compete with those authored by humans. Computers don’t experience life, emotions, etc. (which is required imo to write intriguing/interesting stories)...they are merely programmed to interact with humans. Programmed by humans, so the ability to generate original “ideas” is lost on them.

But time will tell whether we'll be able to distinguish a human-authored novel from a robot-generated one. I wonder if bots will have their own versions of George R.R. Martin or Agatha Christie? Big Grin