What Frankenstein's creature can really tell us about AI

#1
C C Offline
https://aeon.co/essays/what-frankenstein...s-about-ai

EXCERPT: . . . The Google engineer François Chollet argued in his article ‘The Impossibility of Intelligence Explosion’ that to understand what artificial intelligence is, we need to grasp that all intelligence is ‘fundamentally situational’. An individual human’s intelligence manifests in solving the problems associated with processing her experiences of being human. Likewise, a particular computer algorithm’s intelligence concerns solving the problems associated with applying that algorithm to analyse the data fed into it. Intelligence – whether construed as natural or artificial – is adaptive to a situation. [--> A reply to François Chollet on intelligence explosion]

Chollet reminds us, too, that people are a product of their own tools. Akin to how early hominins used fire or etched seashells, modern humans have used pens, printing presses, books and computers to process data and solve problems related to their particular circumstances. Running parallel to the insights of anthropologists such as Agustín Fuentes at the University of Notre Dame in Indiana and Marc Kissel at Appalachian State University in North Carolina, Chollet sums up the human condition: ‘Most of our intelligence is not in our brain, it is externalised in our civilisation.’

Science and technology are two defining artefacts of modern human civilisation. The fact that humans now use them to make intelligences for further problem solving is simply one more iteration of what Fuentes in The Creative Spark (2017) called humanity’s process of creative interface with its environment. From this long view of humanity, anthropology shows that civilisation itself is a kind of AI: a collective set of tools developed over time and through cultures, equipping people to learn from the past for the benefit of life in myriad forms, present and future.

[...] Believers in the singularity often cite the wisdom of the late English physicist Stephen Hawking. [...] In a speech in November 2017, Hawking stated: ‘AI could be the worst event in the history of our civilisation.’ Not so fast. If you listen to the whole of Hawking’s keynote at the 2017 Web Summit in Lisbon, you’ll hear him stress – like a good logician – the conditional quality of the verb ‘could’. AI could be good, bad or neutral for humanity.

The consequences of AI are fundamentally unknowable beforehand. ‘We just don’t know,’ Hawking vocalised through a text-to-speech device triggered by facial twitches, ‘we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.’ Writing online soon thereafter, Chollet counselled that the prediction of an imminent ‘intelligence explosion’ was overblown, and that any growth of AI would continue to be linear, not exponential, in pace.

Hawking did not reference Frankenstein, but his speech resonated with the book’s philosophical themes. Like all great literature, Frankenstein resists reduction to simplistic moralism, such as the danger of playing God through science. Shelley’s novel rather functions as a kind of test of the reader’s cognitive and emotional intelligence. The reward of reading it is putting the pieces together to see the whole.

To crack the ethical puzzle of Frankenstein, it helps to recall its theological background. The use of the word ‘super-intelligence’ dates to late-17th-century Quaker reflections on the nature of God. It featured in British theological debates during Shelley’s youth. Shelley described the creature as ‘superhuman’ in speed. This speed was not simply physical. His cognitive and affective development after his assembly, animation and abandonment by Frankenstein was far more rapid than that of humans. [...] The creature is a superintelligence. [...]

[...] Shelley leads them [readers] to empathise with artificial intelligence. The creature’s process of artificial formation begins with his animation without a mother. His life enacts the educational theories of John Locke (and of Shelley’s father William Godwin), which Shelley read compulsively in the 1810s. [...] Although the creature lacks a mother, he has the same contextual and interactive process of development as other children. As with Frankenstein’s creature, AI is not born, but it is still made by circumstances.

[...] the creature [...] meets six criteria for deep learning: he learns to recognise both (1) faces and (2) speech patterns in the De Lacey family; (3) he translates languages: at least Felix’s French and perhaps Safie’s Arabic, if not also Milton’s English, Goethe’s German, and Plutarch’s Greek (or Latin); (4) he reads handwriting in his father’s laboratory journal; (5) he plays strategic games [...] (6) he controls robotic prostheses, given that his body – assembled from parts of human and other animal corpses – is a kind of humanoid construction of chemistry, medicine and electricity.

Since the real world is the world of trial and error, AIs – much like the creature – might be capable of learning deeply but not well. AIs both learn and mislearn through storytelling. If its programming is faulty, a computer will not process data correctly. If its data is bad, it will produce a false analysis.
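The point about faulty programming and bad data is the classic ‘garbage in, garbage out’ problem. A minimal sketch (with hypothetical numbers, not anything from the essay) of how an analysis routine that is itself correct still yields a false result when fed corrupted input:

```python
# "Garbage in, garbage out": the analysis code below is correct,
# but corrupted input data still produces a false conclusion.

def mean(xs):
    """A perfectly sound averaging procedure."""
    return sum(xs) / len(xs)

# Hypothetical sensor readings, expected to cluster around 20.0
clean = [19.8, 20.1, 20.0, 19.9, 20.2]

# The same readings after a unit-conversion bug corrupts one entry
bad = [19.8, 20.1, 2000.0, 19.9, 20.2]

print(mean(clean))  # ~20.0 -- a sound analysis
print(mean(bad))    # ~416.0 -- a false analysis from bad data
```

The analysis never ‘failed’; it faithfully processed what it was given, which is exactly why deep learning on flawed data can be learning deeply but not well.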

[...] Theorists of AI return to Frankenstein as Shelley and Verney returned to Rome, to pay homage to the artifice of human intelligence [...] Like the creature, AI can be a monster, or the victim of them. [...] the future of artificial intelligence will be conceived from what we have learned from our cultural past....

MORE: https://aeon.co/essays/what-frankenstein...s-about-ai