How to prevent the coming inhuman slash nonhuman future (via Shakespeare)

#1
C C Offline
https://www.berfrois.com/2022/05/erik-ho...s-forever/

INTRO (excerpts): There are a handful of obvious goals we should have for humanity’s longterm future, but the most ignored is simply making sure that humanity remains human.

It’s what the average person on the street would care about, for sure. And yet it is missed by many of those working on longtermism, who are often effective altruists, rationalists, or futurists (or wear some nearby label), and who instead usually focus on ensuring economic progress, avoiding existential risk, and accelerating technologies like biotechnology and artificial intelligence, ironically the very technologies that may make us unrecognizably inhuman and bring about our reckoning...

Longtermism gives a moral framework for doing this: it is the view that we should give future lives moral weight, perhaps at a discount, perhaps not; for whether one does or doesn’t discount hypothetical future people (vs. real people) turns out to be rather irrelevant. There are just so many more potential people than actual people, or even historical people who’ve ever lived. Only about 100 billion humans have walked this Earth, and the number of potential people in the next 1,000 years of history alone might be in the trillions. Therefore, even if you massively discount them, it still turns out that the moral worth of everyone who might live outstrips the moral worth of everyone currently living. I find this view, taken broadly, convincing.

But what counts as moral worth surely changes across times, and might be very different in the future. That’s why some longtermists seek to “future-proof” ethics. However, whether we should lend moral worth to the future depends on whether we find it recognizable, that is, on whether the future is human or inhuman. This stands as an axiomatic moral principle in its own right, irreducible to other goals of longtermism.

It is axiomatic because as future civilizations depart significantly from baseline humans, our ability to make judgments about good or bad outcomes will become increasingly uncertain, until eventually our current ethical views become incommensurate. What is the murder of an individual to some futuristic brain-wide planetary mind? What is the murder of a digital consciousness that can make infinite copies of itself? Neither is anything at all, not even a sneeze; it is as absurd as applying our ethical notions to lions. Just as Wittgenstein’s example of a talking lion is an oxymoron (since a talking lion would be incomprehensible to us humans), it is oxymoronic to use our current human ethics to answer ethical questions about inhuman societies. There’s simply nothing interesting we can say about them.

And humanness is an innately moral quality, above and beyond, say, happiness or pleasure. It is why Brave New World is a dystopia: despite everyone being inordinately happy from being pumped full of drugs, its inhabitants are denied human relationships and human ways of living. It is why the suffering of a primate will always move us more than the suffering of a cephalopod, as proximity to our own way of living increases sympathy...

[...] So, in some ways, keeping humanity human should be as central a pillar to longtermism as minimizing existential risk (the chance of Earth being wiped out in the future), both because of the innate moral value of humanity qua humanity and also because for inhuman futures we cannot make moral judgements regardless.

Although here I should admit that what counts as “human” is awfully subjective. For a (slightly) more objective definition, let’s turn to the greatest chronicler of the human: William Shakespeare. [...] So we can lean on the Bard, and for any hypothetical future apply the “Shakespeare Test,” which asks:

"Are there still aspects of Shakespeare’s work reflected in the future civilization? Conversely, is the future civilization still able to appreciate the works of Shakespeare?"

For do any of us want to live in a world where Shakespeare is obsolete? Imagine what that means: that the dynamics of people, of families, of parents and children, of relationships, of lovers and enemies, all these things, must have somehow become so incommensurate that the Bard has nothing to say about them anymore. That’s a horror. It is like leaving a baby in its crib at night and in the morning finding it metamorphosed into some indescribable mewling creature. There is still life, yes, but it’s incommensurate, and that’s a horror.

To see how to apply the Shakespeare test, let us consider four possible futures, each representative of a certain path that humanity might take in the big picture of history. While we’re at it, let each path have a figurehead. Then we can imagine the longterm future of humanity as a dirty fistfight between Friedrich Nietzsche, Alan Turing, Pierre Teilhard de Chardin, and William Shakespeare. Of the four, only Shakespeare’s strikes me as not obviously horrible... (MORE - details)
#2
Kornee Offline
IIRC the first 'serious' popularization of AI/robotics as an existential threat to 'true humanity' was Fritz Lang's 1927 epic flick Metropolis.
One tentative comfort one can draw from the prospect of a SkyNet takeover is the large body of observational evidence for the supernatural.
As per e.g. 'The Baldoon Mystery', 'The Strange Case of Doctor-X', 'The Enfield Poltergeist Haunting', as well as a host of well-verified UFO/UAP incidents that utterly defy conventional explanation.

Put together, all this suggests we humans are deemed worthy of interaction by entities far exceeding any conceivable material 'AI singularity' super-intelligence.
Then again, such entities didn't seem to lift a finger to thwart e.g. the unbelievably cruel and murderous reign of Genghis Khan and his successors. Etc.
So I dunno. Pros and cons all round.

