How warnings of AI doom gave way to primal fear of primates posting

#1
C C
Welcoming our new robot overlords - How warnings of AI doom gave way to primal fear of primates posting
https://www.thenewatlantis.com/publicati...-overlords

EXCERPTS: Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us. We would be lucky to be domesticated as pets kept around for the amusement of superior entities, who could kill us all as easily as we exterminate pests.

Today we fear a different technological threat, one that centers not on machines but on other humans. We see ourselves as imperiled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.

Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?

[...] Since the 2016 presidential election, fears of machine dystopia have seemed far less of a preoccupation. Instead, attention has shifted to online radicalization, misinformation, and harassment. This may seem like a distinction without a difference: two ways of talking about the same thing. After all, many tech critics ultimately place the blame for these online dysfunctions on software that encourages toxic behavior and on companies’ lax moderation policies. Perhaps fear of machine revolt has just morphed into generic fear of out-of-control algorithms that, among other things, fuel hatred, fear, and suspicion online. There is some truth to this, but it also misses important differences.

While online behavior is certainly shaped by platform mechanisms, the fear today is less of the mechanisms themselves than of whom they’re enticing. Prior emphasis on the machine threat warned of the unpredictability of automated behavior and the need for humans to develop policies to control it. Today’s emphasis on the social media terror inverts this, warning of the danger posed by unchecked digital mobs, who must be controlled. The risk comes not from the machines but from ourselves: our vulnerability to deception and manipulation, our need to band together with others to hunt down and accost our adversaries online, our tendency to incite and be incited by violent rhetoric to act out in the physical world, and our collective habit of spiraling down into correlated webs of delusion, insanity, and hatred.

While this picture is amenable in theory to fears of machine malevolence, it contains no real mechanical equivalent to the central role played by the runaway machine of old. In fact, the roles of humans and machines have switched: The machines must now restrain the dangerous humans...

[...] Automated moderation of content was never without criticism. Outsiders from across the political spectrum argued that it was opaque and unaccountable, and that it provided at best a fig leaf of liberal proceduralism. It was an attempt to pass off a robotic version of “rule by law” as “rule of law.” But since 2016 the criticism has been drowned out by even louder demands to crack down harder on online misinformation and extremism.

These demands reached a crescendo between early 2020 and early 2021 — between the start of the pandemic and the Capitol attack...

[...] The growing intellectual consensus is that a vulnerable public must be protected from having their minds hijacked by dangerous online memes. An ugly and messy struggle for control over online communication looms. Fears of machine revolt have faded, but the very machines that were once seen as future tyrants — automated systems — must now save humans from themselves.

[...] Consider how social networks have systematized mass imitation. Geoff Shullenberger, in a July 2020 Tablet essay on the dynamics of online mobs, shows how social networks remove an underlying constraint on older forms of collective aggression. The hardest part of getting a group to be aggressive is usually the first act of aggression, not because of the cowardly unwillingness to throw the first stone but rather because throwing the first stone is an act without a pre-existing template. Nobody knows in advance exactly what kind of act may become contagious, and at what time.

But social networks provide a template for throwing the first stone: the public-shaming post. With a template for this first act, contagious aggression can piggyback off of the natural human tendency to emulate pre-existing behaviors and attitudes. Not only do social networks make it easier to both generate and replicate mass aggression; they also provide viral fame as a reward for the first person bold enough to throw the first stone. “It’s unsurprising, then, that some users are trying to make a name for themselves on the basis of first-stone throwing,” writes Shullenberger.
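Shullenberger’s point, that the binding constraint on collective aggression is the first act and that a ready-made template removes it, maps neatly onto Granovetter’s classic threshold model of collective behavior. The toy sketch below in Python is my illustration, not anything from the article: each user joins a pile-on only after enough others already have, and a single zero-threshold “first stone” flips an otherwise inert crowd into a full cascade.

    # Granovetter-style threshold cascade: a toy model of the "first stone"
    # dynamic, not taken from Shullenberger or the article.
    def cascade_size(thresholds: list[int]) -> int:
        """Return how many users ultimately join a pile-on, where each user's
        threshold is the number of prior participants they must see before
        they are willing to act themselves."""
        participants = 0
        while True:
            joiners = sum(1 for t in thresholds if t <= participants)
            if joiners == participants:  # fixed point: no one else will join
                return participants
            participants = joiners

    # A crowd of 99 users, each needing one more prior participant than the last:
    crowd = list(range(1, 100))
    print(cascade_size(crowd))         # 0   -- nobody is willing to go first
    print(cascade_size([0] + crowd))   # 100 -- one "first stone" tips everyone

Nothing about the crowd changes between the two runs except the presence of a single actor whose cost of going first is zero, which is roughly what a ready-made public-shaming template provides.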

Now, mass imitation online may occur even without anyone really knowing who threw the first stone; perhaps nobody knowingly did. This is one of the ways in which known and hidden desires combine online to produce terrifying mass behaviors that we cannot clearly say anyone really intended. This is because even when people act out of sincere belief in a particular cause (say, exposing a racially insensitive op-ed), the template for expressing and imitating that belief online, public shaming for instance, may come to dictate the behavior and so escape our control.

[...] The premodern world was fascinated with “non-human agents” that had the ability to induce or impose meanings independent of our knowledge and control, and to bring about outcomes in the physical world. Premoderns felt themselves to be fundamentally “porous” in nature, unable to prevent the vulnerable self from being impinged upon by spirits and demonic forces. The vulnerable self required refuge within ordered societies that used common rituals and folkways to keep the bad magic at bay. Heresy was dangerous because even a single heretic could throw the safety of the community into doubt by diluting the purity that the rituals painstakingly maintained.

In contrast, the idealized modern human is — or rather was — autonomous, rational, and secured against harmful outside influences. Key to the emergence of the modern age was the rise of what Taylor calls the “buffered self.” This new self, Sacasas explains, “no longer perceives and believes in sources of meaning outside of the human mind” and is not disturbed by “powers beyond its control.” If the porous self is beset on all sides by harmful external forces, prone to corruption and manipulation from the unseen, and in perpetual need of community protection, the rational and autonomous modern self is confidently able to think and act alone due to its inviolability and stability. Meaning for moderns is created only by individual human minds — the only minds that count. The mind is sealed off from the world, autonomous and self-driven, and unconcerned with matters beyond its control. It is capable of tolerating heresy, because heresy is at most an intellectual error, not a potentially catastrophic event that compromises the entire community.

This image of the modern self already began to crumble during the Cold War with the emergence of the forces of technological automation that Norbert Wiener discussed. But in the twenty-first century it has broken entirely. Sacasas argues that in our digital era “certain features of the self in an enchanted world are now re-emerging.” “Digital technologies influence us and exert causal power over our affairs,” and we find ourselves suddenly aware once again of how we operate “within a field of inscrutable forces over which we have little to no control.” These forces are not spirits or demons but rather “bots and opaque algorithmic processes, which alternately and capriciously curse or bless us.” (MORE - details)