Article: The toxic ideology of longtermism (allocation of philanthropic money & resources)

#1
C C
https://www.radicalphilosophy.com/commen...ongtermism

EXCERPTS: The intellectual movement that calls itself longtermism is an outgrowth of Effective Altruism (EA), a utilitarianism-inspired philanthropic programme founded just over a decade ago by Oxford philosophers Toby Ord and William MacAskill. EA, which claims to guide charitable giving to do the ‘most good’ per expenditure of time or money, originally focused on mitigating the effects of poverty in the global South and the treatment of animals in factory farms. This initially modestly funded, Oxford-based enterprise soon had satellites in the UK, US and elsewhere in the world, several of which became multi-million-dollar organisations, while the amount of money directed by EA-affiliated groups swelled to over four hundred million dollars annually, with pledges in the tens of billions.

During this period, Ord and MacAskill started using the term ‘longtermism’ to mark a view championed by members of a conspicuous subset of effective altruists, many affiliated with Oxford University’s Future of Humanity Institute. The view is that humanity is at a crossroads at which we may either self-destruct or realise a glorious future, and that we should prioritise responding to threats to the continued existence of human civilisation.

The ‘existential risks’ – to use the term introduced by Oxford philosopher and Future of Humanity Institute founder Nick Bostrom – that longtermists rank as most probable are AI unaligned with liberal values and deadly engineered pathogens. They urge us to combat these risks to make it likelier that humans (or our digitally intelligent descendants) will live on for millions, billions or even trillions of years, colonising exoplanets and surviving until long after the sun has vaporised the earth.

[...] This critique of longtermism is correct as far as it goes. It is also desperately incomplete. One thing it fails to capture is that an uncritical attitude toward existing political and economic institutions is part of longtermism’s philosophical DNA.

The point of departure for longtermism is EA, and, like other utilitarianism-inspired doctrines, EA veers towards forms of welfarism that are unthreatening to the status quo. This posture increasingly exposed EA to corruption during its growth into a broad-scale philanthropic movement. EA shares the tendency of large charitable foundations to undemocratically organise entire realms of public engagement, diverting money and other resources from movements for liberating social change. And it owes its ability to secure the funding requisite for this role to its affinity with political and economic systems generative of the suffering it claims to address.

Longtermism’s sins are different and more ominous, but there are points of convergence. Longtermism deflects from EA’s wonted attention to current human and animal suffering. It defends in its place a concern for the wellbeing of the potentially trillions of humans who will live in the long-term future, and, taking the sheer number of prospective people to drown out current moral problems, exhorts us to regard threats to humanity’s continuation as a moral priority, if not the moral priority. This makes longtermists shockingly dismissive of ‘non-existential’ hazards that may result in the suffering and death of huge numbers in the short term if, as they see it, there is a reasonable probability that the hazards are consistent with the possibility of a far greater number of humans going on to flourish in the long term.

When longtermists turn to existential hazards, they discuss wholly natural threats (such as large asteroids hurtling toward the earth, super-volcanic eruptions and stellar explosions) while focusing on human-caused risks, which they regard as more likely to rise to extinction-level. Alongside value-divergent AI and human-produced pathogens, they consider climate change, other forms of environmental degradation, and all-out nuclear war, and they set out to calculate the probability that these different anthropogenic threats will instigate existential disasters. This accent on existential dangers is theoretically unjustified and morally damaging, but even stripped of it, longtermism is a poor guide to solicitude for prospective humans.

Longtermism calls on us to safeguard humanity’s future in a manner that both diverts attention from current misery and leaves harmful socioeconomic structures critically unexamined. As a movement, it has enjoyed stunning financial success and clout. But its success is not due to the quality of its conception of morality, which builds questionably on EA’s. Rather, it is due to longtermism’s compatibility with the very socioeconomic arrangements that have led us to the brink of the kinds of catastrophes it claims to be staving off... (MORE - missing details)

