Post-empirical science is an oxymoron, and it is dangerous

#1
C C
https://aeon.co/essays/post-empirical-sc...-dangerous

EXCERPT (Jim Baggott): There is no agreed criterion to distinguish science from pseudoscience, or just plain ordinary bullshit, opening the door to all manner of metaphysics masquerading as science. This is ‘post-empirical’ science, where truth no longer matters, and it is potentially very dangerous.

[...] Despite appearances, science offers no certainty. Decades of progress in the philosophy of science have led us to accept that our prevailing scientific understanding is a limited-time offer, valid only until a new observation or experiment proves that it’s not. It turns out to be impossible even to formulate a scientific theory without metaphysics, without first assuming some things we can’t actually prove, such as the existence of an objective reality and the invisible entities we believe to exist in it. This is a bit awkward because it’s difficult, if not impossible, to gather empirical facts without first having some theoretical understanding of what we think we’re doing...

Yet history tells us quite unequivocally that science works. It progresses. We know (and we think we understand) more about the nature of the physical world than we did yesterday; we know more than we did a decade, or a century, or a millennium ago. The progress of science is the reason we have smartphones, when the philosophers of Ancient Greece did not.

Successful theories are essential to this progress. ... this brings us to one of the most challenging problems emerging from the philosophy of science: its strict definition. When is something ‘scientific’, and when is it not? In the light of the metaphor above, how do we know when science is being ‘done properly’? This is the demarcation problem, and it has an illustrious history. (For a more recent discussion, see Massimo Pigliucci’s essay ‘Must Science Be Testable?’ on Aeon).

The philosopher Karl Popper argued that what distinguishes a scientific theory from pseudoscience and pure metaphysics is the possibility that it might be falsified on exposure to empirical data. In other words, a theory is scientific if it has the potential to be proved wrong.

[...] But it was never going to be as simple as this. Applying a theory typically requires that – on paper, at least – we simplify the problem by imagining that the system we’re interested in can be isolated, such that we can ignore interference from the rest of the Universe. In his book Time Reborn (2013), the theoretical physicist Lee Smolin calls this ‘doing physics in a box’, and it involves making one or more so-called auxiliary assumptions. Consequently, when predictions are falsified by the empirical evidence, it’s never clear why. It might be that the theory is false, but it could simply be that one of the auxiliary assumptions is invalid. The evidence can’t tell us which.

[...] Such problems were judged by philosophers of science to be insurmountable, and Popper’s falsifiability criterion was abandoned (though, curiously, it still lives on in the minds of many practising scientists). But rather than seek an alternative, in 1983 the philosopher Larry Laudan declared that the demarcation problem is actually intractable, and must therefore be a pseudo-problem. He argued that the real distinction is between knowledge that is reliable or unreliable, irrespective of its provenance, and claimed that terms such as ‘pseudoscience’ and ‘unscientific’ have no real meaning.

But, for me at least, there has to be a difference between science and pseudoscience; between science and pure metaphysics, or just plain ordinary bullshit. So, if we can’t make use of falsifiability, what do we use instead? I don’t think we have any real alternative but to adopt what I might call the empirical criterion. Demarcation is not some kind of binary yes-or-no, right-or-wrong, black-or-white judgment. We have to admit shades of grey. Popper himself was ready to accept this (the italics are mine):

. . . the criterion of demarcation cannot be an absolutely sharp one but will itself have degrees. There will be well-testable theories, hardly testable theories, and non-testable theories. Those which are non-testable are of no interest to empirical scientists. They may be described as metaphysical.

Here, ‘testability’ implies only that a theory either makes contact, or holds some promise of making contact, with empirical evidence. It makes no presumptions about what we might do in light of the evidence. If the evidence verifies the theory, that’s great – we celebrate and start looking for another test. If the evidence fails to support the theory, then we might ponder for a while or tinker with the auxiliary assumptions. Either way, there’s a tension between the metaphysical content of the theory and the empirical data – a tension between the ideas and the facts – which prevents the metaphysics from getting completely out of hand. In this way, the metaphysics is tamed or ‘naturalised’, and we have something to work with. This is science.

Now, this might seem straightforward, but we’ve reached a rather interesting period in the history of foundational physics. Today, we’re blessed with two extraordinary theories. The first is quantum mechanics. ... The second is Einstein’s general theory of relativity ... These two standard models explain everything we can see in the Universe. Yet they are deeply unsatisfying...

[...] These are very, very stubborn problems, and our best theories are full of explanatory holes. Bringing them together in a putative theory of everything has proved to be astonishingly difficult. Despite much effort over the past 50 years, there is no real consensus on how this might be done. And, to make matters considerably worse, we’ve run out of evidence. The theorists have been cunning and inventive. They have plenty of metaphysical ideas but there are no empirical signposts telling them which path they should take. They are ideas-rich, but data-poor.

They’re faced with a choice. Do they pull up short, draw back and acknowledge that, without even the promise of empirical data to test their ideas, there is little or nothing more that can be done in the name of science? Do they throw their arms in the air in exasperation and accept that there might be things that science just can’t explain right now?

Or do they plough on regardless, publishing paper after paper filled with abstract mathematics that they can interpret to be explanatory of the physics, in the absence of data, for example in terms of a multiverse? Do they not only embrace the metaphysics but also allow their theories to be completely overwhelmed by it? Do they pretend that they can think their way to real physics, ignoring Einstein’s caution: "Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations – in short, by metaphysics."

I think you know the answer. But to argue that this is nevertheless still science requires some considerable mental gymnastics. Some just double down. The theoretical physicist David Deutsch has declared that the multiverse is as real as the dinosaurs once were, and we should just ‘get over it’. Martin Rees, Britain’s Astronomer Royal, declares that the multiverse is not metaphysics but exciting science, which ‘may be true’, and on which he’d bet his dog’s life. Others seek to shift or undermine any notion of a demarcation criterion by wresting control of the narrative. One way to do this is to call out all the problems with Popper’s falsifiability that were acknowledged already many years ago by philosophers of science. Doing this allows them to make their own rules, while steering well clear of the real issue – the complete absence of even the promise of any tension between ideas and facts.

Sean Carroll, a vocal advocate for the Many-Worlds interpretation, prefers abduction, or what he calls ‘inference to the best explanation’, which leaves us with theories that are merely ‘parsimonious’, a matter of judgment, and ‘still might reasonably be true’. But whose judgment? In the absence of facts, what constitutes ‘the best explanation’?

Carroll seeks to dress his notion of inference in the cloth of respectability provided by something called Bayesian probability theory, happily overlooking its entirely subjective nature. It’s a short step from here to the theorist-turned-philosopher Richard Dawid’s efforts to justify the string theory programme in terms of ‘theoretically confirmed theory’ and ‘non-empirical theory assessment’. The ‘best explanation’ is then based on a choice between purely metaphysical constructs, without reference to empirical evidence, based on the application of a probability theory that can be readily engineered to suit personal prejudices.
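[A minimal numerical sketch of the prior-dependence Baggott is gesturing at – this is an illustration added here, not something from the essay, and all the numbers in it are invented. The point it shows: when the available evidence does not discriminate between a theory and its rivals, Bayes' theorem simply hands back whatever degree of belief was fed in.]

    # Hypothetical illustration of Bayesian updating; all values are invented.
    def posterior(prior_T, p_E_given_T, p_E_given_notT):
        # Bayes' theorem: P(T|E) = P(E|T)P(T) / [P(E|T)P(T) + P(E|~T)(1 - P(T))]
        numerator = p_E_given_T * prior_T
        return numerator / (numerator + p_E_given_notT * (1.0 - prior_T))

    # If the "evidence" is equally likely whether or not the theory is true
    # (i.e. it does not discriminate at all), the posterior equals the prior.
    for prior in (0.1, 0.5, 0.9):
        print(prior, posterior(prior, p_E_given_T=0.5, p_E_given_notT=0.5))
    # Prints 0.1, 0.5 and 0.9: the "degree of confirmation" is whatever
    # prior conviction you started with.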

Welcome to the oxymoron that is post-empirical science... (MORE - details)

