Is ESP real?
That's even less likely. The chances of the same blatant error being committed across hundreds of different kinds of studies would be astronomical. Of course we know the usual tactic of positing hidden cues and shit just because you don't want to accept the results of the study. Card-carrying skeptics are good at that.
(Dec 7, 2017 07:38 PM)Magical Realist Wrote: That's even less likely. The chances of the same blatant error being committed across hundreds of different kinds of studies would be astronomical. Of course we know the usual tactic of positing hidden cues and shit just because you don't want to accept the results of the study. Card-carrying skeptics are good at that.

It's not a matter of each study sharing the same error. It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies. The "impact would have to be similar across experiments and laboratories" because the file-drawer effect ensures the impact is similar.

Quote: It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies.

No... there are no methodological flaws. And the results have been replicated time and again. Read Jessica Utts's analysis of it. She is an expert in statistics.

"Research on psychic functioning, conducted over a two-decade period, is examined to determine whether or not the phenomenon has been scientifically established. A secondary question is whether or not it is useful for government purposes. The primary work examined in this report was government-sponsored research conducted at Stanford Research Institute, later known as SRI International, and at Science Applications International Corporation, known as SAIC. Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted. Effects of similar magnitude to those found in government-sponsored research at SRI and SAIC have been replicated at a number of laboratories across the world.
Such consistency cannot be readily explained by claims of flaws or fraud. The magnitude of psychic functioning exhibited appears to be in the range between what social scientists call a small and medium effect. That means that it is reliable enough to be replicated in properly conducted experiments, with sufficient trials to achieve the long-run statistical results needed for replicability."

Here Radin talks about the ganzfeld experiments and debunks the claims of a file-drawer effect:

“At the annual convention of the Parapsychological Association in 1982, Charles Honorton presented a paper summarizing the results of all known ganzfeld experiments to that date. He concluded that the experiments at that time provided sufficient evidence to demonstrate the existence of psi in the ganzfeld… At that time, ganzfeld experiments had appeared in thirty-four published reports by ten different researchers. These reports described a total of forty-two separate experiments. Of these, twenty-eight reported the actual hit rates that were obtained. The other studies simply declared the experiments successful or unsuccessful. Since this information is insufficient for conducting a numerically oriented meta-analysis, Hyman and Honorton concentrated their analyses on the twenty-eight studies that had reported actual hit rates. Of those twenty-eight, twenty-three had resulted in hit rates greater than chance expectation. This was an instant indicator that some degree of replication had been achieved, but when the actual hit rates of all twenty-eight studies were combined, the results were even more astounding than Hyman and Honorton had expected: odds against chance of ten billion to one. Clearly, the overall results were not just a fluke, and both researchers immediately agreed that something interesting was going on.
But was it telepathy?”

Radin further elaborates on how researcher Charles Honorton tested whether independent replications had actually been achieved (page 79):

“To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately. Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across the board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories.”

On the same page, he then soundly dismisses the skeptical claim that the file-drawer effect (selective reporting) could skew the meta-analysis results in favor of psi (pages 79-80):

“Another factor that might account for the overall success of the ganzfeld studies was the editorial policy of professional journals, which tends to favor the publication of successful rather than unsuccessful studies. This is the “file-drawer” effect mentioned earlier. Parapsychologists were among the first to become sensitive to this problem, which affects all experimental domains. In 1975 the Parapsychological Association’s officers adopted a policy opposing the selective reporting of positive outcomes. As a result, both positive and negative findings have been reported at the Parapsychological Association’s annual meetings and in its affiliated publications for over two decades.
Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database. Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies. Thus far, the proponent and the skeptic had agreed that the results could not be attributed to chance or to selective reporting practices.” ---- http://www.debunkingskeptics.com/Page17.htm

(Dec 7, 2017 07:58 PM)Magical Realist Wrote:

Quote: It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies.

Really? So where even widely-accepted studies often have methodological problems, no parapsychology studies do? O_o

"Research studies are difficult to do right and easy to do wrong. There are many potholes to avoid, and many factors can impact a study's validity and reliability."
- https://www.scribd.com/document/21787992...n-research

You doth protest too much.

Quote: And the results have been replicated time and again. Read Jessica Utts's analysis of it. She is an expert in statistics.

Even she admits methodological errors (from the same source you seem to have quoted): http://www.ics.uci.edu/~jutts/air

Quote: Here Radin talks about the ganzfeld experiments and debunks the claims of a file-drawer effect:

That's literally the file-drawer effect. It doesn't mention any weighting of results by unsuccessful experiments. https://www.csicop.org/sb/show/meta-anal...wer_effect

Speaking of Hyman, he has said:

I first addressed the apparent inconsistencies in parapsychological claims about the status of the evidence. Some parapsychologists, such as Jessica Utts and Dean Radin, repeatedly declare that the evidence for anomalous cognition is compelling and meets the most rigorous scientific standards of acceptability. Others, such as Dick Bierman, Walter Lucadou, J.E. Kennedy, and Robert Jahn, openly admit that the evidence for psi is inconsistent, irreproducible, and fails to meet acceptable scientific standards. I quoted Radin’s statement in his 1997 book The Conscious Universe that “we are forced to conclude that when psi research is judged by the same standards as any other scientific discipline then the results are as consistent as those observed in the hardest of the hard sciences” (italics in the original). I also quoted from Jessica Utts’ 1995 Stargate report that, “Using the standards applied to any other area of science, it is concluded that psychic functioning has been established.” Both Radin and Utts were present during my presentation. Neither took this opportunity to retract these claims. I can only assume that they still stand behind these strong assertions.

Quote: but the later SRI experiments and the SAIC work were done with reasonable methodological rigor,

Oops... no methodological errors.
And her list of methodological issues is simply a set of guidelines to follow. She's not saying these experiments committed those errors.

Quote: That's literally the file-drawer effect. Doesn't mention any weighting of results by unsuccessful experiments.

Bullshit it doesn't. Here it is again:

"Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database. Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies."

Quote: Bierman concluded, “In spite of the fact that the evidence is very strong, these correlations are difficult to replicate.”

Except for the fact that the ganzfeld effect has been replicated numerous times.

"To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately.
Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across the board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories."

Here's a good historical overview of experiments on the ganzfeld effect: https://psi-encyclopedia.spr.ac.uk/articles/ganzfeld
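For what it's worth, the two statistical ideas both sides keep invoking above can be checked mechanically: (1) testing whether a pooled hit rate beats the chance baseline, and (2) estimating how many unpublished null studies would be needed to wash out a combined result, which is the kind of calculation behind the "423 file-drawer experiments" figure. Here is a minimal sketch, assuming a four-choice ganzfeld design (25% chance baseline) and Rosenthal's fail-safe N formula; the numbers fed in are illustrative, not a reconstruction of Honorton's actual dataset.

```python
import math

def binom_sf(k, n, p):
    """Exact one-tailed tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def failsafe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of unpublished null (z = 0) studies
    needed to drag the combined Stouffer Z below one-tailed significance.
    N_fs = (sum of z)^2 / z_alpha^2 - k, where k = number of known studies."""
    s = sum(z_scores)
    k = len(z_scores)
    return max(0.0, (s * s) / (z_alpha ** 2) - k)

# Illustrative only: suppose 100 four-choice sessions produced 34 hits,
# against a chance expectation of 25 (25% baseline).
p_tail = binom_sf(34, 100, 0.25)
print(f"one-tailed P(>=34 hits in 100 trials at 25% chance) = {p_tail:.4f}")

# Illustrative fail-safe N for five hypothetical per-study z-scores:
print(f"fail-safe N = {failsafe_n([2.1, 1.8, 0.4, 2.5, 1.2]):.1f}")
```

The point of the fail-safe N is exactly the one Radin attributes to Hyman and Honorton: if the number of hidden null studies required to cancel the observed effect is implausibly large relative to the field's capacity to run studies, selective reporting alone cannot explain the result.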