Quote: "It's a matter of different methodological flaws, and replicability problems, being obfuscated by poor statistical analysis in meta-studies."
No, there are no methodological flaws, and the results have been replicated time and again. Read Jessica Utts's analysis of it; she is an expert in statistics.
"Research on psychic functioning, conducted over a two-decade period, is examined to determine whether or not the phenomenon has been scientifically established. A secondary question is whether or not it is useful for government purposes. The primary work examined in this report was government-sponsored research conducted at Stanford Research Institute, later known as SRI International, and at Science Applications International Corporation, known as SAIC.

Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted. Effects of similar magnitude to those found in government-sponsored research at SRI and SAIC have been replicated at a number of laboratories across the world. Such consistency cannot be readily explained by claims of flaws or fraud.

The magnitude of psychic functioning exhibited appears to be in the range between what social scientists call a small and medium effect. That means that it is reliable enough to be replicated in properly conducted experiments, with sufficient trials to achieve the long-run statistical results needed for replicability."
Here Radin discusses the ganzfeld experiments and rebuts the claim of a file-drawer effect:
“At the annual convention of the Parapsychological Association in 1982, Charles Honorton presented a paper summarizing the results of all known ganzfeld experiments to that date. He concluded that the experiments at that time provided sufficient evidence to demonstrate the existence of psi in the ganzfeld…
“At that time, ganzfeld experiments had appeared in thirty-four published reports by ten different researchers. These reports described a total of forty-two separate experiments. Of these, twenty-eight reported the actual hit rates that were obtained. The other studies simply declared the experiments successful or unsuccessful. Since this information is insufficient for conducting a numerically oriented meta-analysis, Hyman and Honorton concentrated their analyses on the twenty-eight studies that had reported actual hit rates. Of those twenty-eight, twenty-three had resulted in hit rates greater than chance expectation. This was an instant indicator that some degree of replication had been achieved, but when the actual hit rates of all twenty-eight studies were combined, the results were even more astounding than Hyman and Honorton had expected: odds against chance of ten billion to one. Clearly, the overall results were not just a fluke, and both researchers immediately agreed that something interesting was going on. But was it telepathy?”
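The "odds against chance" figures cited here come from pooling hit counts across studies and asking how probable such a total is under the ganzfeld's 25% chance baseline (one target among four choices). A minimal sketch of that kind of exact binomial tail test, using made-up pooled counts rather than the actual ganzfeld totals:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact upper-tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Ganzfeld chance hit rate: 1 correct target out of 4 choices
p_chance = 0.25

# Hypothetical pooled counts (NOT the real meta-analysis data):
trials, hits = 1000, 320   # 32% observed vs. 25% expected

p_value = binom_sf(hits, trials, p_chance)
print(f"one-tailed p = {p_value:.3g}, odds against chance ~ {1 / p_value:,.0f} to 1")
```

The "odds against chance" phrasing in the quote is just the reciprocal of a p-value like this one; the real analyses combined per-study outcomes rather than naively pooling trials, but the logic of the tail test is the same.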
Radin further elaborates on how researcher Charles Honorton tested whether independent replications had actually been achieved: (page 79)
“To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately. Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across the board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories.”
On the same page, he then soundly dismisses the skeptical claim that the file-drawer effect (selective reporting) could skew the meta-analysis results in favor of psi: (page 79-80)
“Another factor that might account for the overall success of the ganzfeld studies was the editorial policy of professional journals, which tends to favor the publication of successful rather than unsuccessful studies. This is the “file-drawer” effect mentioned earlier. Parapsychologists were among the first to become sensitive to this problem, which affects all experimental domains. In 1975 the Parapsychological Association’s officers adopted a policy opposing the selective reporting of positive outcomes. As a result, both positive and negative findings have been reported at the Parapsychological Association’s annual meetings and in its affiliated publications for over two decades.
Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database.
Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies.
Thus far, the proponent and the skeptic had agreed that the results could not be attributed to chance or to selective reporting practices.”
http://www.debunkingskeptics.com/Page17.htm
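The "423 file-drawer experiments" figure is the kind of number produced by Rosenthal's fail-safe N: how many unpublished, exactly-null studies would have to sit in file drawers to pull the combined Stouffer Z below the significance criterion. A minimal sketch of that calculation, with hypothetical z-scores standing in for the actual per-study results:

```python
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unreported null (z = 0) studies
    would drag the combined Stouffer Z below the one-tailed criterion z_crit."""
    s = sum(z_scores)    # Stouffer numerator: sum of per-study z-scores
    k = len(z_scores)
    # With N extra null studies, the combined Z becomes s / sqrt(k + N).
    # Solving s / sqrt(k + N) = z_crit for N gives:
    return max(0.0, (s / z_crit) ** 2 - k)

# Hypothetical: 28 studies averaging z = 1.5 each (NOT the real ganzfeld values)
print(failsafe_n([1.5] * 28))
```

The point of the quoted passage is simply that when this number is large relative to the count of known studies (here, roughly fifteen hidden studies per published one), selective reporting becomes an implausible explanation for the combined result.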