Is ESP real?

#31
Magical Realist
That's even less likely. The chances of the same blatant error being committed across hundreds of different kinds of studies would be astronomical. Of course, we know the usual tactic of positing hidden cues and shit just because you don't want to accept the results of the study. Card-carrying skeptics are good at that.
#32
Syne
(Dec 7, 2017 07:38 PM)Magical Realist Wrote: That's even less likely. The chances of the same blatant error being committed across hundreds of different kinds of studies would be astronomical. Of course, we know the usual tactic of positing hidden cues and shit just because you don't want to accept the results of the study. Card-carrying skeptics are good at that.

It's not a matter of each study sharing the same error. It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies. The "impact would have to be similar across experiments and laboratories" because the file-drawer effect ensures the impact is similar.
#33
Magical Realist
Quote:It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies.

No... there are no methodological flaws. And the results have been replicated time and again. Read Jessica Utts's analysis of it. She is an expert in statistics.

"Research on psychic functioning, conducted over a two decade
period, is examined to determine whether or not the phenomenon has been
scientifically established. A secondary question is whether or not it is useful
for government purposes. The primary work examined in this report was government
sponsored research conducted at Stanford Research Institute, later
known as SRI International, and at Science Applications International Corporation,
known as SAIC.

Using the standards applied to any other area of science, it is concluded that
psychic functioning has been well established. The statistical results of the
studies examined are far beyond what is expected by chance. Arguments that
these results could be due to methodological flaws in the experiments are
soundly refuted. Effects of similar magnitude to those found in governmentsponsored
research at SRI and SAIC have been replicated at a number of laboratories
across the world. Such consistency cannot be readily explained by
claims of flaws or fraud.

The magnitude of psychic functioning exhibited appears to be in the range
between what social scientists call a small and medium effect. That means
that it is reliable enough to be replicated in properly conducted experiments,
with sufficient trials to achieve the long-run statistical results needed for
replicability."


Here Radin talks about the Ganzfeld experiments and debunks the claims of a file-drawer effect:

“At the annual convention of the Parapsychological Association in 1982, Charles Honorton presented a paper summarizing the results of all known ganzfeld experiments to that date. He concluded that the experiments at that time provided sufficient evidence to demonstrate the existence of psi in the ganzfeld…

At that time, ganzfeld experiments had appeared in thirty-four published reports by ten different researchers. These reports described a total of forty-two separate experiments. Of these, twenty-eight reported the actual hit rates that were obtained. The other studies simply declared the experiments successful or unsuccessful. Since this information is insufficient for conducting a numerically oriented meta-analysis, Hyman and Honorton concentrated their analyses on the twenty-eight studies that had reported actual hit rates. Of those twenty-eight, twenty-three had resulted in hit rates greater than chance expectation. This was an instant indicator that some degree of replication had been achieved, but when the actual hit rates of all twenty-eight studies were combined, the results were even more astounding than Hyman and Honorton had expected: odds against chance of ten billion to one. Clearly, the overall results were not just a fluke, and both researchers immediately agreed that something interesting was going on. But was it telepathy?”
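
(Aside: to make a figure like "odds against chance of ten billion to one" concrete: it comes from a one-sided binomial test against the 25% hit rate expected by chance in a four-choice ganzfeld design. A minimal sketch in Python; the trial and hit counts below are illustrative placeholders, not the actual pooled 1982 figures.)

Code:
# Sketch of an "odds against chance" calculation for a four-choice
# ganzfeld design, where 25% hits are expected by chance.
# NOTE: n_trials and n_hits are made-up illustrative numbers, not the
# actual pooled figures from the 1982 ganzfeld database.
from scipy.stats import binom

p_chance = 0.25
n_trials = 835    # hypothetical pooled trial count
n_hits = 290      # hypothetical pooled hit count (~34.7% hit rate)

# One-sided probability of observing at least n_hits by chance alone
p_value = binom.sf(n_hits - 1, n_trials, p_chance)
print(f"p = {p_value:.3g}  (odds against chance ~ 1 in {1 / p_value:,.0f})")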

Radin further elaborates on how researcher Charles Honorton tested whether independent replications had actually been achieved: (page 79)

“To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately. Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across-the-board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories.”

On the same page, he then soundly dismisses the skeptical claim that the file-drawer effect (selective reporting) could skew the meta-analysis results in favor of psi: (pages 79-80)

“Another factor that might account for the overall success of the ganzfeld studies was the editorial policy of professional journals, which tends to favor the publication of successful rather than unsuccessful studies. This is the “file-drawer” effect mentioned earlier. Parapsychologists were among the first to become sensitive to this problem, which affects all experimental domains. In 1975 the Parapsychological Association’s officers adopted a policy opposing the selective reporting of positive outcomes. As a result, both positive and negative findings have been reported at the Parapsychological Association’s annual meetings and in its affiliated publications for over two decades.

Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database.

Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies.

Thus far, the proponent and the skeptic had agreed that the results could not be attributed to chance or to selective reporting practices.” (Source: http://www.debunkingskeptics.com/Page17.htm)
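
(Technical aside: the "how many unreported studies would be required to nullify the observed effects" calculation Radin describes is known in meta-analysis as Rosenthal's fail-safe N. A minimal sketch, assuming the classic Stouffer-Z formulation and using invented per-study z-scores rather than Honorton's actual data:)

Code:
# Sketch of Rosenthal's (1979) fail-safe N: the number of unpublished
# zero-effect studies needed to drag a combined (Stouffer) Z-score down
# to the one-tailed .05 significance threshold.
# NOTE: the z-scores below are invented for illustration; they are not
# the actual values from the 28 ganzfeld studies.
import numpy as np

rng = np.random.default_rng(0)
z_scores = rng.normal(loc=1.0, scale=0.8, size=28)  # hypothetical studies

z_crit = 1.645  # one-tailed alpha = .05
stouffer_z = z_scores.sum() / np.sqrt(len(z_scores))

# Adding N zero-effect studies changes the denominator to sqrt(k + N);
# solving sum(z) / sqrt(k + N) = z_crit for N gives:
n_failsafe = (z_scores.sum() / z_crit) ** 2 - len(z_scores)
print(f"Combined Z = {stouffer_z:.2f}, fail-safe N = {n_failsafe:.0f}")
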
#34
Syne
(Dec 7, 2017 07:58 PM)Magical Realist Wrote:
Quote:It's a matter of different methodological flaws and replicability problems being obfuscated by poor statistical analysis in meta-studies.

No... there are no methodological flaws.

Really? So while even widely accepted studies often have methodological problems, no parapsychology studies do? O_o

"Research studies are difficult to do right and easy to do wrong. There are many potholes to avoid, and many factors can impact a study’s validity and reliability." - https://www.scribd.com/document/21787992...n-research

You doth protest too much. :rolleyes:

Quote:And the results have been replicated time and again. Read Jessica Utts's analysis of it. She is an expert in statistics.

Even she admits methodological errors (from the same source you seem to have quoted):

http://www.ics.uci.edu/~jutts/air
2.3 Methodological Issues

One of the challenges in designing a good experiment in any area of science is to close the loopholes that would allow explanations other than the intended one to account for the results.

There are a number of places in remote viewing experiments where information could be conveyed by normal means if proper precautions are not taken. The early SRI experiments suffered from some of those problems, but the later SRI experiments and the SAIC work were done with reasonable methodological rigor, with some exceptions noted in the detailed descriptions of the SAIC experiments in Appendix 2.

The following list of methodological issues shows the variety of concerns that must be addressed. It should be obvious that a well-designed experiment requires careful thought and planning:

* No one who has knowledge of the specific target should have any contact with the viewer until after the response has been safely secured.

* No one who has knowledge of the specific target or even of whether or not the session was successful should have any contact with the judge until after that task has been completed.

* No one who has knowledge of the specific target should have access to the response until after the judging has been completed.

* Targets and decoys used in judging should be selected using a well-tested randomization device.

* Duplicate sets of target photographs should be used, one during the experiment and one during the judging, so that no cues (like fingerprints) can be inserted onto the target that would help the judge recognize it.

* The criterion for stopping an experiment should be defined in advance so that it is not called to a halt when the results just happen to be favorable. Generally, that means specifying the number of trials in advance, but some statistical procedures require or allow other stopping rules. The important point is that the rule be defined in advance in such a way that there is no ambiguity about when to stop.

* Reasons, if any, for excluding data must be defined in advance and followed consistently, and should not be dependent on the data. For example, a rule specifying that a trial could be aborted if the viewer felt ill would be legitimate, but only if the trial was aborted before anyone involved in that decision knew the correct target.

* Statistical analyses to be used must be planned in advance of collecting the data so that a method most favorable to the data isn't selected post hoc. If multiple methods of analysis are used, the corresponding conclusions must recognize that fact.
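
(The stopping-rule point in that list deserves a concrete illustration. Below is a quick simulation, not from Utts' report, showing how "stop when the results just happen to be favorable" inflates the false-positive rate even when nothing but chance is operating:)

Code:
# Simulation (illustrative, not from Utts' report) of why stopping rules
# must be fixed in advance: if an experimenter peeks at the p-value after
# every trial and stops as soon as p < .05, the false-positive rate rises
# well above 5% even when hits occur purely by chance (p_hit = 0.25).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n_experiments, max_trials, p_chance = 2000, 100, 0.25

false_positives = 0
for _ in range(n_experiments):
    hits = 0
    for trial in range(1, max_trials + 1):
        hits += rng.random() < p_chance
        if binom.sf(hits - 1, trial, p_chance) < 0.05:  # peek and stop
            false_positives += 1
            break

print(f"False-positive rate with optional stopping: "
      f"{false_positives / n_experiments:.1%}")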


Quote:Here Radin talks about the Ganzfeld experiments and debunks the claims of a file-drawer effect:

“At the annual convention of the Parapsychological Association in 1982, Charles Honorton presented a paper summarizing the results of all known ganzfeld experiments to that date.  He concluded that the experiments at that time provided sufficient evidence to demonstrate the existence of psi in the ganzfeld…

At that time, ganzfeld experiments had appeared in thirty-four published reports by ten different researchers. These reports described a total of forty-two separate experiments. Of these, twenty-eight reported the actual hit rates that were obtained. The other studies simply declared the experiments successful or unsuccessful. Since this information is insufficient for conducting a numerically oriented meta-analysis, Hyman and Honorton concentrated their analyses on the twenty-eight studies that had reported actual hit rates.

That's literally the file-drawer effect: the analysis excludes the fourteen studies that only declared success or failure. It doesn't mention any weighting of results by unsuccessful experiments.

https://www.csicop.org/sb/show/meta-anal...wer_effect
"Douglas Stokes is a specialist in statistical analysis who is very sympathetic to the psi movement. In his book The Nature of Mind, Stokes considers the wide range of reports of psychic phenomena, from the anecdotal to the experimental. Although he concludes that psi has not been scientifically demonstrated, he still wants to believe in it based on the “compelling stories” he has heard and his own personal “spontaneous psi experiences.” Nevertheless, in an article published in Skeptical Inquirer (25[3], May/June 2001, 22-25), Stokes describes the fundamental errors that Radin and others have made in their calculations."


Speaking of Hyman, he has said:

I first addressed the apparent inconsistencies in parapsychological claims about the status of the evidence. Some parapsychologists, such as Jessica Utts and Dean Radin, repeatedly declare that the evidence for anomalous cognition is compelling and meets the most rigorous scientific standards of acceptability. Others such as Dick Bierman, Walter Lucadou, J.E. Kennedy, and Robert Jahn, openly admit that the evidence for psi is inconsistent, irreproducible, and fails to meet acceptable scientific standards. I quoted Radin’s statement in his 1997 book The Conscious Universe that “we are forced to conclude that when psi research is judged by the same standards as any other scientific discipline then the results are as consistent as those observed in the hardest of the hard sciences” (italics in the original). I also quoted from Jessica Utts’ 1995 Stargate report that, “Using the standards applied to any other area of science, it is concluded that psychic functioning has been established.” Both Radin and Utts were present during my presentation. Neither took this opportunity to retract these claims. I can only assume that they still stand behind these strong assertions.

I was hoping Radin and Utts would provide an explanation of how they can maintain such a position in the face of mounting evidence and arguments within the parapsychological community that the reality of psi cannot be justified according to accepted scientific standards. Dick Bierman, the Dutch parapsychologist, for example, carefully re-analyzed major meta-analyses of parapsychological research on mentally influencing the fall of dice, the Ganzfeld psi experiments, precognition with ESP cards, psychokinetic influence on RNGs, and mind over matter in biological systems (Bierman 2000). He looked especially at the relationship between effect size and the date when the studies in each of these research areas were conducted.

Bierman fitted a regression line to the data in each area. In all cases, the regression line revealed a consistent trend for the effect sizes to decrease with time and to eventually reach zero. In addition to these linear trends from the meta-analyses, Bierman and other parapsychologists point to dramatic failures of direct attempts to replicate major parapsychological findings. These particular failed replications cannot be dismissed as being due to low power, which is the excuse commonly offered by Utts, Radin, and a few others. Bierman concluded, “In spite of the fact that the evidence is very strong, these correlations are difficult to replicate.”
- https://www.csicop.org/si/show/anomalous...erspective
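
(The decline-effect analysis Hyman describes amounts to a linear regression of effect size on study year. A minimal sketch with invented data points, not Bierman's actual meta-analytic values:)

Code:
# Sketch of the decline-effect check Bierman ran: regress effect size on
# the year each study was conducted and inspect the slope.
# NOTE: these data points are invented for illustration only.
import numpy as np
from scipy.stats import linregress

years = np.array([1974, 1978, 1982, 1986, 1990, 1994, 1998])
effect_sizes = np.array([0.28, 0.24, 0.21, 0.15, 0.12, 0.07, 0.03])

fit = linregress(years, effect_sizes)
year_zero = -fit.intercept / fit.slope  # where the fitted line hits zero
print(f"slope = {fit.slope:+.4f} per year; "
      f"effect reaches zero near {year_zero:.0f}")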

#35
Magical Realist
Quote:but the later SRI experiments and the SAIC work were done with reasonable methodological rigor,

Oops... no methodological errors.

And her list of methodological issues is simply a set of guidelines to follow. She's not saying these experiments committed those errors.

Quote:That's literally the file-drawer effect: the analysis excludes the fourteen studies that only declared success or failure. It doesn't mention any weighting of results by unsuccessful experiments.

Bullshit it doesn't. Here it is again:


"Furthermore, a 1980 survey of parapsychologists by the skeptical British psychologist Susan Blackmore had confirmed that the file-drawer problem was not a serious issue for the ganzfeld meta-analysis. Blackmore uncovered nineteen complete but unpublished ganzfeld studies. Of those nineteen, seven were independently successful with odds against chance of twenty to one or greater. Thus while some ganzfeld studies had not been published, Hyman and Honorton agreed that selective reporting was not an important issue in this database.

Still, because it is impossible to know how many other studies might have been in file drawers, it is common in meta-analyses to calculate how many unreported studies would be required to nullify the observed effects among the known studies. For the twenty-eight direct-hit ganzfeld studies, this figure was 423 file-drawer experiments, a ratio of unreported-to-reported studies of approximately fifteen to one. Given the time and resources it takes to conduct a single ganzfeld session, let alone 423 hypothetical unreported experiments, it is not surprising that Hyman agreed with Honorton that the file-drawer issue could not plausibly account for the overall results of the psi ganzfeld database. There were simply not enough experimenters around to have conducted those 423 studies."

Quote:Bierman concluded, “In spite of the fact that the evidence is very strong, these correlations are difficult to replicate.”

Except for the fact that the Ganzfeld effect has been replicated numerous times.

"To address the concern about whether independent replications had been achieved, Honorton calculated the experimental outcomes for each laboratory separately. Significantly positive outcomes were reported by six of the ten labs, and the combined score across the ten laboratories still resulted in odds against chance of about a billion to one. This showed that no one lab was responsible for the positive results; they appeared across-the-board, even from labs reporting only a few experiments. To examine further the possibility that the two most prolific labs were responsible for the strong odds against chance, Honorton recalculated the results after excluding the studies that he and Sargent had reported. The resulting odds against chance were still ten thousand to one. Thus, the effect did not depend on just one or two labs; it had been successfully replicated by eight other laboratories.”

Here's a good historical overview of experiments on the Ganzfeld effect.

https://psi-encyclopedia.spr.ac.uk/articles/ganzfeld
#36
Syne
Yeah, just keep ignoring all facts that don't support your beliefs.