Halcyon said: Think for yourself much?
http://www.lfr.org/csl/media/air_mayresponse.shtml
http://anson.ucdavis.edu/~utts/91a-menu.html
http://anson.ucdavis.edu/~utts/air2.html#2.1
http://anson.ucdavis.edu/~utts/91rmp.html
http://www.biomindsuperpowers.com/P...nitiatedRV.html
http://comp9.psych.cornell.edu/dbem/does_psi_exist.html
http://comp9.psych.cornell.edu/dbem/psi_world.html
http://anson.ucdavis.edu/~utts/azpsi.html
Very small taste of the scientific and peer reviewed literature out there on the subject.
One could point that first question right back at many of the proponents of so-called "psi." Interestingly enough, I had read much of the work you cited in the last year, though not because of this post; I had long forgotten about it and suspect I was in the middle of a couple of papers for school.
But since it was so kindly pointed out by another member of sciforums and bumped, I'm happy to discuss it.
The reason I say one could ask, "think for yourself much?" of the "psi" proponent is that proponents typically seem willing to name figures in "psi" research like Utts, Bem, Honorton, Rosenthal, Puthoff, etc., without really discussing the merits of their work or the specific points of their "proofs." That certainly isn't always the case, and certainly not with you, Halcyon. It appears to be with regard to the post quoted above, but I can only assume that you planned on seeding the discussion and returning to these points after others had a chance to read the links.
But nearly all of the links above have their roots in the ganzfeld procedure and in meta-analyses such as the one conducted by Bem and Honorton (1994). For those reading this thread who aren't familiar with the term, a meta-analysis is a quantitative analysis that combines the results of several studies, allowing the researcher to work with a larger data set and, in principle, obtain more accurate statistics. If done well, with studies that are consistent and free of bias to begin with, the results can be quite usable.
But if the process isn't carefully controlled and strictly filtered for bias, the results are just as easily skewed; small factors in a meta-analysis can cause drastic changes in its conclusions. It is much like firing a bullet at a distant target: if the aim is true, the shot will be; if the aim is off by a millimeter at the rifle, the miss can be a meter at the target.
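To make that sensitivity concrete, here is a minimal sketch of a fixed-effect meta-analysis, the standard inverse-variance pooling method, using invented effect sizes and standard errors (not real ganzfeld data). It shows how a couple of small, selectively reported studies can pull a pooled estimate away from zero:

```python
import math

# Toy illustration with invented numbers (NOT real ganzfeld data).
# Each study is (effect_size, standard_error).
# A fixed-effect meta-analysis weights each study by 1 / SE^2.
def pooled_effect(studies):
    weights = [1.0 / se ** 2 for _, se in studies]
    total = sum(weights)
    return sum(w * es for (es, _), w in zip(studies, weights)) / total

# Five unbiased studies clustered around a true effect of zero.
clean = [(0.02, 0.10), (-0.03, 0.12), (0.01, 0.09),
         (0.00, 0.11), (-0.01, 0.10)]

# The same set plus two small, selectively reported positive studies.
biased = clean + [(0.45, 0.15), (0.50, 0.14)]

print(round(pooled_effect(clean), 3))   # near zero
print(round(pooled_effect(biased), 3))  # pulled noticeably upward
```

Which studies get included, and whether the "file drawer" of unpublished null results is accounted for, is exactly the kind of tiny choice that moves the final number.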
With regard to the links you posted above, at least half, if not more, are critical of the two main sources: the Utts paper and the ganzfeld procedure, on which much of her work is apparently based. If anything, the links you provided support the quote of mine that you include as a prologue to your post, specifically: "…attempts to measure the paranormal junk like telekinesis have always resulted in failure, refusal, or significant [lack of conclusion]."
Hyman noted the inconsistencies in the ganzfeld experiments that both Utts and Bem rely upon heavily for their meta-analyses. Hyman (1991) states: "I was surprised to find that the ganzfeld experiments, widely heralded as the best exemplar of a successful research program in parapsychology, were characterized by obvious possibilities for sensory leakage, inadequate randomization, over-analysis and other departures from parapsychology's own professed standards. One response was to argue that I had exaggerated the number of flaws. But even internal critics agreed that the rate of defects in the ganzfeld data base was too high." These comments came after Honorton's rejoinders and the completion of the Utts paper.
Indeed, Milton and Wiseman (1999) "present a meta-analysis of 30 ganzfeld ESP studies from 7 independent laboratories adhering to the same stringent methodological guidelines that C. Honorton followed. The studies failed to confirm his main effect of participants scoring above chance on the ESP task..." Milton and Wiseman concluded that the "ganzfeld technique does not at present offer a replicable method for producing ESP in the laboratory." In short, it was shown to be bunk at worst, inconclusive at best.
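For anyone wondering what "scoring above chance" means here: in a standard ganzfeld trial the receiver picks the target from four candidates, so chance performance is a 25% hit rate. A simple exact binomial test (a sketch with hypothetical trial counts, not figures from any of the cited studies) shows why a modestly elevated hit rate is not, by itself, evidence of anything:

```python
from math import comb

# In a standard ganzfeld design the receiver chooses among four
# candidate targets, so the chance hit rate is p = 0.25.
def binomial_p_value(hits, trials, p=0.25):
    """One-sided exact test: P(X >= hits) under pure chance guessing."""
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(hits, trials + 1))

# Hypothetical numbers for illustration only:
# 30 hits in 100 trials looks impressive next to the 25 expected,
# but the one-sided p-value is well above 0.05 -- consistent with chance.
print(binomial_p_value(30, 100))
```

This is why the Milton and Wiseman replication failure matters: a single suggestive study is cheap, while a stable above-chance effect across independent laboratories is the standard the ganzfeld work failed to meet.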
There is nothing in science that shows empirically that "psi" exists. Nor is there any sort of theory as to what would cause "psi" to work if it did exist. By theory, I am not talking in the colloquial sense where a "speculation" equals a "theory." I'm referring to a set of one or more hypotheses that have been tested.
The links you provided were, indeed, a "small taste of the scientific and peer reviewed literature out there on the subject," Halcyon, but none of them comes close to demonstrating replicable and reliable methodologies of qualitative (as opposed to quantitative) design that prove any "psi" ability.
My last comment is with regard to one of Utts's concluding remarks (1991), where she says: "research in parapsychology should receive more support from the scientific community [...] if ESP does exist, there is much to be lost by not doing process-oriented research..."
Science shouldn't waste its time on such nonsense. It detracts from real work that could be accomplished in so many other, more valuable fields of research, from genetics to neuroscience; piddling around with flash cards to check whether someone can guess the next card wastes not only time but money. If there were any substance to "psi" poppycock such as "remote viewing" and "telekinesis," there would be someone capable of clearly demonstrating it and thereby providing a tangible starting point for true research. But parlor tricks and cold-reading techniques aren't useful.
References:
Bem, D. J., & Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of information transfer. Psychological Bulletin, 115(1), 4-18.
Hyman, R. (1991). Comment on "Replication and Meta-Analysis in Parapsychology" by Jessica Utts. Statistical Science, 6(4), 389-392.
Milton, J., & Wiseman, R. (1999). Does psi exist? Lack of replication of an anomalous process of information transfer. Psychological Bulletin, 125(4), 387-391.
Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6(4), 363-403.