Raithere said:
Okay, so let's say 25 or 30. The point is that a small group requires a much smaller variation. 0.05 as the point of significance for alpha is somewhat arbitrary anyway, and is hardly what I would consider strong evidence, but as it is conventional for this kind of study let's run with it.
As I said earlier, sample size is already factored in: for a given proportional deviation, the test statistic grows roughly with the square root of the sample size, so the same deviation becomes more significant in a larger sample. To get a similar P (~0.05), throwing more than 28 heads out of 40 coins is comparable to throwing more than 8 heads out of 10. P = 0.05 seems to be the accepted cut-off for significance in these sorts of trials.
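For anyone who wants to check where those thresholds actually fall, here's a quick sketch (plain Python, nothing to do with the paper itself) that finds the smallest head count whose one-tailed binomial probability drops below a chosen alpha. It makes the point concrete: the smaller sample has to deviate proportionally much further from 50% to clear the same cut-off.

```python
# Quick sketch: for n fair coins, find the smallest head count k whose
# one-tailed probability P(X >= k) drops below a chosen alpha.
from math import comb

def upper_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def significance_threshold(n, alpha=0.05):
    """Smallest k such that P(X >= k) <= alpha, with its tail probability."""
    for k in range(n + 1):
        tail = upper_tail(k, n)
        if tail <= alpha:
            return k, tail
    return n, upper_tail(n, n)

for n in (10, 40):
    k, tail = significance_threshold(n)
    print(f"{n} coins: need at least {k} heads ({k / n:.0%}), P = {tail:.3f}")
```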
Raithere said:
So their study concluded in January of 1997 and the study they reference in regards to changing their endpoint was published in September of 1997. Clearly after their study was concluded, which would seem to support Moore's account as reported by Bronson.
Not really. They didn’t have to wait until the other paper was published to know that only one patient in their study had died. It would have made no sense to plan to rely on mortality rates alone as a measure of effectiveness.
Raithere said:
I also find it problematic that they assumed an effect only upon survival rate when the earliest studies of triple-drug therapy indicated primarily an effect upon disease progression. […] It says nothing about which combinations of drugs, when they were first administered, dosage levels, etc. The failure to account for such variables alone casts a broad shadow over any conclusion one might draw however problematic the methodology.
The use of triple therapy during the trial is undoubtedly a complicating factor. However, it is noteworthy that the control group scored consistently higher for receipt of this therapy than the ‘treatment’ group. The CD4+ counts for the two groups show no significant differences – both groups remain low. (They acknowledge that viral load tests would, with hindsight, have been a more accurate measure of triple therapy effectiveness.) One can conclude that triple therapy almost certainly did not bias the results in favour of the ‘DH treatment’ group.
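For what it's worth, the kind of between-group checks involved here are simple to run. The sketch below uses made-up placeholder numbers (not the study's data) just to show the shape of the comparison: a Fisher exact test on triple-therapy receipt and a Mann-Whitney test on CD4+ counts, both from scipy.stats.

```python
# Illustrative only: all counts and CD4+ values below are placeholders,
# not data from the Sicher/Targ paper.
from scipy.stats import fisher_exact, mannwhitneyu

# 2x2 table of triple-therapy receipt: rows = control / treatment,
# columns = receiving / not receiving
therapy_table = [[14, 6],
                 [10, 10]]
_, p_therapy = fisher_exact(therapy_table)

# CD4+ counts (cells/mm^3) for each group, again purely illustrative
cd4_control = [60, 45, 80, 30, 55, 70, 40, 65]
cd4_treatment = [50, 35, 75, 45, 60, 55, 30, 70]
_, p_cd4 = mannwhitneyu(cd4_control, cd4_treatment)

print(f"triple-therapy receipt: P = {p_therapy:.2f}")
print(f"CD4+ counts:            P = {p_cd4:.2f}")
```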
Raithere said:
Actually it is, once you account for the difference in the baseline POMS scores. They even say so:
They are understandably cautious about the POMS results. The initial analysis shows a significant improvement in POMS scores for the treatment group relative to the control group. The covariate analysis is a further attempt to control for the “effect of increased hope or expectation due to their participation in an intervention research study”, and it clearly produces strange results. They openly state all this in the discussion:
Detailed analysis of baseline variables differing at P < 0.20 did find that higher baseline POMS scores were associated with greater improvement in POMS scores over the course of the study. By chance, patients in the treatment group showed more psychological distress at baseline, so their improved mood over the study interval may represent simply an effect of increased hope or expectation due to their participation in an intervention research study. The additional finding that adjusting for differences in baseline POMS caused a change in the direction of the beneficial effect is difficult to understand and is likely due to chance.
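The mechanism they describe is not mysterious in itself: if higher baseline distress predicts bigger improvement, and one group happens to start more distressed, an unadjusted comparison will flatter that group, while a baseline-adjusted one can shrink or even reverse the apparent effect. Here is a toy simulation of that situation (entirely made-up numbers, nothing from the paper) using an ANCOVA-style model; the variable names are my own.

```python
# Toy simulation of the regression-to-the-mean effect described above.
# All numbers are invented for illustration; nothing here comes from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_group = 20

# Treatment group happens to be more distressed at baseline (higher POMS).
baseline = np.concatenate([rng.normal(45, 10, n_per_group),   # control
                           rng.normal(55, 10, n_per_group)])  # treatment
group = np.repeat([0, 1], n_per_group)

# No true treatment effect: change depends only on baseline plus noise
# (the more distressed you start, the more room you have to improve).
change = -0.5 * (baseline - 50) + rng.normal(0, 5, 2 * n_per_group)

df = pd.DataFrame({"group": group, "baseline": baseline, "change": change})

unadjusted = smf.ols("change ~ group", data=df).fit()
adjusted = smf.ols("change ~ group + baseline", data=df).fit()
print("unadjusted group coefficient: ", round(unadjusted.params["group"], 2))
print("baseline-adjusted coefficient:", round(adjusted.params["group"], 2))
```

Run with different seeds, the unadjusted coefficient tends to show a spurious 'benefit' for the more-distressed group while the adjusted one hovers around zero, which is essentially the concern their discussion raises.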
However, Bronson’s reportage describes an atmosphere of near hysteria breaking out as the researchers allegedly find NO correlation in either the symptoms or the POMS scores.
Targ asked him to crunch the numbers on the secondary scores - one a measure of HIV physical symptoms, the other a measure of quality of life. These came out inconclusive; the treatment group didn't score better than the control. Not what they wanted to find. In dismay, Targ called her father. He calmed her down, told her to keep looking. She had Moore run the mood state scores. These came out worse - the treatment group was in more psychological stress than the control group.
This is clearly contradicted by the actual results, which show significant BHS and POMS improvements in the treatment group versus the control group. I therefore think his account (allegedly based on what Moore told him) owes much to Bronson’s journalistic imagination and desire for a scoop, and his allegations would certainly not stand up as the basis for a case in a court of law!
Raithere said:
More interesting, I find, is that they ranked illnesses according to the BHS, which is strange considering the BHS is used to predict survival rates which they supposedly were no longer looking at.
Not so strange when you consider that it was the best indicator of overall severity of AIDS – and therefore an ideal marker of the progress of the disease.
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1380834
Raithere said:
Seems to me Bronson is pretty much on track so far.
I don’t think so. There are several reasons I am dubious of Bronson’s account.
1) Why would Moore and Targ’s husband (Comings) tell a reporter (Bronson) how the recently deceased Targ, along with Sicher and Moore, fiddled their research results?
2) There seems to be no corroboration of these accounts by Moore, Comings or Sicher from any other reporter, or in any other media.
3) Bronson’s statement that “Moore seemed unaware how explosive his version of the story was” seems unlikely given Bronson’s allegations and Moore’s background as a research statistician.
4) Bronson’s description of Targ’s hysterical behaviour is not consistent with his earlier description of her cool “refusal to speculate” and her position of “Use the scientific method to find out if an effect exists before trying to analyse how it works”.
5) Bronson’s description of the Distance Healers as “The usual wackos” indicates a profound scepticism, not to say contempt, for this research.
6) A Bronson scoop debunking “the most scientifically rigorous” prayer research would be widely publicised and would represent a very good career move for him personally.
7) His accusations and reporting of the conversations alleged to have taken place don’t sit easily with the details of what is in the paper. They certainly would not stand up as the basis for a legal case.
Raithere said:
Further, as no causative pathway has even been defined even if we assume the veracity of their results how can one make claim to anything beyond a bare correlation? Interesting? Possibly... but worth another $1.5 million? Particularly when the effect could be detrimental to the patients.
I return to my original point: research on prayer currently shows very mixed results. I echo the conclusions reached by a number of recent meta-review papers, including the following from
http://www.annals.org/cgi/content/abstract/132/11/903
The methodologic limitations of several studies make it difficult to draw definitive conclusions about the efficacy of distant healing. However, given that approximately 57% of trials showed a positive treatment effect, the evidence thus far merits further study.
I don't know whether prayer (distant healing) of this kind works, or whether it is amenable to scientific scrutiny; certainly there are a huge number of difficulties and unknowns that make it very different to a drug trial. However, I feel we should pursue such research with open-minded scepticism.