Theism vs. Atheism - Experience or Interpretation?

Is theism vs. atheism primarily a difference of interpretation or experience?

  • Theism and atheism are primarily different interpretations of similar experiences.
    Votes: 21 (51.2%)
  • Theism and atheism lead to very different experiences.
    Votes: 12 (29.3%)
  • Some other view.
    Votes: 8 (19.5%)

  • Total voters: 41
KennyJC said:
How rather typical for people to imagine God only in the pretty parts of life.

I volunteer that God works through me when I have a bad curry and I produce a stinking pile of watery poo.

And shouldn't the lyrics to "All Things Bright and Beautiful" be changed to make mention of the parasitic wasp which eats worms from inside? All God's work my friends.

At least your silly notion would have some depth to it if you made mention of these things.

Pointless getting into a meaningful debate with a rabbit brain but as a parting note, Beethoven suffered depression and despair - hardly 'pretty'.

If shit is all you can write then presumably that is all you are mindful of.

Goodnight.
 
All I have done is respond to baseless superstitious notions with... baseless superstitious notions.

God shows himself through my watery poo just as he does a symphony. In fact the noises my bottom makes whilst expelling my watery poo, could be considered a symphony. Gawd shows himself in mysterious ways.

Chally-ho.
 
euphrosene said:
Speaking personally, there is a real experience of some higher power, i.e. outside of myself, yet equally being at one with it...

Can you quantify it or distinguish it from your imagination?

Beethoven, Handel and many huge creative talents wrote that their God worked through them.... or that it was not them, but God.

Humble folks, not wanting to blow their own horns, so to speak. Those people were theists, were they not?

As a substantially lesser talent, I can still empathise with that when I write or paint something rather good.... although I might actually use words like 'God alone knows where that came from'...

Too bad you don't give credit where credit is due.
 
(Q) said:
Can you quantify it or distinguish it from your imagination?

Humble folks, not wanting to blow their own horns, so to speak. Those people were theists, were they not?

Too bad you don't give credit where credit is due.

Morning Q. I'd like to answer this properly when I have more time. It's something to do with the 'observer' and the 'observed'.

Also, while I agree that imagination may play a part, when 'will' is placed on standby there appears to be another force involved, for me.
 
Diogenes' Dog said:
Not so Raithere!! There have been many scientific studies on prayer. Here are 5 for a start (apart from Byrd's), some showing positive effects (1, 2 & 3), others no effect (4 & 5). It's not so clear cut as infidels.org would have you believe!
Sorry, but yes it is. There are so many problems with these studies that they are pretty much garbage to begin with, and even in those that return some result, the effect is so marginal (and often not the objective when the study began) that they are worthless.

1) Sicher F, Targ E, Moore D 2nd, Smith HS. A randomized double-blind study of the effect of distant healing in a population with advanced AIDS. Report of a small scale study. West J Med 1998;169:356-63. PMID 9866433.

http://skepdic.com/sichertarg.html

2) Harris WS, Gowda M, Kolb JW, Strychacz CP, Vacek JL, Jones PG, Forker A, O'Keefe JH, McCallister BD. A randomized, controlled trial of the effects of remote, intercessory prayer on outcomes in patients admitted to the coronary care unit. Arch Intern Med 1999;159:2273-8. PMID 10547166.

http://www.skeptics.com.au/journal/2003/3.pdf#search="+Strychacz +"intercessory prayer" +skeptic" (page 67)

3) Leibovici L. Effects of remote, retroactive intercessory prayer on outcomes in patients with bloodstream infection: randomised controlled trial. BMJ 2001;323:1450-1. PMID 11751349.

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=535973
(a bit general but it does cite Leibovici and gives a run down of the problems of such studies)

4) O'Laoire S. An experimental study of the effects of distant, intercessory prayer on self-esteem, anxiety, and depression. Altern Ther Health Med 1997;3:38-53. PMID 9375429.

This is a joke... according to this very study: "Improvement on all 11 measures was significantly related to subjects' conviction concerning whether they had been assigned to a control or an experimental group. Possible explanations include the placebo/faith effect, the time displaced effect, and extraneous prayer."
http://www.ncbi.nlm.nih.gov/entrez/...ve&db=PubMed&list_uids=98042999&dopt=Abstract

5) Aviles JM, Whelan SE, Hernke DA, Williams BA, Kenny KE, O'Fallon WM, Kopecky SL. Intercessory prayer and cardiovascular disease progression in a coronary care unit population: a randomized controlled trial. Mayo Clin Proc 2001;76:1192-8. PMID 11761499.

Another joke?
From Consumer Health Digest (December 17th, 2001)
Intercessory prayer flunks another test. Mayo Clinic researchers have found no significant effect of intercessory prayer (prayer by one or more persons on behalf of another) on the medical outcomes of more than 750 patients who were followed for 6 months after discharge from the hospital coronary care unit. The patients were randomized within 24 hours of discharge into a prayed-for group and a control group. The prayer involved at least one session per week for 26 weeks by five randomly assigned individual or group intercessors. [Aviles JM and others. Intercessory prayer and cardiovascular disease progression in a coronary care unit population: A randomized controlled trial. Mayo Clinic Proceedings 76:1192-1198, 2001] http://www.mayo.edu/proceedings/2001/dec/7612a1.pdf
http://www.valleyskeptic.com/pray.htm

~Raithere

(edit: in the future, I will remember to read properly.)
 
cool skill said:
Atheists have yet to prove that God/Jesus does not exist.

Cool skill has yet to prove the Flying Spaghetti Monster does not exist.
 
cool skill said:
Atheists have yet to prove that God/Jesus does not exist.
Actually, I know Jesus exists. He was mowing my lawn on Saturday.

~Raithere
 
Atheists have yet to prove that God/Jesus does not exist.
Why bother? Zoologists have yet to prove Bigfoot doesn't exist.

Actually, I know Jesus exists. He was mowing my lawn on Saturday.
I was wondering why his brother Juan showed up instead of him the other day. I was afraid he got deported.
 
Raithere said:
Sorry, but yes it is. There are so many problems with these studies that they are pretty much garbage to begin with, and even in those that return some result, the effect is so marginal (and often not the objective when the study began) that they are worthless.

Not so Raithere. The results are significant in 3 of the studies (not marginal as you suggest). Of course skepdic.com and www.skeptics.com would be critical of results showing an effect of DH (distant healing), just as 'Noetic Sciences Review' is going to be supportive. However, their criticism of the Sicher and Targ experiment seems to be that the significant results were not where the researchers expected from their initial study, i.e. mortality, but rather in symptoms relating to the progress of the disease. The Leibovici experiment seems to be criticised only on the grounds that there is no known mechanism, even by Bishop and Stenger, who have clearly set out to be critical of the findings.

So, the results are not clear cut (unless you believe everything on skepdic.com or www.skeptics.com). While I'm uncomfortable about many of the difficulties of researching something like "the effectiveness of prayer", these results seem to indicate a possible correlation and that more research is needed. To dismiss them at the current time is premature, and merely indicative of science by presumption.

4) O'Laoire S. An experimental study of the effects of distant, intercessory prayer on self-esteem, anxiety, and depression. Altern Ther Health Med 1997;3:38-53. PMID 9375429.

This is a joke... according to this very study: "Improvement on all 11 measures was significantly related to subjects' conviction concerning whether they had been assigned to a control or an experimental group. Possible explanations include the placebo/faith effect, the time displaced effect, and extraneous prayer."

5) Aviles JM, Whelan SE, Hernke DA, Williams BA, Kenny KE, O'Fallon WM, Kopecky SL. Intercessory prayer and cardiovascular disease progression in a coronary care unit population: a randomized controlled trial. Mayo Clin Proc 2001;76:1192-8. PMID 11761499. Another joke?

~Raithere
(edit: in the future, I will remember to read properly.)

Errr.... as I think you realised in your edit, and as I stated in my last post, the last 2 studies did not demonstrate a correlation with distant healing (at least not one that could be distinguished from a placebo effect). One strives to be objective - the overall results are very mixed!
 
Diogenes' Dog said:
Not so Raithere. The results are significant in 3 of the studies (not marginal as you suggest). Of course skepdic.com and www.skeptics.com would be critical of results showing an effect of DH (distant healing), just as 'Noetic Sciences Review' is going to be supportive. However, their criticism of the Sicher and Targ experiment seems to be that the significant results were not where the researchers expected from their initial study, i.e. mortality, but rather in symptoms relating to the progress of the disease. The Leibovici experiment seems to be criticised only on the grounds that there is no known mechanism, even by Bishop and Stenger, who have clearly set out to be critical of the findings.

So, the results are not clear cut (unless you believe everything on skepdic.com or www.skeptics.com). While I'm uncomfortable about many of the difficulties of researching something like "the effectiveness of prayer", these results seem to indicate a possible correlation and that more research is needed. To dismiss them at the current time is premature, and merely indicative of science by presumption.

No, the studies really are pretty much worthless at this point. This doesn't mean that people who are interested shouldn't continue to pursue their research, merely that the studies thus far aren't indicative of any positive effect.

For instance in the Sicher and Targ study there are two main problems:
First, the sample population was far too small. Chance factors could swing the outcome in either direction.
Second, what they published was not what they were looking for. You cannot set up a study to research one thing and, when the study fails, change the goals and go back through your data cherry-picking your targets. This isn't even science.
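
To make the multiple-endpoints worry concrete, here is a rough Monte Carlo sketch (my own illustration in standard-library Python; the 20 patients per arm, 10 endpoints, and all other numbers are hypothetical, and there is no real effect in the simulated data) of how often at least one endpoint comes out "significant" when you are free to scan them all after the fact:

```python
# Rough Monte Carlo sketch of the "small sample + many endpoints" problem.
# Hypothetical numbers: 20 patients per arm, 10 independent endpoints, and
# NO real treatment effect in the simulated data.
import random, math

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(z / math.sqrt(2))

random.seed(1)
trials, endpoints, n_per_arm = 2000, 10, 20
hits = 0
for _ in range(trials):
    p_values = []
    for _ in range(endpoints):
        # Both arms are drawn from the SAME distribution: no effect exists.
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        p_values.append(z_test_p(treated, control))
    if min(p_values) < 0.05:      # analyst reports the "best" endpoint
        hits += 1
print(f"Null trials with at least one p<0.05 endpoint: {hits / trials:.0%}")
```

With ten independent endpoints and no effect at all, roughly four out of ten such trials will still hand the analyst at least one "significant" result to report.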

Additionally, and this is a factor in all such studies, how do you eliminate the variables for your control group? That is, if we take 1000 patients and set it up so that someone prays for the first group of 500 and not the other, how do you know whether other people outside the study are praying for them or not? Most people are religious. Most religious people pray for themselves, their friends, and their families when they are ill.

So how can you know if the 500 you are not praying for don't have 5 other people praying for them? Do additional prayers create a stronger effect? If people are praying for the subject to different gods, do they cancel each other out? What if someone prays for the ill health of the subject? What about people who just think about the subject and hope they get better?

Finally, even if we assume that some of these results are real, how do we factor out explanations other than God and prayer? How do we tell whether God is the causative agent or if it's the Devil? Or maybe the person praying has ESP and there is no God. Or maybe aliens are listening in to our thoughts with a brain scanner and affecting the outcome via remote treatment of the patients.

Unable to filter these things out, the very most you could state from such a study is that you found a correlation. Thus far, even assuming the studies are valid, this effect is minuscule. Certainly nothing to get excited over, and I sure as fuck would not approve taking grant monies away from real research.

~Raithere
 
Raithere said:
No, the studies really are pretty much worthless at this point....
For instance in the Sicher and Targ study there are two main problems:
First, the sample population was far too small. Chance factors could swing the outcome in either direction.

Not so Raithere! The significance level (the P value) already takes sample size into account. The experiment used 40 patients, and showed P=0.01 to P=0.04 for various factors e.g. hospitalisation. Obviously, had Targ lived longer, a larger study would have been conducted. However, the results are significant.

Raithere said:
Second, what they published was not what they were looking for. You cannot setup a study to research one thing and when the study fails change the goals and go back through your data cherry-picking your targets. This isn't even science.

This has been hugely exaggerated by known atheists like Stenger. The researchers started out to measure mortality, however, it is well within the 'rules' of research that as there were very few deaths, they should look at the next best set of indicators (which they had measured) i.e. symptoms of the progress of the disease and episodes requiring hospitalisation. This is not cherry-picking, and is not sufficient reason for dismissing their results.

Raithere said:
Additionally, and this is a factor in all such studies, how do you eliminate the variables for your control group? That is, if we take 1000 patients and set up so that you have someone pray for the first group of 500 and not the other, how do you know whether other people outside the study are praying for them or not. Most people are religious. Most religious people pray for themselves, their friends, and their families when they are ill.

Here I would agree with you. I too feel uncomfortable with the uncontrollability of such research. However, the factors you mention are more likely to lead to false-negative than to false-positive results.
 
Diogenes' Dog said:
Not so Raithere! The significance level (the P value) already takes sample size into account. The experiment used 40 patients, and showed P=0.01 to P=0.04 for various factors e.g. hospitalisation. Obviously, had Targ lived longer, a larger study would have been conducted. However, the results are significant.
And if I flip a coin 40 times and get 22 heads that's statistically significant too... but not unlikely.
And it's hardly enough to support a claim that I can mentally manipulate the results of a coin flip.

Further, if I flip 40 coins with the assertion that I can mentally make them stand on edge and none do, I can't then go looking to see how many heads vs tails there are or how many rolled off the table, or how many bounced three times instead of two and claim any deviation I can find as a success.

This has been hugely exaggerated by known atheists like Stenger. The researchers started out to measure mortality, however, it is well within the 'rules' of research that as there were very few deaths, they should look at the next best set of indicators (which they had measured) i.e. symptoms of the progress of the disease and episodes requiring hospitalisation. This is not cherry-picking, and is not sufficient reason for dismissing their results.
You are wrong, and yes it is. You cannot un-blind the data and just start searching for something, anything, that might justify the hypothesis.

"Citing for his information the project's biostatistician and co-author Dan Moore, physicist Mark Comings (Targ's husband, whom she married just two months before her death), and commentary from senior author Fred Sicher, Bronson reports that "her study had been unblinded and then 'reblinded' to scour for data that confirmed the thesis-and the Western Journal of Medicine did not know this fact. . . ." The data were fudged-seriously fudged-and this is, as we will see, a kind word for what she and her team did. The original aim was to measure mortality, which failed badly because of the anti-AIDS drugs that had become available. So the data were unblinded and sifted several times to look for positive results. Bronson reports that Targ (encouraged by her father, alleged psychic Russell Targ), and Sicher, a strong believer in distant healing, ordered Moore to search and re-search the data to find results that seemed statistically significant. The account of how this was done reveals shameful violations of scientific procedure (Bronson 2002, 222).
http://www.csicop.org/sb/2003-09/magical-death.html

However, the factors you mention are more likely to lead to false negative, than to false positive results.
Depending upon what really did work and what didn't you might see anything at all... a positive result is not more likely.

The point however, is not that you might get a false positive. The point is that you have no idea what you're measuring.

~Raithere
 
Raithere said:
And if I flip a coin 40 times and get 22 heads that's statistically significant too... but not unlikely. And it's hardly enough to support a claim that I can mentally manipulate the results of a coin flip.
The statistical significance of getting 22 heads on 40 flips is only around P=0.32 (one-tailed) - insignificant, and far removed from the value needed to 'support a claim' and indicate a statistically significant result, i.e. P<0.05!
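
For reference, the exact binomial tail is easy to check; here is a standard-library sketch using only the 22-of-40 figure from the example above (the 26-of-40 line is just my illustration of roughly where the conventional 0.05 cut-off falls):

```python
# Exact binomial tail for the coin-flip example above (standard library only).
from math import comb

def tail_p(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(>=22 heads in 40 flips) = {tail_p(22, 40):.3f}")   # about 0.32, nowhere near 0.05
print(f"P(>=26 heads in 40 flips) = {tail_p(26, 40):.3f}")   # about 0.04, roughly the usual cut-off
```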

Raithere said:
You are wrong and yes it is. You cannot un-blind the data and just start searching for something, anything, that might justify hypothesis.
I don't believe that was done. The original paper gives a comprehensive list of the measured variables, and a clear statement of the protocol used which includes baseline measurements taken at enrollment, 10 weeks, 22-24 weeks and 6 months.

The quotes you give are originally from the journalist Po Bronson's article "A Prayer Before Dying" in Wired (Dec 2002), in which he alleges:
Targ asked him to crunch the numbers on the secondary scores - one a measure of HIV physical symptoms, the other a measure of quality of life. These came out inconclusive; the treatment group didn't score better than the control. Not what they wanted to find. In dismay, Targ called her father. He calmed her down, told her to keep looking. She had Moore run the mood state scores. These came out worse - the treatment group was in more psychological stress than the control group.
Now, if you go back to the original Sicher & Targ paper, this is clearly not the case...
Treated subjects also showed significantly improved mood compared with controls (Profile of Mood States score -26 versus 14, P= 0.02).

Similarly, where Bronson states that Moore was advised after the trial period about measuring opportunistic infections....

It defined the 23 illnesses associated with AIDS. He told Moore they ought to have been measuring the occurrence of these illnesses all along. Moore took this list to Targ and Sicher. There was only one problem. They hadn't collected this data.

This is contradicted by the Sicher and Targ paper which clearly states that patients were assigned to the treatment or control groups on the basis of their BHS infection score... something that could surely only be done at the start of the trial!?

To control for the variation in severity and prognosis of different AIDS-related illnesses, all illnesses were scored according to the Boston Health Study (BHS) Opportunistic Disease Score, which includes both AIDS-defining and secondary AIDS-related diseases. [...]
These three variables were used to form matched subject pairs. First, a normalized z score was computed for each subject for each variable by subtracting the mean for all subjects and dividing the result by the standard deviation for all subjects. Next, all pairwise sums-of-squared differences in z scores between subjects (over the three variables) were computed. For each subject, an average difference from all the other subjects was calculated. Starting with the subject with the largest average difference, the closest match was found. The two matched subjects were eliminated from the list and the procedure was iterated until all 40 subjects were paired. A computer-generated binary random number was then used to randomly assign one member of each pair to treatment and one to control.
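
For what it's worth, the matching procedure described in the passage just above is straightforward to sketch in code. The data, function, and variable names below are mine and the subject values are made up; only the algorithm (z-scoring the three baseline variables, greedy pairing by smallest sum-of-squared z differences starting from the least typical subject, then a random assignment within each pair) follows the paper's description:

```python
# Sketch of the pair-matching procedure described in the quoted passage.
# Subject data are made up; only the algorithm follows the description.
import random
import statistics as stats

def match_and_assign(subjects):
    """subjects: list of (age, cd4_count, illness_history_score) tuples."""
    n_vars = len(subjects[0])
    means = [stats.mean(s[v] for s in subjects) for v in range(n_vars)]
    sds = [stats.stdev(s[v] for s in subjects) for v in range(n_vars)]
    z = [[(s[v] - means[v]) / sds[v] for v in range(n_vars)] for s in subjects]

    def dist(i, j):   # pairwise sum-of-squared differences in z scores
        return sum((z[i][v] - z[j][v]) ** 2 for v in range(n_vars))

    unmatched, pairs = list(range(len(subjects))), []
    while unmatched:
        # subject with the largest average difference from the others...
        avg = {i: stats.mean(dist(i, j) for j in unmatched if j != i) for i in unmatched}
        worst = max(avg, key=avg.get)
        # ...is paired with its closest match
        partner = min((j for j in unmatched if j != worst), key=lambda j: dist(worst, j))
        pairs.append((worst, partner))
        unmatched.remove(worst)
        unmatched.remove(partner)

    treatment, control = [], []
    for a, b in pairs:   # random coin flip decides which member is treated
        if random.random() < 0.5:
            a, b = b, a
        treatment.append(a)
        control.append(b)
    return treatment, control

random.seed(0)
demo = [(random.randint(25, 55), random.randint(50, 500), random.randint(0, 10))
        for _ in range(40)]
treated, controls = match_and_assign(demo)
print(len(treated), "treated,", len(controls), "controls")
```
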
So, I have good reason to doubt Bronson's allegations, given that what he says that can be checked seems to be inaccurate. It makes for a good story though! :rolleyes:

It was unfortunate for this trial (fortunate for the patients) that triple-therapy came on the scene and significantly reduced HIV mortality just after the trial started. However, even if you discount the ADDs (AIDS-defining diseases), the significantly reduced hospitalisation rate, lower length of stay, improved mood score and reduced doctor call-outs in the treatment group are significant and would indicate the need for further trials!

Raithere said:
Depending upon what really did work and what didn't you might see anything at all... a positive result is not more likely. The point however, is not that you might get a false positive.
False-negative results (from type II errors) are more likely to occur, i.e. the effect of controls being prayed for by relatives would disguise any actual effect resulting from the 'experimental' prayers. This could be significant in, e.g., the Mayo Clinic trial, where no significant results were observed.
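
A toy simulation makes the dilution point visible. Everything below is hypothetical (a true effect of half a standard deviation, 100 patients per arm, and varying amounts of "background" prayer reaching the control group); the only claim is the direction of the result:

```python
# Toy simulation of the false-negative worry: a real effect exists, but some
# fraction of the control group is effectively "treated" anyway (prayed for
# by relatives), so the measured difference shrinks and power drops.
# All numbers are hypothetical.
import random, math

def power(n_per_arm, effect, background_rate, trials=2000, alpha=0.05):
    """Fraction of simulated trials reaching p < alpha (two-sided z test)."""
    hits = 0
    for _ in range(trials):
        treated = [random.gauss(effect, 1) for _ in range(n_per_arm)]
        control = [random.gauss(effect if random.random() < background_rate else 0, 1)
                   for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
        se = math.sqrt(2 / n_per_arm)             # unit variance assumed known, for simplicity
        p = math.erfc(abs(diff) / se / math.sqrt(2))
        hits += p < alpha
    return hits / trials

random.seed(2)
for bg in (0.0, 0.5, 0.9):
    print(f"background prayer reaching controls: {bg:.0%}  ->  power ~ {power(100, 0.5, bg):.0%}")
```

The more the control group is quietly 'treated' anyway, the less often a real effect shows up as significant, which is exactly a type II error.
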
Raithere said:
The point is that you have no idea what you're measuring.
I agree. There is a huge difficulty in selecting what variables to measure, as we don't know the mode of action, or the conditions by which prayer might 'work'!
 
(Q) said:
Cool skill has yet to prove the Flying Spaghetti Monster does not exist.
Unless the Flying Spaghetti Monster is God, creator of the universe, I don't have to prove scrap.
 
SkinWalker said:
Why bother? Zoologists have yet to prove Bigfoot doesn't exist.

I was wondering why his brother Juan showed up instead of him the other day. I was afraid he got deported.
What a surprise. Another blind-faith fanatic is trying to compare apples and orangutans in his pitiful attempt to disprove that Jesus, the creator of the universe, exists.
 
cool skill said:
Unless the Flying Spaghetti Monster is God, creator of the universe, I don't have to prove scrap.

Funny, up to now, I've never seen anyone actually miss their own point.
 
I once saw a pitiful fanatic prove that Vampires were not real… :confused:
Pathetic.

Imagine the audacity of someone not believing in what you cannot disprove.

We all know Jesus created the universe, so why are we in denial?
Are we afraid of His retribution?

Cool skill isn’t retarded, he just thinks that way.
His opinion is just as good as anyone else's.
Let us listen to him think......zzzzzzzzzzzzzzzzzzzzz

Thanks, Sciforums!
 
Diogenes' Dog said:
The statistical significance of getting 22 heads on 40 flips, would be somewhere around P>0.9 (insignificant) which is far removed from the value needed to 'support a claim' and indicate a statistically significant result i.e. P<0.05!
Okay, so let's say 25 or 30. The point is that in a small group a much smaller variation can swing the result. Taking 0.05 as the significance point for alpha is somewhat arbitrary anyway, and is hardly what I would consider strong evidence, but as it is conventional for this kind of study let's run with it.

Diogenes' Dog said:
I don't believe that was done. The original paper gives a comprehensive list of the measured variables, and a clear statement of the protocol used which includes baseline measurements taken at enrollment, 10 weeks, 22-24 weeks and 6 months.
Let's take a look. I'm using a copy of the report from this site:

http://www.pubmedcentral.gov/picrender.fcgi?artid=1305403&blobtype=pdf

"As a result, in the second larger study (reported here in full) a pairmatched design was used to control for factors shown to be associated with poorer prognosis in AIDS,5 specifically age, T cell count, and illness history."

"Additionally, an important intervening medical factor changed the endpoint in the study design. The pilot study was conducted before the introduction of "triple-drug therapy" (simultaneous use of a protease inhibitor and at least two antiretroviral drugs), which has been shown to have a significant effect on mortality.6 For the replication study (July 1996 through January 1997, shortly after widespread introduction of triple-drug therapy in San Francisco), differences in mortality were not expected and different endpoints were used in the study design."

"6. Hammer SM, Squires KE, Hughes MD, et al. A controlled trial of two nucleoside analogues plus indinivar in persons with human immunodeficiency virus infection and CD4 cell counts of 200 per cubic millimeter or less. N Engl J Med 1997; 337:725-733"

So their study concluded in January of 1997, and the study they reference in regard to changing their endpoint was published in September of 1997 - clearly after their study was concluded, which would seem to support Moore's account as reported by Bronson.

I also find it problematic that they assumed an effect only upon survival rate when the earliest studies of triple-drug therapy indicated primarily an effect upon disease progression. Even assuming a decrease in mortality (or going from the earliest unpublished evidence of one), one would necessarily have to account for the effect of the new therapies upon associated illnesses and control for the variety of therapies and methods being used (no standardized protocol or best approach had been established).

http://www.centerforaids.org/rita/0805/overview.htm (General Info)

In fact, they bring it up and immediately dismiss it:

"First, differences between the group outcomes might be attributed to baseline medical or treatment differences. This possibility was not supported by univariate comparison of baseline AIDS-related variables, as shown in Table 1, where there were no statistically significant differences between the groups."
However, their own data fails to account for any difference in treatment protocols; it merely indicates whether or not the patients were on triple-drug therapy and whether it was throughout the study or for more than 2 months. It says nothing about which combinations of drugs, when they were first administered, dosage levels, etc. The failure to account for such variables alone casts a broad shadow over any conclusion one might draw, quite apart from the problems with the methodology.



Diogenes' Dog said:
Now, if you go back to the original Sicher & Targ paper, this is clearly not the case...
Actually it is, once you account for the difference in the baseline POMS scores. They even say so:

"When baseline POMS was used as a covariate to adjust the POMS change scores, the difference in POMS change scores switched from statistical significance in favor of the treated to significance in favor of the controls."
Admittedly, Bronson could have been a bit clearer about it... but after all, it is Wired Magazine. But this does bring me back to my first point: adjust for a baseline discrepancy and "Whoops!" - the results are reversed. Now what if we account for all of those "near significant" variables they shoved aside?
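
Since the baseline-adjustment point is easy to misread, here is a small sketch of the mechanism with made-up numbers (nothing here is from the trial itself): a group that starts with higher distress scores tends to show a bigger raw improvement purely through regression to the mean, and adjusting for baseline shrinks, or can even reverse, that apparent benefit.

```python
# Made-up illustration of the baseline-adjustment issue: the "treated" group
# starts with higher distress scores, there is NO true treatment effect, and
# follow-up scores only partly track baseline (regression to the mean).
import random

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

random.seed(3)
n, rho = 40, 0.5
base_t = [random.gauss(15, 10) for _ in range(n)]   # more distressed at baseline
base_c = [random.gauss(0, 10) for _ in range(n)]
post_t = [rho * b + random.gauss(0, 8) for b in base_t]
post_c = [rho * b + random.gauss(0, 8) for b in base_c]

# 1. Raw change scores: the high-baseline group appears to "improve" more.
print("raw change:", round(mean(p - b for p, b in zip(post_t, base_t)), 1),
      "vs", round(mean(p - b for p, b in zip(post_c, base_c)), 1))

# 2. Adjust for baseline: regress follow-up on baseline (pooled) and compare residuals.
base_all, post_all = base_t + base_c, post_t + post_c
mb, mp = mean(base_all), mean(post_all)
slope = (sum((b - mb) * (p - mp) for b, p in zip(base_all, post_all))
         / sum((b - mb) ** 2 for b in base_all))

def adjusted(base, post):
    return mean(p - (mp + slope * (b - mb)) for b, p in zip(base, post))

# The apparent advantage largely disappears (and, with noise, can even flip sign).
print("baseline-adjusted:", round(adjusted(base_t, post_t), 1),
      "vs", round(adjusted(base_c, post_c), 1))
```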

Diogenes' Dog said:
This is contradicted by the Sicher and Targ paper which clearly states that patients were assigned to the treatment or control groups on the basis of their BHS infection score... something that could surely only be done at the start of the trial!?
It's already being recorded... they could simply pull the data from the medical charts. In fact, that is exactly what they said they did:

"New ADDs were counted as "ADDs acquired" only if blind chart review revealed no prior diagnosis of the condition; the only exception to this rule was Kaposi's sarcoma."

More interesting, I find, is that they ranked illnesses according to the BHS, which is strange considering the BHS is used to predict survival rates which they supposedly were no longer looking at.

Diogenes' Dog said:
So, I have good reason to doubt Bronson's allegations, given that what he says that can be checked seems to be inaccurate. It makes for a good story though!
Seems to me Bronson is pretty much on track so far.

False-negative results (from type II errors) are more likely to occur, i.e. the effect of controls being prayed for by relatives would disguise any actual effect resulting from the 'experimental' prayers. This could be significant in, e.g., the Mayo Clinic trial, where no significant results were observed.
If so, then yes, but is that an assumption we can make? Further, as no causative pathway has even been defined, even if we assume the veracity of their results, how can one lay claim to anything beyond a bare correlation? Interesting? Possibly... but worth another $1.5 million?

Particularly when the effect could be detrimental to the patients:

"Herbert Benson of Harvard and a brigade of faithful collaborators assigned three Christian prayer groups to pray for 1800 patients undergoing coronary artery bypass graft (CABG) surgery in six medical centers throughout the United States.
...
It found that patients undergoing CABG surgery did no better when prayed for by strangers at a distance to them (intercessory prayer) than those who received no prayers. But 59% of those patients who were told they were definitely being prayed for developed complications, compared with 52% of those who had been told it was just a possibility, a statistically significant, if theologically disappointing, result. Benson et al. came to the objective conclusion that "Intercessory prayer itself had no effect on complication-free recovery from CABG, but certainty of receiving intercessory prayer was associated with a higher incidence of complications.""

http://www.fasebj.org/cgi/content/full/20/9/1278
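
A back-of-the-envelope check of those figures: the arm sizes are not given in the quote, so the roughly 600 patients per group used below is my assumption from the reported 1800-patient total split over three groups.

```python
# Back-of-the-envelope check of the quoted 59% vs 52% complication rates.
# Arm sizes are NOT given in the quote; ~600 per group is my assumption from
# the ~1800-patient total split over three groups.
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a difference in proportions (pooled z test)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(x1 / n1 - x2 / n2) / se
    return math.erfc(z / math.sqrt(2))

n = 600
print(f"p = {two_proportion_p(round(0.59 * n), n, round(0.52 * n), n):.3f}")   # roughly 0.015
```

On those assumed numbers the 59% vs 52% difference does come out around p = 0.015, consistent with the "statistically significant" wording in the quote.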

I guess whatever you do, don't tell anyone you are praying for them.

~Raithere
 
Raithere said:
Okay, so let's say 25 or 30. The point is that in a small group a much smaller variation can swing the result. Taking 0.05 as the significance point for alpha is somewhat arbitrary anyway, and is hardly what I would consider strong evidence, but as it is conventional for this kind of study let's run with it.

As I said earlier, sample size is factored in: for a given proportional deviation, the test statistic grows roughly with the square root of the sample size, so the same deviation becomes more significant in a larger sample. To get a similar P (~0.05) takes about 26 heads from 40 coins, but 8 heads from only 10 coins. A significance of P=0.05 seems to be the accepted cut-off for significance in these sorts of trials.
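
To put numbers on that, here is the same exact-binomial check as before (standard library only; the 26-of-40 and 8-of-10 figures are the ones used above):

```python
# Exact binomial tails for the two sample sizes (standard library only).
from math import comb

def tail_p(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"40 coins, >=26 heads (65%): P = {tail_p(26, 40):.3f}")   # about 0.04
print(f"10 coins, >= 8 heads (80%): P = {tail_p(8, 10):.3f}")    # about 0.05
```

The smaller sample needs a much larger proportional deviation to reach the same P.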

Raithere said:
So their study concluded in January of 1997, and the study they reference in regard to changing their endpoint was published in September of 1997 - clearly after their study was concluded, which would seem to support Moore's account as reported by Bronson.
Not really. They didn’t have to wait until the other paper was published to know that only 1 patient in their study had died. It would have made no sense to plan to rely on mortality rates alone as a measure of effectiveness.

Raithere said:
I also find it problematic that they assumed an effect only upon survival rate when the earliest studies of triple-drug therapy indicated primarily an effect upon disease progression. […] It says nothing about which combinations of drugs, when they were first administered, dosage levels, etc. The failure to account for such variables alone casts a broad shadow over any conclusion one might draw, quite apart from the problems with the methodology.

The use of triple therapy during the trial is undoubtedly a complicating factor. However, it is noteworthy that the control group scored consistently higher for receipt of this therapy than the ‘treatment’ group. The CD4+ counts for both groups show no significant differences – both groups remain low. (They acknowledge that viral load tests would, with hindsight, be a more accurate measure of triple therapy effectiveness.) One can conclude that triple therapy almost certainly did not bias the results in favour of the ‘DH treatment’ group.

Raithere said:
Actually it is, once you account for the difference in the baseline POMS scores. They even say so:
They are understandably cautious about the POMS results. The initial analysis shows a significant improvement in POMS score in the treatment group over the control. The covariate analysis is a further attempt to control for the “effect of increased hope or expectation due to their participation in an intervention research study” and clearly produces strange results. They openly state all this in the discussion:

Detailed analysis of baseline variables differing at P < 0.20 did find that higher baseline POMS scores were associated with greater improvement in POMS scores over the course of the study. By chance, patients in the treatment group showed more psychological distress at baseline, so their improved mood over the study interval may represent simply an effect of increased hope or expectation due to their participation in an intervention research study. The additional finding that adjusting for differences in baseline POMS caused a change in the direction of the beneficial effect is difficult to understand and is likely due to chance.

However, Bronson’s reportage describes an atmosphere of near hysteria breaking out as the researchers allegedly find NO correlation in either the symptoms or the POMS scores.

Targ asked him to crunch the numbers on the secondary scores - one a measure of HIV physical symptoms, the other a measure of quality of life. These came out inconclusive; the treatment group didn't score better than the control. Not what they wanted to find. In dismay, Targ called her father. He calmed her down, told her to keep looking. She had Moore run the mood state scores. These came out worse - the treatment group was in more psychological stress than the control group.

This is clearly contradicted by looking at the actual results which show significant BHS and POMS improvements in the treatment vs. control groups. I therefore think his account (allegedly based on what Moore told him) owes much to Bronson’s journalistic imagination and desire for a scoop, and his allegations would certainly not stand up as the basis for a case in a court of law!

Raithere said:
More interesting, I find, is that they ranked illnesses according to the BHS, which is strange considering the BHS is used to predict survival rates which they supposedly were no longer looking at.
Not so strange when you consider that it was the best indicator of overall severity of AIDS – and therefore an ideal marker of the progress of the disease.

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1380834

Raithere said:
Seems to me Bronson is pretty much on track so far.
I don’t think so. There are several reasons I am dubious of Bronson’s account.

1) Why would Moore and Targ’s husband (Comings) tell a reporter (Bronson) how the recently deceased Targ, along with Sicher and Moore, fiddled their research results?
2) There seems to be no corroboration by Moore, Comings or Sicher relating these accounts by any other reporter, or in any other media.
3) Bronson’s statement that “Moore seemed unaware how explosive his version of the story was” seems unlikely given Bronson’s allegations and Moore’s background as a research statistician.
4) Bronson’s description of Targ’s hysterical behaviour is not consistent with his earlier description of her cool “refusal to speculate” and her position of “Use the scientific method to find out if an effect exists before trying to analyse how it works”.
5) Bronson’s description of the Distance Healers as “The usual wackos” indicates a profound scepticism, not to say contempt, for this research.
6) A Bronson scoop debunking “the most scientifically rigorous” prayer research would be widely publicised and would represent a very good career move for him personally.
7) His accusations and reporting of the conversations alleged to have taken place don’t sit easily with the details of what is in the paper. They certainly would not stand up as the basis for a legal case.

Raithere said:
Further, as no causative pathway has even been defined even if we assume the veracity of their results how can one make claim to anything beyond a bare correlation? Interesting? Possibly... but worth another $1.5 million? Particularly when the effect could be detrimental to the patients.

I return to my original point: Research results on prayer currently show very mixed results. I echo the conclusions reached by a number of recent meta-review papers including the following from http://www.annals.org/cgi/content/abstract/132/11/903

The methodologic limitations of several studies make it difficult to draw definitive conclusions about the efficacy of distant healing. However, given that approximately 57% of trials showed a positive treatment effect, the evidence thus far merits further study.

I don't know whether prayer (distant healing) of this kind works or is amenable to scientific scrutiny - certainly there are a huge number of difficulties and unknowns that make it very different from a drug trial. However, I feel we should pursue such research with open-minded skepticism.
 