The ScienceInsider blog of Science magazine reports:
When the U.S. Army and its collaborators in Thailand announced at press conferences on 24 September that a large clinical trial of an AIDS vaccine had lowered the rate of new HIV infections by about one-third, researchers were surprised and encouraged. Although it was only a modest reduction, it was the first positive result from any AIDS vaccine trial.
Now some researchers who have seen more of the data in confidential briefings are complaining that a fuller analysis undermines even cautious claims of success, and they are raising questions about the way the results were announced.
The press conference and press releases discussed an analysis that included all 16,000 people who participated in the trial, except for seven who were infected before receiving any doses of the two vaccines that were used in combination. Seventy-four people in the placebo arm of the study became infected with HIV, while the similarly sized vaccinated group had only 51 infections, a 31.2% efficacy. The analysis indicated that there was about a 96% level of confidence that the effect was real and not due to chance, just above the 95% cutoff that is widely used as a measure of statistical significance.
In the private briefings, researchers learned that a second analysis, which is usually performed in vaccine studies and was part of the Thai study’s design, also found that vaccine recipients had fewer infections, but the reduction was not statistically significant and the level of efficacy was slightly lower. This analysis eliminated people in both groups who did not rigorously follow the protocols. “Anything that really works, you’ll have enough robustness in results to be significant with both analyses,” says Douglas Richman, an AIDS researcher at the University of California, San Diego, a longtime critic of the study. Richman did not discuss the specific results with Science.
“The press conference was not a scholarly, rigorously honest presentation,” said one leading HIV/AIDS investigator, who like others asked that his name not be used. “It doesn’t meet the standards that have been set for other trials, and it doesn’t fully present the borderline results. It’s wrong.” Two biostatisticians who specialize in HIV prevention trials and have not seen the data said that the results from all participants are the more important data, but they were puzzled that the press conference did not include the analysis that excluded those who didn’t follow the protocols. “I think if people saw [the two analyses] diverging in a vaccine study, they’d have a lot of questions,” says David Glidden, a biostatistician at the University of California, San Francisco.
Comments (2)
Paul Alper
October 10, 2009 at 10:02 am
The WSJ of today gives an excellent analysis of the situation but leaves out some important numbers. I calculated the p-value for the original disclosure, and here are the Minitab results:
MTB > PTwo 8197 51 8198 74.
Test and CI for Two Proportions
Sample X N Sample p
1 51 8197 0.006222
2 74 8198 0.009027
Difference = p (1) - p (2)
Estimate for difference: -0.00280480
95% CI for difference: (-0.00546736, -0.000142249)
Test for difference = 0 (vs not = 0): Z = -2.06 P-Value = 0.039
Fisher’s exact test: P-Value = 0.048
The “true” p-value is closer to 5%, not 4% as indicated in the article.
Moreover, further analysis is hindered because I had to guess what to put in for the “Per Protocol” analysis (last column in the WSJ), where I split the 86 infections into 36 and 50 (to comply with the “efficacy” stated). Here is what I calculated:
Test and CI for Two Proportions
Sample X N Sample p
1 36 8197 0.004392
2 50 8198 0.006099
Difference = p (1) - p (2)
Estimate for difference: -0.00170720
95% CI for difference: (-0.00391845, 0.000504059)
Test for difference = 0 (vs not = 0): Z = -1.51 P-Value = 0.130
Fisher’s exact test: P-Value = 0.159
which is what the WSJ has.
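[Editor's note: the Minitab runs above can be reproduced without Minitab. The Python sketch below uses the same inputs the commenter did, including his guessed 36/50 split for the per-protocol analysis tested against the full enrollment counts rather than the (unreported) smaller per-protocol denominators. A pooled-variance z-test is used here; Minitab's default standard error differs slightly, but the statistics agree to two decimal places for these counts.]

```python
from math import sqrt
from scipy.stats import norm, fisher_exact

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test (pooled SE) and Fisher's exact test for p1 - p2 = 0."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_norm = 2 * norm.sf(abs(z))                  # two-sided normal-approximation p-value
    # Fisher's exact test on the 2x2 table of infected vs. not infected
    _, p_fisher = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
    return z, p_norm, p_fisher

# Modified intention-to-treat counts: 51/8197 vaccine vs. 74/8198 placebo
print(two_proportion_test(51, 8197, 74, 8198))   # z ≈ -2.06, p ≈ 0.039, Fisher ≈ 0.048

# Commenter's guessed per-protocol split: 36 vs. 50 infections
print(two_proportion_test(36, 8197, 50, 8198))   # z ≈ -1.51, p ≈ 0.130, Fisher ≈ 0.159
```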
In summary, your suspicions are well-founded.