This Wall Street Journal article describes the results of an unpublished, non-peer-reviewed study of a gene-based test designed to help physicians prescribe antidepressants for people with moderate-to-severe depression who have failed on at least one such drug.
The story, both in the narrative and in some quotes, adds moderating context and healthy skepticism to the enthusiastic "landmark" claims in the company's news release. It is also clear about the cost ($1,500) of the test offered by Myriad Genetics. But it needed to examine more closely the fact that the test can't really tell a psychiatrist or other physician which of the dozens of antidepressants may work best; it can only flag which are most likely to cause problems and should be avoided. If that's the case, what's driving the (small) benefit? Is it that people whose treatment was guided by genetic testing experienced fewer side effects and were therefore more likely to adhere to treatment? We're not told.
Federally funded surveys of mental health in the U.S. consistently estimate that around 15 to 20 million adult Americans have moderate-to-severe depression that warrants treatment. The good news is that there are dozens of antidepressant drugs approved by the FDA for use alone or with other, non-drug therapies. The bad news is that, as the WSJ article underscores, finding the right medicine for any given patient is largely a matter of highly subjective clinical judgment and trial and error, sometimes stretching over years. Moreover, the drugs are often not cheap and may cause side effects. Thus, the application of gene-based "precision medicine" testing — in this case, testing a dozen or so genes that may help predict how classes of drugs are metabolized and used by the body and brain — is potentially big news.
The story makes clear that the test is expensive and that Medicare and some other insurance plans cover it.
Although the article provides data about the total number of people randomized to the two arms of the study, it does not go beyond what the news release offers about the percentage/relative increases in remission of symptoms and response to drugs. No absolute numbers are given, so it’s impossible to know how many of the patients in each arm actually benefited and to what extent. All it provides is the relative improvement in “response” to medication: “researchers found a 30% greater response to the medicine when the test was applied.” No data is provided for remission.
But when you look at the raw numbers for remission and compare the two groups, the results don't look as impressive: of those who didn't receive genetic testing, 10% reached remission; with the test, it was 15%. Put in absolute terms, the difference was just 5 percentage points — perspective the reader deserved.
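To make the relative-versus-absolute distinction concrete, here is a quick back-of-the-envelope calculation (a purely illustrative Python sketch, using the remission rates reported above; the rates come from the coverage, the arithmetic is ours):

```python
# Remission rates reported in the coverage
control_remission = 0.10   # no genetic testing
tested_remission = 0.15    # test-guided treatment

# Absolute difference: 5 percentage points
absolute_diff = tested_remission - control_remission

# Relative increase: the same gap framed against the control rate
relative_increase = absolute_diff / control_remission

# "Number needed to test": patients tested per one additional remission
number_needed = 1 / absolute_diff

print(f"Absolute difference: {absolute_diff:.0%}")     # 5%
print(f"Relative increase:   {relative_increase:.0%}") # 50%
print(f"Tested per extra remission: {number_needed:.0f}")  # 20
```

The same 5-point gap can be advertised as a "50% relative increase in remission," which is why stories should always report the absolute numbers alongside any relative claim.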
Also, the news release hints that at least some of the results might not have been statistically significant, and we’re curious about that and think it should have been explained in the story:
“The GeneSight-treated cohort also demonstrated higher symptom improvement which approached statistical significance (Chart 1).”
The test itself — using DNA taken from a cheek swab — carries no physical risks.
But the story needed more detail about the potential downsides. As the TIME coverage of the study explains in more detail, the various "scales" used to assess the drugs aren't organized by how effective the drugs might be based on genetics, but by how many adverse events they might cause. So a person is told to take certain drugs not because they'll be more effective than other drugs, but because they're less likely to cause problems.
Could selecting drugs based on how many problems they cause lead some people to choose drugs that aren't as effective for treating depression? That would seem to be the biggest issue in terms of harms — that the test would simply be wrong for some people.
Also, the test appears to show results for drugs beyond antidepressants, including some, such as hypnotics and benzodiazepines (Ambien and Xanax, for example), that are known to cause dependency or can be deadly when taken alone or mixed with other drugs. This is important to consider in a group of people already at higher risk of suicide.
All of this means adverse event data are vital: were adverse events indeed lower in the gene-tested group?
If adverse event data were not available from the company or the investigators at the University of Michigan who led the study, the article could have noted the fact.
The story did not do enough to establish that this research is still preliminary — it has not been published nor peer-reviewed, and everyone is relying on just a little bit of data released by the company. We should all be skeptical at this point.
No disease mongering here; moderate-to-severe depression is a real and prevalent disorder and danger. We also thought it was wise that the story included a patient narrative with unclear results: she's feeling better, but not 100%. That is a very powerful way to convey that this new tool isn't a silver bullet.
The story quotes outside experts and notes that the principal investigator is an unpaid consultant to the company that makes the test. We’re also told that the study was funded by the maker of the genetic test.
Although quantification is mostly missing, the article properly points out that treatment guided by clinical judgment alone misses the mark much of the time and is essentially educated guesswork. The article might have been strengthened further had it given information about alternatives or adjuncts to drugs, such as behavioral therapy and electroconvulsive therapy.
It’s clear from the story that the test, marketed as GeneSight, is available.
The story is pretty careful to put the findings of the study — presented at a conference in New York — in the context of overall advances in the use of pharmacogenetics and precision medicine, treatments increasingly tailored to an individual’s genetic makeup. And it notes that the Myriad Genetics test is not the only player in the gene-based medicine field. The story also includes the information that the study was “blinded” and randomized, but (happily) avoids the hyperbole of the news release which calls it a “landmark” study and the largest ever of its kind.
As noted above, the article quotes outside sources.