This news release from Proove Biosciences Inc. reports on a test designed to identify those at risk of becoming addicted to opioids so doctors can improve their prescribing practices. The release summarizes a study showing that the company’s algorithm, which combines genetic markers with lifestyle and behavior variables, accurately distinguishes between healthy patients with no history of opioid abuse and patients receiving opioid addiction treatment. The study, however, may be comparing people with opioid use disorder with the wrong control group, given that a more useful distinction would be between those who have become addicted and those who have used opioids in similar circumstances without becoming addicted. In addition, the news release neither provides information about the study’s funding source nor notes that four of the study’s six authors work for Proove.
According to the Centers for Disease Control and Prevention (CDC), opioid overdose killed more than 33,000 Americans in 2015. Almost half of those who died were using a prescription opioid, although not necessarily one that had been prescribed to them. Many of those who become addicted to prescription opioids or heroin develop those addictions after being prescribed opioid painkillers. Reducing the number of deaths caused by opioid abuse has emerged as a national health priority, although policy-makers have not agreed on how to go about it — much less how to fund an opioid abuse reduction program. An inexpensive and reliable test that could accurately predict one’s risk of becoming addicted to opioids would be a welcome development. However, this tool and others like it have been criticized for both questionable reliability and questionable marketing practices.
The news release provides no information about how much the Proove Opioid Risk (POR) system costs. Presumably, this system would be used by health care providers to determine whether it was safe to prescribe opioids to specific patients. However, such providers would have no way of knowing, based on this news release, how much of an investment this would require.
The news release states that the algorithm correctly identified patients with a history of opioid misuse nearly 97 percent of the time, rarely misclassifying the healthy controls as being at high or even moderate risk of opioid misuse. One question that remains unanswered in the release is whether the POR test would successfully distinguish people with opioid use disorder from individuals who have used opioids without becoming addicted.
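A figure like the reported 97 percent accuracy can overstate a test's real-world usefulness, because the share of positive results that are true positives depends heavily on how common the condition is in the tested population. The sketch below illustrates this with Bayes' rule; it treats the release's ~97 percent figure as both sensitivity and specificity, and the prevalence values are purely hypothetical assumptions for illustration, not numbers reported by the study.

```python
# Illustrative only: how prevalence affects positive predictive value (PPV).
# Sensitivity/specificity of 0.97 echo the release's accuracy claim; the
# prevalence figures below are hypothetical, not from the study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result is a true positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# If 5% of tested patients were truly at risk, PPV would be about 0.63:
# roughly a third of "high risk" results would be false alarms.
ppv_5pct = positive_predictive_value(0.97, 0.97, 0.05)

# At 1% prevalence, PPV drops to about 0.25: three out of four
# "high risk" results would be wrong.
ppv_1pct = positive_predictive_value(0.97, 0.97, 0.01)

print(round(ppv_5pct, 2), round(ppv_1pct, 2))
```

This is one reason the choice of comparison group matters: accuracy measured against carefully screened healthy controls may not translate into useful predictions in a general pain-clinic population.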
Potential harms from the test are not explained. The primary medical risks from the use of such a test would stem from inaccurate identification of an individual’s risk of addiction. Individuals wrongly identified as being at low risk might be less careful about their use of opioids and go on to develop an addiction; conversely, individuals wrongly identified as being at high risk might be denied opioid prescriptions from which they would benefit.
The release provides a basic outline of the study but leaves unanswered questions.
Our main concern here is whether the comparison groups included in the study represented the most relevant groups. In this case, the POR algorithm was used to distinguish between people with opioid use disorder at an addiction treatment facility and “healthy patients with no history of opioid use.” In fact, the control group was limited to individuals who did not smoke, had no personal or family history of mental illness, no pain, and no previous substance abuse history. A far more appropriate comparison would have used a control group of individuals who had previously used opioids but had not misused or developed an addiction to these drugs. There are also problems with comparing non-smokers to smokers and people in an addiction treatment program to those not receiving treatment, as well as the study’s lack of blinding.
Another concern is that while 95 percent of the opioid addiction patients were white, only 62 percent of the healthy controls were; this could significantly limit the value of the POR algorithm for non-white patients. Research shows that ethnicity is strongly associated with genetic risks, including the risk for opioid addiction. Finally, the classification of the subjects is suspect: they were diagnosed with opioid use disorder using a non-standard definition of the disorder, by a single researcher (likely one of the authors) without replication or validation, but this is not spelled out in the release or the research study.
Opioid use disorder is a very serious public health issue, and there is evidence that many of those who become addicted to opioids first use them as legitimate prescribed medicines. Thus, the news release is not guilty of disease mongering.
No information is provided about who funded the study. The news release quotes three of the study’s authors, including Dr. Ashley Brenton, who is identified as the “Associate Director of R&D for Proove.” The two other authors quoted do not appear to work for Proove. The journal article on which the news release is based also omitted information about the funding source for the study.
The news release mentions no alternatives to the use of this algorithm. Most doctors already ask patients about their substance use and abuse history and other important lifestyle and environmental factors before prescribing opioids to them. In addition, in March 2016, the CDC issued a “Guideline for Prescribing Opioids for Chronic Pain,” intended to help doctors decide which of their patients truly need opioids and when to use alternative treatments to manage patients’ pain.
Another company, Canterbury Healthcare, is marketing a similar genetic test for opioid addiction susceptibility.
The news release quotes one of the study authors as saying he has used the “technology” for six years, suggesting that it is already available. However, the news release does not provide information about any use of the POR algorithm by health professionals not affiliated with Proove or this specific study. The release also neglects to mention whether the FDA has cleared the device for marketing.
Rather than claiming novelty, the release states that the study validates previous research. The release quotes a company official: “This validation study builds on the peer-reviewed evidence supporting Proove Opioid Risk® and its components as an optimal model to predict opioid abuse risk.”
The study itself does appear to provide some missing data on the predictive value of the proprietary genetic test algorithm, but the test itself is not new.
The news release avoids using unjustifiable or sensational language. It provides a little background on the factors that contribute to substance abuse.