This news release from Proove Biosciences Inc. reports on a test designed to identify those at risk of becoming addicted to opioids so doctors can improve their prescribing practices. The release summarizes a study showing that the company’s algorithm, which combines genetic markers with lifestyle and behavior variables, accurately distinguishes between healthy patients with no history of opioid abuse and patients receiving opioid addiction treatment. The study, however, may be comparing people with opioid use disorder with the wrong control group, given that a more useful distinction would be between those who have become addicted and those who have used opioids in similar circumstances without becoming addicted. In addition, the news release fails to provide information about the study’s funding source, nor does it note that four of the study’s six authors work for Proove.
According to the Centers for Disease Control and Prevention (CDC), opioid overdose killed more than 33,000 Americans in 2015. Almost half of those who died were using a prescription opioid, although not necessarily one that had been prescribed to them. Many of those who become addicted to prescription opioids or heroin develop those addictions after being prescribed opioid painkillers. Reducing the number of deaths caused by opioid abuse has emerged as a national health priority, although policy-makers have not agreed on how to go about it, much less how to fund an opioid abuse reduction program. An inexpensive and reliable test that could accurately predict one’s risk of becoming addicted to opioids could be a welcome development. However, this tool and others like it have been criticized for both reliability and questionable marketing practices.
The news release provides no information about how much the Proove Opioid Risk (POR) system costs. Presumably, this system would be used by health care providers to determine whether it was safe to prescribe opioids to specific patients. However, such providers would have no way of knowing, based on this news release, how much of an investment this would require.
The news release states that the algorithm correctly identified patients with a history of opioid misuse nearly 97 percent of the time, rarely misclassifying the healthy controls as being at high or even moderate risk of opioid misuse. One question that remains unanswered in the release is whether the POR test would successfully distinguish people with opioid use disorder from individuals who have used opioids without becoming addicted.
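To put the 97 percent figure in perspective, the sketch below shows how accuracy, sensitivity, and specificity are computed for a two-group classifier such as the POR algorithm. The counts are hypothetical, since the release does not report the study’s underlying numbers, and it is an assumption here that the 97 percent figure refers to sensitivity (correct identification of the addiction-treatment group).

```python
# Hypothetical confusion-matrix counts for a binary risk classifier.
# These numbers are illustrative only; the news release does not report
# the study's actual true/false positive and negative counts.
true_positives  = 290  # addiction-treatment patients flagged as high/moderate risk
false_negatives = 10   # addiction-treatment patients flagged as low risk
true_negatives  = 95   # healthy controls flagged as low risk
false_positives = 5    # healthy controls flagged as high/moderate risk

# Sensitivity: share of the addiction-treatment group correctly flagged (~0.97 here).
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: share of healthy controls correctly labeled low risk (~0.95 here).
specificity = true_negatives / (true_negatives + false_positives)

# Overall accuracy across both groups.
accuracy = (true_positives + true_negatives) / (
    true_positives + false_negatives + true_negatives + false_positives
)

print(f"sensitivity: {sensitivity:.2f}")
print(f"specificity: {specificity:.2f}")
print(f"accuracy:    {accuracy:.2f}")
```

Strong numbers on this comparison say nothing about how the test would perform on the clinically relevant distinction raised above: separating people who became addicted from people who used opioids in similar circumstances without becoming addicted.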
Potential harms from the test are not explained. The primary medical risks from the use of such a test would stem from inaccurate identification of an individual’s risk of addiction. Individuals incorrectly identified as being at low risk might be less careful about their use of opioids, leading to addiction; conversely, individuals incorrectly identified as being at high risk might be denied opioids that would have benefited them.
The release provides a basic outline of the study but leaves several questions unanswered.
Our main concern here is whether the comparison groups included in the study were the most relevant ones. In this case, the POR algorithm was used to distinguish between people with opioid use disorder at an addiction treatment facility and “healthy patients with no history of opioid use.” In fact, the control group was limited to individuals who did not smoke and had no personal or family history of mental illness, no pain, and no previous substance abuse history. A far more appropriate comparison would have used a control group of individuals who had previously used opioids but had not misused or become addicted to these drugs. There are also problems with comparing non-smokers to smokers and people in an addiction treatment program to those not receiving treatment, as well as with the lack of blinding.
Another concern is that while 95 percent of the opioid addiction patients were white, only 62 percent of the healthy controls were; this could limit the value of the POR algorithm for non-white patients, since research shows that ethnicity is strongly associated with genetic risks, including the risk of opioid addiction. Finally, the classification of the subjects is suspect: they were diagnosed with opioid use disorder using a non-standard definition of the disorder, by a single researcher (likely one of the authors) without replication or validation, although this is not spelled out in the release or in the research study.
Opioid use disorder is a very serious public health issue, and there is evidence that many of those who become addicted to opioids first use them as legitimate prescribed medicines. Thus, the news release is not guilty of disease mongering.
No information is provided about who provided the funding for the study. The news release quotes three of the study’s authors, including Dr. Ashley Brenton, who is identified as the “Associate Director of R&D for Proove.” The two other authors quoted do not appear to work for Proove. The journal article on which the news release is based also omitted information about the funding source for the study.
The news release mentions no alternatives to the use of this algorithm. Most doctors already ask patients about their substance use and abuse history and other important lifestyle and environmental factors before prescribing opioids to them. In addition, in March 2016, the CDC issued a “Guideline for Prescribing Opioids for Chronic Pain,” intended to help doctors decide which of their patients truly need opioids and when to use alternative treatments to manage patients’ pain.
Another company, Canterbury Healthcare, is marketing a similar genetic test for opioid addiction susceptibility.
The news release quotes one of the study authors as saying he has used the “technology” for six years, suggesting that it is available. However, the news release does not mention any use of the POR algorithm by health professionals not affiliated with Proove or with this specific study, nor does it say whether the FDA has cleared the test for marketing.
Rather than claiming novelty, the release states that the study validates previous research. The release quotes a company official: “This validation study builds on the peer-reviewed evidence supporting Proove Opioid Risk® and its components as an optimal model to predict opioid abuse risk.”
The study itself does appear to provide some missing data on the predictive value of the proprietary genetic test algorithm, but the test itself is not new.
The news release avoids using unjustifiable or sensational language. It provides a little background on the factors that contribute to substance abuse.