For the second time in a month, The Lancet is promoting a “simple” test that may help predict one’s risk of death. A few weeks ago it was the notorious death grip study; this time it’s a questionnaire that predicts your “Ubble age.”
The new test is based on Swedish researchers’ efforts to correlate an array of lifestyle and health factors with mortality in a massive epidemiological study of UK residents. They are making the test publicly available via an interactive website with the help of the UK organization Sense About Science, which tries to help citizens use scientific evidence in their daily lives. [Editor’s note: We could not log onto the death risk calculator website to evaluate it despite multiple attempts as of this writing.]
While these tests are never as “simple” as they’re made out to be, we’re happy to note significant improvement in this most recent release compared with its predecessor. The current release does a pretty thorough job of explaining the study and quotes from an accompanying editorial to provide some perspective. The editorial notes that it’s not clear how individuals may react to such estimates of their death risk. Is this information motivating or not? And if it is motivating, does it prompt healthy actions that decrease future risk, or does it engender a sense of hopelessness that leads people to take on even more unhealthy behaviors and end up at even higher risk? We applaud the release for calling attention to this important unanswered question. But we do wonder whether more thought should have been given to the answer before trumpeting the results to the world and making the test freely accessible to everyone.
Predicting what the future may hold intrigues everyone. And what is more important to predict than whether you will live or die? So an algorithm that estimates one’s risk of death within 5 years is appealing on many levels. However, there are several important caveats — some addressed in this news release and some not. The biggest missing piece is whether the information is accurate for an individual. While the study reports reasonable population-level accuracy (~80%), there is some mention that it varies; for example, the test does a better job for younger individuals than for older ones. However, no information is provided about how accurate any individual’s estimate is. Without that information, individuals should be very cautious about interpreting data that may be accurate only at a population level until more is known about their particular situation.
Access to the interactive site for computing a score that predicts one’s risk of dying will be free. However, this should have been stated in the release. We can’t rate the release Satisfactory if there’s no mention of cost at all, so we’ll rate it Not Applicable.
Very little about this news release is quantitative. The document repeatedly lauds the benefits of predicting 5-year mortality via self-reports rather than more intrusive physical tests, and it touts the potential use of the “Ubble age” score to assist doctors in identifying “high risk” patients and to improve individuals’ self-awareness. But no studies have been conducted to test the effects on either of these audiences, a concern the release mentions only at the end of the document. That information could have appeared higher up in the release and been given more emphasis.
The release does mention that the algorithm was validated and shown to have ~80% accuracy. But as noted above, this is a population-level estimate. The overall test characteristics of the model do not translate into an interpretation that is meaningful for an individual. One could argue that this is mainly the fault of the original study rather than the release, as we could find no data in the paper that an individual could use to determine how accurate their particular calculation is. However, this problem contributes to an overall sense that the release is weighted too far in the direction of benefits without enough attention to potential harms and uncertainties.
The text does offer a brief cautionary note when it quotes from the commentary of a couple of UK scientists in The Lancet issue to the effect that “whether this will help individuals improve self-awareness of their health status…or only lead to so-called cyberchondria, is a moot point.” Cyberchondria refers to unfounded anxiety concerning the state of one’s health brought on by visiting health and medical websites, so that term seems apt here. And “moot” conveys the uncertainty surrounding outcomes as, again, the study can offer no evidence. We’ll give a Satisfactory rating, though we wonder if it would have been more effective to say, “it is unclear whether use of this information will lead to actions that will help or harm individuals.”
The data came from a gargantuan epidemiological study involving nearly half a million UK respondents. The news release offers some details about that study, including the number of possible predictors (655) and the nature of the analysis. The text also contains a caveat reminding the reader that prediction is not the same as causation (a variant on the “correlation is not causation” theme), as well as a quote from one of the investigators cautioning against seeing one’s score as a “deterministic prediction.” These are important qualifiers, but they occur lower in the press release, and it is not clear how well those points will be picked up in subsequent stories.
We didn’t see anything that implied a doomsday perspective on the use of this data. The release also clearly indicates this is for UK citizens and not for others. So we’ll award a Satisfactory rating, although we’d note again that there’s an aspect of this test that could possibly lead to increased anxiety and “cyberchondria,” as the release puts it.
Funders are clearly identified in the notes at the bottom of the press release. The investigators assert the absence of conflicts of interest in the journal article itself.
The study and its press release make much of the potential advantage of predicting death via self-report questions rather than biological tests. So the text makes an overt comparison, admittedly without data.
The release could also have included a comparison to doing nothing. Whether providing any risk information (this version or another) makes a difference compared with providing none is a legitimate question.
Notes at the end of the news release speak to the availability of the interactive “Ubble” website once the research article is published. We could not access the calculator, perhaps due to high traffic levels accompanying the rollout.
The news release is explicit about the novelty of the prediction approach as well as the scale of the study itself. We suspect it is correct in this assessment, as the epidemiological study on which the work is based is truly huge.
One of the investigators refers to the study’s development of a prediction score from self-report data as “exciting,” but most of the quotes, while positive, also convey useful information, including qualifiers.