This Philadelphia Inquirer story focuses on the early development of a computer model that uses Facebook posts to identify people with depression.
It makes clear that the technology is in its preliminary stages and is not close to being used in a clinical setting. The article did not oversell the technology and discussed the limitations of the research. Still, it would have been helpful if the story had explored who would pay for this kind of screening, and what we know about depression screening in general: Is it effective?
Depression affects quality of life for millions of people around the world. This story is appropriately cautious about how far this technology is from real-world application, though a number of questions remain unanswered.
This work is early in its development and the story makes that clear.
However, it would still be worthwhile to raise some questions: Who would pay for the development of future modeling tools? Who would pay for the training necessary to make these tools useful to health care providers? Who would pay for the time that health care providers would have to put into the work? None of these things would happen in a vacuum, and it would be useful to raise these issues.
The story does not include specific numbers on specificity and sensitivity (see harms, below), only stating that the Facebook model was “moderately accurate.”
Also, we think it would have been helpful to point out that the U.S. Preventive Services Task Force recommends depression screening, giving the evidence a B rating, meaning that there is moderate certainty that screening will be beneficial.
The story did not address potential harms, which include not only the failure to identify someone who has depression, but also the “false positive” misdiagnosis of people who do not have depression. This is the difference between “sensitivity” (how well a test catches people who have the condition) and “specificity” (how well it rules out people who don’t) — and it’s important. Being told that one has a medical problem that one does not actually have can have implications in people’s personal and professional lives. We did like that the story addressed privacy concerns, which can certainly lead to anxiety and other problems.
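To see why this distinction matters for a screening tool described only as “moderately accurate,” consider a quick back-of-the-envelope calculation. The numbers below are hypothetical, chosen for illustration only — they are not from the study:

```python
# Illustrative sketch (hypothetical numbers, not from the study): how a
# "moderately accurate" screen can still wrongly flag many healthy people.
def screen_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_pos, false_neg, false_pos, true_neg) counts."""
    with_dep = population * prevalence
    without_dep = population - with_dep
    true_pos = with_dep * sensitivity      # depressed, correctly flagged
    false_neg = with_dep - true_pos        # depressed, missed
    true_neg = without_dep * specificity   # healthy, correctly cleared
    false_pos = without_dep - true_neg     # healthy, wrongly flagged
    return true_pos, false_neg, false_pos, true_neg

# Assume 10,000 users, 8% prevalence, 70% sensitivity, 70% specificity
tp, fn, fp, tn = screen_outcomes(10_000, 0.08, 0.70, 0.70)
ppv = tp / (tp + fp)  # chance a flagged person actually has depression
print(f"flagged correctly: {tp:.0f}, missed: {fn:.0f}, "
      f"falsely flagged: {fp:.0f}, PPV: {ppv:.1%}")
# → flagged correctly: 560, missed: 240, falsely flagged: 2760, PPV: 16.9%
```

Under these assumed numbers, false positives outnumber true positives nearly five to one — which is why reporting specificity and sensitivity, rather than a single “accuracy” label, matters.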
The story does a good job of articulating the limitations of the research, apart from the issues of specificity and sensitivity discussed above.
No disease mongering here.
The story cites two researchers, each of whom discusses the field at large as well as their own work. That’s enough to earn a satisfactory rating here, though it would have been valuable to garner input from a mental health expert, given that the two people quoted — while actively involved in the field — specialize in issues related to emergency medicine (internal medicine) and computer science. There do not appear to be conflicts of interest, though the story doesn’t explicitly state that (or where the funding came from). A look at the relevant paper shows that funding came from the Robert Wood Johnson Foundation and the John Templeton Foundation.
The story does a good job of articulating how the social media model might differ from conventional depression diagnostic tools, as well as how these approaches could potentially be used to support each other (rather than using one in place of the other).
The story makes clear that the social media diagnostic model is not close to being viable for clinical use.
The story does mention an earlier study that demonstrated how Twitter could be used to evaluate depression risk, and provides some good background (with supporting links) on the body of work addressing social media and depression. All of that is sufficient to garner a satisfactory rating here.
However, the story would have been stronger if it had addressed a straightforward question: Is this the first model that can be used to diagnose depression based on someone’s Facebook activity? One assumes the answer is yes, but it’s not clear whether that’s the case.
The story goes well beyond the news release issued by the University of Pennsylvania.