
Inquirer’s look at using Facebook to screen for depression emphasizes research ‘is still in very early stages’


4 Star

Depression often goes undiagnosed. Researchers are turning to Facebook to change that.

Our Review Summary

This Philadelphia Inquirer story focuses on the early development of a computer model that uses Facebook posts to identify people with depression.

It makes clear that the technology is in its preliminary stages and is not close to being used in a clinical setting. The article did not oversell the technology and discussed the limitations of the research. Still, it would have been helpful if the story had explored who would pay for this kind of screening, and what we know about depression screening in general: Is it effective?


Why This Matters

Depression affects quality of life for millions of people around the world. This story is appropriately cautious about how far this technology is from real-world application, though a number of questions remain unanswered.


Does the story adequately discuss the costs of the intervention?

Not Applicable

This work is early in its development and the story makes that clear.

However, it would still be worthwhile to raise some questions: Who would pay for the development of future modeling tools? Who would pay for the training necessary to make these tools useful to health care providers? Who would pay for the time that health care providers would have to put into the work? None of these things would happen in a vacuum, and it would be useful to raise these issues.

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Not Satisfactory

The story does not include specific numbers on specificity and sensitivity (see harms, below), only stating that the Facebook model was “moderately accurate.”

Also, we think it would have been helpful to point out that the U.S. Preventive Services Task Force recommends depression screening, giving the evidence a B rating, meaning that there is moderate certainty that screening will be beneficial.

Does the story adequately explain/quantify the harms of the intervention?

Not Satisfactory

The story did not address potential harms, which include not only the failure to identify someone who has depression, but also the “false positive” misdiagnosis of people who do not have depression. This is the difference between “sensitivity” and “specificity” — and it’s important. Being told that one has a medical problem that one does not actually have can have implications in people’s personal and professional lives. We did like that the story addressed privacy concerns, which can certainly lead to anxiety and other problems.
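The sensitivity/specificity distinction invoked above can be made concrete with a small worked example. This is an illustrative sketch with made-up counts — not figures from the study or the story:

```python
# Hypothetical confusion-matrix counts for a depression-screening model.
# These numbers are invented for illustration only.
true_positives = 80    # people with depression the model flagged
false_negatives = 20   # people with depression the model missed
true_negatives = 900   # people without depression correctly not flagged
false_positives = 100  # people without depression wrongly flagged

# Sensitivity: of everyone who actually has depression, what share
# does the model catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of everyone who does not have depression, what share
# does the model correctly leave alone?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"specificity = {specificity:.2f}")  # 0.90
```

Note that even a model with high specificity can generate many false positives when screening a large population in which most people do not have the condition — which is why a "moderately accurate" label, without these numbers, tells readers little about real-world harms.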

Does the story seem to grasp the quality of the evidence?

Satisfactory

The story does a good job of articulating the limitations of the research, though it does not address issues regarding specificity and sensitivity (which we addressed above).

Does the story commit disease-mongering?

Satisfactory

No disease mongering here.

Does the story use independent sources and identify conflicts of interest?

Satisfactory

The story cites two researchers, each of whom discusses the field at large as well as their own work. That's enough to earn a satisfactory rating here, though it would have been valuable to garner input from a mental health expert, given that the two people quoted — while actively involved in the field — specialize in issues related to emergency medicine (internal medicine) and computer science. There do not appear to be conflicts of interest, though the story doesn't explicitly state that (or where the funding came from). A look at the relevant paper shows that funding came from the Robert Wood Johnson Foundation and the John Templeton Foundation.

Does the story compare the new approach with existing alternatives?

Satisfactory

The story does a good job of articulating how the social media model might differ from conventional depression diagnostic tools, as well as how these approaches could potentially be used to support each other (rather than using one in place of the other).

Does the story establish the availability of the treatment/test/product/procedure?

Satisfactory

The story makes clear that the social media diagnostic model is not close to being viable for clinical use.

Does the story establish the true novelty of the approach?

Satisfactory

The story does mention an earlier study that demonstrated how Twitter could be used to evaluate depression risk, and provides some good background (with supporting links) on the body of work addressing social media and depression. All of that is sufficient to garner a satisfactory rating here.

However, the story would have been stronger if it had addressed a straightforward question: Is this the first model that can be used to diagnose depression based on someone’s Facebook activity? One assumes the answer is yes, but it’s not clear whether that’s the case.

Does the story appear to rely solely or largely on a news release?

Satisfactory

The story goes well beyond the news release issued by the University of Pennsylvania.

Total Score: 7 of 9 Satisfactory

