This is a story about the use of an app called PulsePoint in the care of a Seattle victim of sudden cardiac arrest.
While telling a patient story that ended well, the piece provides information on the app’s development, its current use across the country, and how readers can become involved in what can best be described as the crowdsourcing of out-of-hospital care for cardiac arrest victims.
While this is certainly an intriguing use of technology that merits news coverage, the story left us with questions: What research was conducted to show this app is beneficial? And how solid was that research? What are the potential harms or downsides of using an app in this way? How typical was the patient’s story?
An American Heart Association report released in 2015 noted that there are approximately 326,000 cases of out-of-hospital cardiac arrest in the United States each year. The survival rate with good neurologic function is approximately 8%. Importantly, about one-third of victims survive when the cardiac arrest is witnessed.
An app that alerts people nearby to such an event, especially those trained in CPR, is likely to save additional lives. But how do we know for sure? And are the potential harms worth it? The story should have explained how proof is being collected, if at all.
The story notes that the PulsePoint Foundation’s app is available as a free download for both iOS and Android. We think it would have been useful to include the download URLs in the story, as well as to note that CPR training is available for free in most locations.
The story puts the benefits of the PulsePoint app into perspective: “It’s not clear how many lives have been saved,” it states, further hedging: “Patient confidentiality laws often prevent hospitals from disclosing a patient’s outcome.”
This is enough to merit a Satisfactory rating, but with several caveats:
Harms and downsides weren’t discussed. But with this app being used so widely and in so many cities, shouldn’t data be available?
One obvious risk is the app simply failing to work. Another is that untrained or inexperienced people might try to help and cause harm because they don’t know what they’re doing. Do people who sign up to use the app have to demonstrate any sort of competency? And could bystanders who respond be hurt themselves? For example, do 911 operators first establish that the scene is safe for others to intervene before issuing an alert?
As mentioned above, no one is sure how well the app works to save lives, and the story makes this clear. But it should have gone a step further and asked: Has any research been done at all? On the same note: Why do more cities keep adopting it? What evidence is being used to convince them to sign up?
The reader is provided with quotes from Mr. DeMont and his wife, the developer of the app, and two of the responders, as well as information from unnamed Seattle officials.
However, we would have liked an outside emergency medicine expert to weigh in on the story’s information and claims.
We don’t think that the direct alternative (no PulsePoint linkage to a locale’s 911 system) needed to be discussed.
It is clear that the app is available now, free of charge.
It’s clear that this is a feature story about a readily available app and how more people are finding out about it and using it, so we’ll rate this N/A. However, it would have been useful to mention whether any other apps work in this way.
There is no evidence that the story relies on a news release.