Facebook’s process for monitoring its users’ posts in an effort to predict suicides has some healthcare experts worried that it could backfire and escalate mental-health crises.
Facebook’s suicide-monitoring technique reveals a daunting gap between tech giants and healthcare experts, according to a recent report by SFGate, which adds that one Harvard psychiatrist is concerned that the tool could worsen health problems by concentrating on the wrong people or escalating mental-health issues.
The tech giant’s monitoring tool seeks to identify posts that may indicate someone is at risk of committing suicide and has reportedly been involved in sending emergency responders to locations on more than 3,500 occasions since last fall.
The idea of monitoring Facebook posts that could reveal a user’s intent to commit suicide reportedly stemmed from several people who had used the platform to broadcast their suicides publicly in real time, according to SFGate.
The report adds that a suicide-monitoring algorithm was then implemented at the request of Facebook CEO Mark Zuckerberg, and that it has been active since 2017, using pattern-recognition technology to identify posts and livestream videos that appear to indicate a user’s intent to commit suicide.
The algorithm reportedly scans comments that viewers leave on livestream videos, such as “Are you OK?” A flagged post is sent to a content moderator and then to a trained staff member, who notifies emergency personnel if the post is deemed to indicate a potential suicide.
John Torous, a Harvard psychiatrist and tech consultant who has spent years working with tech giants on scientific research, has expressed concern over Facebook’s suicide-monitoring algorithm, suggesting that it may be doing more harm than good.
Facebook, however, does not consider its suicide-monitoring technique to be health research, and therefore has not published any information on how the tool works or whether it is successful. The tech giant views its algorithm not as a health or research initiative, but as an approach to connecting potentially suicidal users with the right people who can help.
Torous argues that the lack of public information about Facebook’s suicide-monitoring tool makes it impossible to answer important questions, such as whether the tool might be resulting in false positives or escalating a mental health crisis.
“It’s one thing for an academic or a company to say this will or won’t work,” said Torous to Business Insider, “but you’re not seeing any on-the-ground peer-reviewed evidence — it’s concerning. It kind of has that Theranos feel.”
The report also noted that Facebook’s suicide-monitoring tool is just one of several examples of new products and services, such as the Apple Watch, Amazon’s Alexa, and meditation apps, that can create a gap between health innovation and tech disruption. Healthcare experts see red flags, whereas tech leaders see revolution.
“There’s almost this implicit assumption that they play by a different set of rules,” added Torous.
Daniel Reidenberg, the founder of Save.org, disagrees with Torous, telling Business Insider that he believes “there isn’t any company that’s more forward-thinking in this area,” adding that he himself has helped Facebook by sharing his own experiences, as well as bringing in people who have struggled with suicide and having them share what helped them.
Torous, however, says he is familiar with the notion that suicide can be predicted in that manner and remains skeptical of Facebook’s approach, pointing to studies in which researchers analyzed 64 different suicide-prediction models and concluded that they had virtually no success in predicting suicide attempts.
“We know Facebook built it and they’re using it,” said Torous, “but we don’t really know if it’s accurate, if it’s flagging the right or wrong people, or if it’s flagging things too early or too late.”