(TNS) -- A month after a Miami-Dade teen took her own life in front of a Facebook Live audience — one of at least three people to do so in recent months — the social media giant is introducing new tools and harnessing artificial intelligence in hopes of detecting cries for help in time to do something.
On Facebook Live, someone whose post is reported will see his or her screen partially blocked by a message from the company that reads, “Someone thinks you might need extra support right now and asked us to help.”
The troubled user can contact a helpline, view tips or message a friend directly from that window. If the user chooses to hide the resources, the icons tuck into the bar next to the “finish” button, where they are visible for the duration of the live broadcast. The user can access them anytime.
Likewise, the person who reports the video gets connected to tips on how to talk about suicide with a friend, or directed to contact a mental health professional via a lifeline. Another option connects the reporter to a chat with a mutual friend of the person in trouble. The process, Facebook said, is confidential for both the person who reports the content and the person who posts it.
Experts from suicide prevention organizations helped Facebook refine the tools.
Jennifer Guadagno, Facebook’s lead researcher for suicide prevention, told the Miami Herald on Tuesday that the organizations warned the company to be cautious about cutting off live videos too quickly, even if someone is threatening suicide. The real-time chat provides an opportunity for friends and family to offer support and engage with the person in trouble.
According to news reports, this feature has saved a handful of people in the same situation: Viewers reportedly called the police and stopped suicide attempts in Minnesota, Arizona, Bangkok, Hong Kong and Ohio.
But for others, such as 14-year-old foster child Naika Venant, help came too late. For situations like that, Guadagno said, Facebook developed partnerships with organizations like Crisis Text Line so people in trouble can chat with a trained mental health professional directly in the Facebook Messenger app at any time.
These tools are extensions of resources previously available on the site and elsewhere online, part of an effort to “reduce friction” between suicidal people and the services that can help them.
“We’re trying to meet people where they’re at,” Guadagno said.
Facebook’s newest idea leverages all the previous statuses and videos that were reported as suicidal, along with the comments on them: pattern recognition software will use that history to alert community monitors to content that is potentially about suicide or self-harm.
“It’ll be a bit of artificial intelligence, too,” said Vanessa Callison-Burch, a product manager for Facebook’s new tools. “We’re hopeful the combination of technology and connection to friends and family can help people.”
Facebook’s 24/7, globe-spanning community moderation team will prioritize self-harm content above other reported posts, such as those dealing with nudity, violence or hate speech. Facebook wouldn’t say how big the moderation team is, but Callison-Burch said its members are trained to recognize suicidal behavior.
The tools will start to roll out in March, she said, with the pattern recognition software coming last so developers have more time to tune up the algorithm.
But critics say none of these features will get authorities in the door. For that, Facebook relies on a user’s friends. They’re the ones who have to guide law enforcement to their friend in need.
©2017 Miami Herald. Distributed by Tribune Content Agency, LLC.