Facebook Struggles to Address Violent Videos, Live-Streamed Crimes

(TNS) -- Two days after a gruesome killing was broadcast on Facebook for millions to see, the man suspected of slaying a Cleveland grandfather shot himself dead in Pennsylvania.

Thousands of miles away, at Facebook’s biggest event of the year, Mark Zuckerberg walked out onto a stage.

Between extolling Facebook’s role as a virtual community and introducing the platform’s new augmented reality features, the CEO paused to address the Easter homicide.

“Our hearts go out to the family and friends of Robert Godwin Sr.,” Zuckerberg said of the 74-year-old Cleveland man who was shot to death in a video that was viewed millions of times over almost three hours. “We have a lot of work to do, and we will keep doing all we can to prevent tragedies like this from happening.”

The auditorium was quiet for a moment, devoid of applause or murmurs.

Then Zuckerberg, addressing the company’s developer conference in San Jose, demonstrated how users could use Facebook to turn a house into a virtual Hogwarts, the castle from Harry Potter.

The transition was jarring. And, some critics said, indicative of Facebook’s refusal to address abuses head-on — a reticence that has once again thrust Facebook into the center of a debate over how technology can be used to broadcast injustices, propagate violence and immortalize acts of hate.

“Did they not anticipate that some bad people somewhere in the world would take Facebook and put bad things on it?” said Michael Connor, the executive director of Open MIC, a nonprofit that organizes shareholders to advocate for corporate policy changes. “Give me a break.”

Facebook has addressed violent videos and live-streamed crimes by shifting the blame, Connor said. Similarly, when people brought up “fake news,” the company pointed to its originators — people who sought to gin up reactions to turn a profit or sow discord and confusion — rather than the social network on which those false or misleading stories spread.

On Tuesday, the company made a public about-face.

“We know that people don’t want to be lied to or deceived on our platform, and that is a role we take 100 percent responsibility for,” Chris Cox, Facebook’s chief product officer, said during a presentation on Facebook’s relationship with the media. “We’ve put a lot of our teams up against this problem: How do we make sure people can’t spread false news and disinformation on our platform?”

As the company looks to lean more heavily on video, which will be augmented with animated tools, enhanced by artificial intelligence and made more immersive using 360-degree cameras and virtual-reality techniques, the risk of abuse only grows, critics said. Yet so does the danger of shutting down the expression it hopes to foster.

“Social media companies need to think very carefully about what next steps they’re going to take,” said Malkia Cyril, executive director for the Center for Media Justice. “It’s a fine line to walk between maintaining safety and maintaining freedom of speech.”

Facebook’s terms of use bar offensive and gratuitously violent imagery. But the mechanisms for removing content that violates the rules remain murky. And there is no appeals process.

Transparency would help allay people’s fears, Connor and Cyril said, adding that it could also restore trust between Facebook and its users.

For its part, Facebook in recent months has engaged journalists, fact-checkers and media watchdogs to address propagators of fake news, trying to halt the spread of false information.

That, Connor said, is a start.

“This problem is going to be with them forever,” he said. “Tech platforms like Facebook are brokers of content and truth on a global scale. It’s good that they’re waking up.”

Some media experts have suggested the spread of video may help combat fake news, because video is much harder to fake than a written article. But the medium brings its own problems: the spread of graphic images of violence and crime.

Justin Osofsky, Facebook’s vice president of global operations and media partnerships, reacted to the Easter slaying on Monday by saying the company needs “to do better” in stopping videos like that from appearing in the first place.

“It was a horrific crime,” Osofsky wrote. It “has no place on Facebook, and goes against our policies and everything we stand for.”

Facebook is not the only tech company to struggle with monitoring for hateful, violent and otherwise offensive content.

Twitter has for years been criticized for not effectively addressing hate speech. YouTube, which is owned by Google, has recently been criticized for running ads alongside racist videos.

“Neither governments nor corporations should inhibit speech, but there are lines around that meant to protect people’s safety; you can’t just cry fire” in a crowded theater, Cyril said. “Those lines need to be adapted for the technological age, and we need to have clear, agreed-upon guidelines. It’s difficult. It’s a tightrope. But tech companies need to learn how to walk it for all our sakes.”

©2017 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.