Facebook Plans Changes to Combat Violent Live Streaming

Even with 7,500 Facebook monitors around the world, the possibility of tragedy still looms.

(TNS) -- After coming under intense scrutiny, Facebook announced Wednesday that it plans to hire 3,000 more employees around the world to review violent videos, suicides and other problematic content on the social media site.

After a string of tragedies posted on Facebook, CEO Mark Zuckerberg said the company would add to its current staff of 4,500 people who already monitor such activity. In addition, he said the company will make it easier for users to report problems, faster for its reviewers to determine which posts violate standards, and easier for the tech giant to contact authorities when needed.

While many praised the announcement, others questioned how effective it would be.

Sarah Roberts, an assistant professor in UCLA’s Department of Information Studies, said nearly doubling Facebook’s workforce of monitors would clearly have an impact. But even with 7,500 Facebook monitors around the world, she said, “what they will actually be able to adjudicate and make decisions about is a small subset of the amount of content that Facebook receives in any given moment.”

Menlo Park-based Facebook, which has 1.9 billion users, walks a fine line when it engages in such moderation: it risks making users unhappy, or making its presence felt in a way that’s unpleasant to them, she said.

In addition, she said, Facebook created “an incredibly powerful tool” with Facebook Live, asked people to use the live video streaming service as they wished, and then was surprised by and unprepared for the tragic yet obvious consequences.

“As long as Facebook Live exists, the matter of another horrible incident happening is not if, it’s when,” she said. “That’s just human nature.”

Facebook has faced criticism for not doing more to keep such videos from spreading on its service. In one recent instance, Steve Stephens was accused of randomly killing 74-year-old Robert Godwin Sr. in Cleveland, Ohio, on Easter Sunday while recording the attack, then uploading the video to Facebook.

In Southern California, 33-year-old actor Frederick Jay Bowdy broadcast his suicide on Facebook Live from North Hollywood in January.

“The timing is not accidental; it happened because they’ve had some high profile problems,” said Karen North, professor of social media at the USC Annenberg School for Communication and Journalism.

North argued that the move will allow Facebook to cast a “bigger and better net to catch more problems,” including those that involve troubled people who need help.

State Assemblyman Matt Dababneh, D-Encino, said he’s “very appreciative” of Facebook’s efforts to catch up to its own technology and its unintended consequences.

Dababneh, a member of the California Legislative Technology and Innovation Caucus, introduced a bill in March, dubbed Jordan’s Law, that, if approved, would criminalize social media-motivated attacks. The bill is named after 14-year-old Jordan Peisner, a San Fernando Valley teen who sustained serious head injuries after a boy he did not know sucker-punched him outside a Wendy’s in December.

The brutal assault was captured on video and posted on Snapchat.

A civil lawsuit filed this week by Jordan’s father contends the video was uploaded on the social media site so those involved “could achieve notoriety and social media popularity and fame.”

“If young people realize that these types of attacks or other graphic videos or other posts of this nature won’t be tolerated online and will be removed quickly, that de-incentivizes their perverse interest in doing it in the first place,” Dababneh said.

Jordan’s father, Ed Peisner of West Hills, called the move by Facebook “a great start.” He said he thinks part of the problem is the real-time nature of Facebook Live, which doesn’t allow the company to review video before it’s posted.

“It gives everybody this platform where they think they’re famous and unfortunately, we have a society today that has to keep up with the Kardashians and one has to one-up everyone else,” he said.

Steven Freeman, deputy director of Policy and Programs at the Anti-Defamation League in New York, said because of the large volume of content posted on Facebook, it’s really important for users to know how to flag potentially inappropriate posts. The ADL has published a Cyber-Safety Action Guide on its website for this very purpose, he said.

“It’s about letting people know what tools are available and encouraging the company to make sure those tools are easy to find and easy to use,” Freeman said.

Los Angeles County Sheriff’s Capt. Bobby Wyche, of the Special Operations Division, said anything Facebook can do to make potential crimes easier to report and give information to authorities is beneficial.

“The sooner we can find out about a particular crime, the more it helps us to commit resources to that,” he said.

North, of USC, said it’s likely that Facebook will also move fast to improve its algorithms for identifying inappropriate posts. While the company has algorithms that identify hate speech and some images, such as nudity, it’s much more difficult for a computer to identify images that are not clearly inappropriate or dangerous, North explained.

A picture of a gun or a noose, for example, could come from an art project or a movie trailer rather than depict a murder or a suicide, she said.

“My guess is they’re working very hard and very fast to improve that,” North said.

©2017 the Daily News (Los Angeles). Distributed by Tribune Content Agency, LLC.