
L.A. Will Roll Out Predictive Policing by Monitoring Tweets in Real-Time

During a three-year experiment, researchers will monitor millions of tweets related to the L.A. area in an effort to identify patterns and markers indicating, in real time, that prejudice-motivated violence is about to occur.

(TNS) -- Can police prevent hate crimes by monitoring racist banter on social media?

Researchers will be testing this concept over the next three years in Los Angeles, marking a new frontier in efforts by law enforcement to predict and prevent crimes.

During the three-year experiment, British researchers working with the Santa Monica, Calif.-based RAND Corp. will be monitoring millions of tweets related to the L.A. area in an effort to identify patterns and markers indicating, in real time, that prejudice-motivated violence is about to occur.

The researchers then will compare the data against records of reported violent acts. The U.S. Department of Justice is investing $600,000 in research by Cardiff University's Social Data Science Lab, which has been at the forefront of predictive social media models.

Cardiff University professor Matthew Williams said the research is designed to eventually enable authorities to predict when and where hate crime is likely to occur and deploy law enforcement resources to prevent it.

“The insights provided by our work will help U.S. localities to design policies to address specific hate crime issues unique to their jurisdiction and allow service providers to tailor their services to the needs of victims, especially if those victims are members of an emerging category of hate crime targets.”

His lab’s previous research in the United Kingdom found that Twitter data can be used to identify areas where hate speech is occurring but where no hate crimes have been committed. This can be useful, researchers said, in neighborhoods with many new immigrants, who are unlikely to report the crime because of fear of deportation.

In 2012, an estimated 293,800 nonfatal violent and property hate crimes occurred in the United States, according to the Bureau of Justice Statistics. About 60 percent of those were not reported, the Justice Department found.

Of course, there is a big difference between someone spouting off on Twitter or Snapchat and an actual hate crime.

“It is a great idea in the abstract. But it is not the panacea you might think,” said Brian Levin, executive director of Cal State San Bernardino’s Center on Hate and Extremism. “The problem is the correlation and reliability. … There are many different forms of social media.”

Levin, who has tracked both Middle Eastern terror groups and local neo-Nazi organizations, also noted that some hate groups don’t advertise their work on social media.

“Local tensions may arise on the fly and be absent from social media,” he said. “Some segments of the community shun social media … so examining social media as a predictor can be a bit like having one screwdriver and sometimes it doesn’t work.”

Predictive policing already is in use at the Los Angeles Police Department and other agencies. The LAPD uses a predictive policing algorithm to deploy officers to locations where prior crime patterns strongly suggest similar crimes may occur. As crime has dropped dramatically during the last two decades across the nation and in Los Angeles, police commanders are increasingly looking for any edge they can get in cutting crime.

L.A. County is particularly useful for the research because its huge volume of social media activity produces massive data sets, which increase the accuracy of predictive models over traditional crime analysis and trend-chasing, said Pete Burnap of Cardiff University's School of Computer Science and Informatics.

“Predictive policing is a proactive law enforcement model that has become more common partially due to the advent of advanced analytics such as data mining and machine-learning methods,” he said.

Traditional predictive policing models have paired historical crime records with geographical locations and then made a probabilistic calculation to predict future crimes. Twitter and other social media-based models, by contrast, work in real time, using what people are talking about now. The algorithms look for particular language that is likely to indicate the imminent occurrence of a crime.
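As a rough illustration of that difference, and not the researchers' actual systems, the sketch below contrasts a simple historical, location-based count with a real-time score of tweet text. The grid resolution, the marker list and the example inputs are invented placeholders.

```python
# Hypothetical sketch contrasting the two approaches described above.
# The marker terms, grid resolution and example data are illustrative
# placeholders, not the models used by the Cardiff/RAND researchers.
from collections import Counter

# Traditional model: count prior incidents per (lat, lon) grid cell
# so resources can be weighted toward historical hotspots.
def hotspot_scores(past_incidents):
    return Counter((round(lat, 2), round(lon, 2)) for lat, lon in past_incidents)

# Real-time model: score incoming tweet text against a watch list.
HATE_MARKERS = {"<placeholder_slur>", "<placeholder_threat>"}  # stand-in terms

def tweet_risk(text):
    """Return the fraction of words matching the (hypothetical) marker list."""
    words = text.lower().split()
    return sum(w in HATE_MARKERS for w in words) / len(words) if words else 0.0

if __name__ == "__main__":
    past = [(34.05, -118.25), (34.05, -118.25), (34.10, -118.30)]
    print(hotspot_scores(past).most_common(1))               # busiest historical cell
    print(tweet_risk("<placeholder_slur> headed downtown"))  # ~0.33, flagged for review
```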

British researchers began looking at cyber-hate in the aftermath of the killing of British Army soldier Lee Rigby at the hands of Islamic extremists on a London street in 2013. Analysts collected Twitter data and tested a text classifier that distinguished between hateful and antagonistic responses focusing on race, ethnicity and religion.
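The article does not detail how that classifier was built, but a minimal sketch of training such a supervised text classifier is shown below, assuming a scikit-learn pipeline and a tiny, invented labeled sample; the placeholder strings stand in for annotated tweets.

```python
# Hedged sketch of a supervised hate-speech text classifier, assuming
# scikit-learn. The labels and placeholder texts are invented; the actual
# Cardiff training data and model are not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated tweets: 1 = hateful/antagonistic, 0 = benign.
texts = [
    "<hostile phrase targeting a protected group>",
    "great turnout at the community event tonight",
    "<threatening remark aimed at a religious minority>",
    "thanks to everyone who volunteered this weekend",
]
labels = [1, 0, 1, 0]

# TF-IDF word/bigram features feed a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new tweet; a high probability would flag it for human review.
print(model.predict_proba(["<hostile phrase targeting a protected group>"])[:, 1])
```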

©2016 Los Angeles Times. Distributed by Tribune Content Agency, LLC.