(TNS) -- The next push for more transparency in how Facebook and Alphabet deal with fake news and hate speech may come not from users or executives, but from investors.
Thanks to a report released Tuesday by an organization that advocates for corporate policy changes, shareholders will go into the companies’ annual meetings next month with exhaustive research suggesting the corporations are not doing enough to stem the flow of lies and misinformation.
The report, published by the nonprofit Open Mic, recommends that Facebook and Alphabet, parent of the Google search engine and YouTube video site, release annual assessments of the impact of propaganda, fake news and hate speech, as well as what they’re doing to address those issues. Several shareholders say those policy changes would be good for business and would help restore public confidence in two of the biggest tech corporations in the world.
“These companies would have you believe that (fake news) is kind of a temporary hiccup that they’re experiencing,” said Michael Connor, Open Mic’s executive director. “When you dig down and find out what is at play here, there are some serious governance issues and management issues in terms of how these companies have developed technology and launched that technology into the world, and then waited until long after that to deal with the unintended results of it.”
The first test of these recommendations will come in a few weeks, when Facebook investors convene for the company’s annual shareholders meeting on June 1.
Natasha Lamb, managing partner of investment firm Arjuna Capital, will ask shareholders to vote on a proposal to require that the social networking company issue a report “reviewing the public policy issues associated with fake news enabled by Facebook. The report should review the impact of current fake news flows and management systems on the democratic process, free speech and a cohesive society, as well as reputational and operational risks from potential public policy developments.”
Arjuna created the proposal in conjunction with investment management firm Baldwin Brothers.
Though Lamb is not confident the measure will pass, she hopes it sparks a conversation about how Facebook can be a more responsible arbiter of information.
A similar proposal has been drafted and is expected to be voted on at Alphabet’s annual shareholders meeting on June 7.
Open Mic’s report will bolster shareholder proposals like hers, Lamb said.
“Facebook, whether they like it or not, controls the conversation,” Lamb said. “They have a responsibility to ensure their platform is not being misused by propaganda, misinformation and hate speech.”
Since the November election, Facebook and Google have been grappling with the issue of fake news — misinformation meant to sow confusion and division for economic and/or political gain — and their roles as traffic cops on the information superhighway.
First they cut cash flow to fake-news websites. Then they began to offer services meant to help users discern what information is true and what is not.
But, in an effort to maintain distance from the controversy over fake news, the companies have largely outsourced fact-checking and turned to other organizations, or users themselves, to police the Web.
“As investors, right now we’re relying on third-party analysis to determine whether or not what Facebook and Google are doing is actually effective,” Lamb said. “We saw that Google banned 200 publishers in 2016. That seems like a drop in the bucket. Did that fix the problem? How pernicious is the problem? How many people do they have on board dealing with this? These are some of the questions we want answered.”
Among the recommendations outlined in Tuesday’s report, Open Mic has called for tech companies to start acting more like media companies, a term that both Facebook and Google have adamantly resisted.
The organization suggests tech companies should hire “ombudspersons” or other figures to hold the companies accountable to their users and offer assessments of how corporate actions may impact the public interest.
Tuesday’s report also recommends that Facebook and Alphabet publish annual impact assessments on fake news, propaganda campaigns and hate speech that “are transparent, accountable” and provide an opportunity for those affected by any fake-news crackdown to appeal.
Those assessments, the Open Mic report said, should “include definitions of these terms; metrics; the role of algorithms; the extent to which staff or third parties evaluate fabricated content claims; and strategies and policies to appropriately manage the issues without negative impact on free speech.”
Last month Google announced a product it calls Fact Check, which will pair headlines with a notification stating whether the claim has been verified by a reputable news agency or fact-checking organization.
Facebook introduced a similar feature in partnership with fact-checking groups like PolitiFact and FactCheck.org in December. Fact-checking organizations agreed to verify questionable claims on Facebook in order to provide users an impartial assessment of accuracy.
“Google and Facebook are data companies — they have billions of bits of data,” Connor said. “So what we’re saying is, show us how effective those efforts are. In the absence of facts, those efforts are just window-dressing.”
Nearly two-thirds of Americans reported feeling confused about basic facts of current events and issues due to made-up news stories, according to a December study from the Pew Research Center.
But fake news doesn’t just damage the reputations of big tech companies like Facebook and Google, Lamb said — it’s bad for their business. She cited fines being levied in Germany against companies that allow fake news or offensive content to remain online, investigations in the United Kingdom, and comments from American legislators on both sides of the aisle who believe the government has a responsibility to curtail the spread of fake news.
“There’s a public policy risk here,” she said. “There’s also a palpable erosion of trust, which can affect how sticky these companies are.”
Alphabet has been under fire since a March investigation by the Wall Street Journal found that Google had been running advertisements on YouTube videos containing hate speech or espousing white-supremacist points of view. Though this is hardly the only instance of objectionable speech on social media, Lamb said, it is the clearest example of how a company’s failure to address offensive content can hurt its bottom line.
YouTube has since lost millions in advertising as companies — including AT&T, Verizon, Pepsi and Johnson & Johnson — pulled their ads from the online video site.
Last month, Google released a report that stated 0.25 percent of its daily traffic returns offensive or “clearly misleading content, which is not what people are looking for.”
Facebook and Google have for months cut off fake-news sites’ access to their advertising tools in an effort to remove incentives for those who publish misleading or false content for profit. Both companies also have a list of other types of websites they ban from their advertising tools, including sites with offensive content such as hate speech and pornography.
But some investors worry that by simply blocking those websites — or removing offensive content as it arises — the companies may be playing a game of Internet Whac-A-Mole without a broader strategy.
“How does Google or Facebook decide what is hate speech? Who makes that call? And when they do — or an algorithm does — what happens? We don’t really know,” Connor said. “What troubles us is (policing offensive content) happens on an ad hoc basis. It’s not clear to us that it’s any part of a well-developed plan.”
©2017 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.