Social media companies under pressure to share disinformation data with one another
WASHINGTON - As a gruesome video showing the beheading of American journalist James Foley swiftly circulated online in 2014, Twitter and YouTube decided to ban the graphic content from their platforms.
As they raced to stop the video's spread under pressure from people tweeting #ISISmediablackout, major technology companies began to coordinate informally, according to a former Twitter employee who spoke on the condition of anonymity because they were not authorized to speak on the record. The response marked one of the first times the technology companies worked together to identify problematic content and remove it from their competing platforms.
Twitter, which didn't host native video at the time, began notifying video services such as YouTube of links to various versions of the video so the video service could remove them. The ex-Twitter employee said the incident was a "turning point" for the industry. "It was a big wake-up call," the employee said. "It brought everyone together very quickly."
The Foley example is illustrative of what isn't yet formally happening among major tech companies as they face perhaps their biggest challenge yet from Washington: their botched efforts to fight Russian disinformation during the 2016 election, and their subsequent struggles to publicly disclose what they found.
Though the industry now has more formal channels to help it share and identify terrorist content or child exploitation on platforms, the sharing of data about disinformation campaigns among social media networks has largely been informal and voluntary. It has not followed the example of the financial-services sector, which has developed effective strategies for sharing information in the event of a major breach.
This week, a pair of reports commissioned by the Senate Intelligence Committee underscored that Russia carried out its disinformation campaign during the 2016 election on virtually every social network even though most press reports focused on the Kremlin's efforts on Twitter and Facebook. As platforms such as YouTube, Instagram and even Pinterest were in the spotlight this week, incoming House Intelligence Committee chairman Adam Schiff, D-Calif., told The Washington Post a top challenge is tech companies "sharing information among each other," noting it might "need to be addressed legislatively."
Facebook, Google and Twitter said before the 2018 midterm elections that they were cooperating on election integrity efforts, but they have provided very little detail about what information-sharing measures they have implemented or are even considering.
When asked Tuesday about strategies for sharing information with one another, Facebook declined to comment and Twitter did not immediately respond. Google said it shares information with industry and law enforcement.
There are some complex and unique barriers to information sharing among the big tech firms, experts said. First off, there's no real consensus on what constitutes disinformation, raising the question of when information should be shared to begin with. There are also major differences in the way platforms display content, making similar efforts across platforms harder to track and identify. And the companies are fiercely competitive and have not always presented a public, unified front on disinformation campaigns.
Any collaboration also raises serious questions about protecting consumer privacy, an issue lawmakers are intensely focused on. But a Tuesday report from The New York Times revealed that Facebook gave other technology companies like Amazon and Microsoft much broader access to its users' personal information than previously disclosed. If Facebook is willing to share personal data for financial benefit, it may be harder for the company to argue that it can't share data about disinformation threats because it is concerned about privacy.
Priscilla Moriuchi, the director of strategic threat development at Recorded Future, said there's a great deal of interest in threat sharing right now, but coordination is often limited to small, trusted groups.
"It's one of those things that everyone thinks is a great idea," she said. "But in practice, it's more complicated and not as effective as we think."
She pointed to the challenges inherent in sharing information across platforms that display different kinds of content: while companies might share a common web link, Facebook won't necessarily be helped if Twitter gives it a suspicious Twitter handle.
The companies could get around this by sharing data like IP addresses, which could potentially be used to see whether the same device is posting disinformation on multiple platforms.
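A minimal sketch of that cross-platform matching idea, assuming platforms exchange hashed rather than raw IP addresses; the platform names, sample addresses and hashing scheme here are illustrative assumptions, not any company's actual data-sharing format:

```python
# Illustrative sketch only: field names, sample IPs and the salt are
# hypothetical, not any platform's real data-sharing format.
import hashlib

def hash_ip(ip_address: str, salt: str = "shared-secret") -> str:
    """Hash an IP address so platforms can compare suspicious activity
    without exchanging the raw address itself."""
    return hashlib.sha256((salt + ip_address).encode()).hexdigest()

# Hypothetical reports of suspicious posting activity from two platforms.
platform_a_reports = {hash_ip("203.0.113.5"), hash_ip("198.51.100.7")}
platform_b_reports = {hash_ip("203.0.113.5"), hash_ip("192.0.2.44")}

# Overlapping hashes suggest the same source is active on both platforms.
shared_sources = platform_a_reports & platform_b_reports
print(f"Sources flagged on both platforms: {len(shared_sources)}")
```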
Any coordination may be further hampered by the intense competition among the tech giants.
A recent New York Times story exposed how Facebook's strategy in Washington has often attempted to shift attention from its own problems by drawing scrutiny to its competitors. For a fall hearing on Capitol Hill, Google refused to send an executive whom lawmakers deemed senior enough, as the company tried to distance itself from Twitter and Facebook on election-security issues. The result was a rash of negative media coverage for the search giant that distracted from Facebook's time in the hot seat.
Bruce McConnell, a former top Department of Homeland Security cybersecurity official, said the competition creates a unique challenge for tech companies. He said that internally, the companies are building tools and databases to make it easier to identify and evaluate disinformation.
"They could share those tools and approaches with each other in a more formal way," he said.
McConnell pointed out there are relatively few players in the social media industry, and they are all established and know one another. And the incentive to figure it out should be crystal clear.
"Do the companies want to get their act together individually and collectively? Or do they want the government to regulate them?" McConnell said.