Tuesday, November 26th, 2024

Social media leaders testify about efforts to curb terrorist recruiting, propaganda online


Leaders of Facebook, Twitter, and YouTube recently appeared before a Senate panel to highlight efforts to curb terrorist recruitment, to ban users with extremist ideology, and to remove “how-to manuals” and other terrorism-related content.

The Senate Commerce, Science and Transportation Committee heard testimony from Monika Bickert, the head of global policy management at Facebook, and Carlos Monje, the director of public policy and philanthropy at Twitter, among others.

U.S. Sen. Ron Johnson (R-WI), the chairman of the Senate Homeland Security and Governmental Affairs Committee, questioned what steps, aside from removing content, the platforms had taken to limit the spread of “dangerous materials.”

Monje testified that Twitter had been “tackling the issue of terrorist content” for many years. He noted that the number of suspended accounts had climbed steadily in recent years, from 67,069 in 2015 to 569,202 in 2016 to 574,070 in 2017.

“Our progress fighting terrorists on Twitter is due to the commitment we have made internally to harness innovation to address the tactics employed by terrorist organizations on our platform,” Monje said in his opening remarks. “While there is no ‘magic algorithm’ for identifying terrorist content on the Internet, we have increasingly tapped technology in efforts to improve the effectiveness of our in-house proprietary anti-spam technology. This technology supplements reports from our users and dramatically augments our ability to identify and remove violative content from Twitter. Through these efforts, we have found success in preventing recently suspended users from coming back onto Twitter.”

Johnson also questioned whether Facebook was developing tools to identify users who were at an elevated risk for terrorist activity, similar to technology being developed to identify users at elevated risk for suicide.

“We know from the many terrorism academics and experts we work with that terrorists tend to radicalize and operate in clusters,” Bickert said in her opening remarks. “This offline trend is reflected online as well. As such, when we identify Pages, groups, posts, or profiles that support terrorism, we use AI to identify related material that may also support terrorism. As part of that process, we utilize a variety of signals, including whether an account is ‘friends’ with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”
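
The “signals” Bickert describes can be thought of as features feeding an automated review queue. The sketch below is purely illustrative: a toy Python scoring function with hypothetical field names, weights, and thresholds, not Facebook’s actual system. It simply shows how the two signals she names, links to disabled accounts and shared attributes, might be combined into a single score for prioritizing human review.

```python
# Illustrative only: a toy scoring function for the kinds of account-level
# signals described in the testimony. All names and weights are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Account:
    friend_ids: set = field(default_factory=set)
    attributes: set = field(default_factory=set)  # e.g. shared device or contact info


def risk_score(account: Account, disabled_ids: set, disabled_attributes: set) -> float:
    """Combine two signals into a single review-priority score between 0 and 1."""
    # Signal 1: share of the account's friends already disabled for terrorism violations.
    friend_signal = (
        len(account.friend_ids & disabled_ids) / len(account.friend_ids)
        if account.friend_ids else 0.0
    )
    # Signal 2: overlap between this account's attributes and those of disabled accounts.
    attribute_signal = (
        len(account.attributes & disabled_attributes) / len(account.attributes)
        if account.attributes else 0.0
    )
    # Hypothetical fixed weighting; a production system would use a learned model.
    return 0.7 * friend_signal + 0.3 * attribute_signal


# Example: an account whose friends are mostly disabled gets a high score for review.
suspect = Account(friend_ids={"a1", "a2", "a3"}, attributes={"device-x"})
print(risk_score(suspect, disabled_ids={"a1", "a2"}, disabled_attributes={"device-x"}))
```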

Bickert said that “counterspeech” is also an important tool to counter terrorism recruitment. By “disrupting the underlying ideologies,” the platform can combat hate and violent extremism, she said.

“Over the past three years, we have commissioned research on what types of counterspeech are the most effective at combating hate and violent extremism,” Bickert said. “Based on that research, we believe the credibility of the speaker is incredibly important. We have therefore partnered with non-governmental organizations and community groups around the world to empower positive and moderate voices.”

Both Monje and Bickert highlighted the need for social media platforms to do more to combat terrorism as extremist groups constantly adapt and evolve to sidestep countermeasures.