The United Kingdom’s Home Secretary Amber Rudd is set to face an uphill battle this week as she meets with leading tech companies in Silicon Valley to discuss issues related to terrorist exploitation of information and communications technologies (ICT), according to a recent blog from David P. Fidler, adjunct senior fellow for cybersecurity at the Council on Foreign Relations.
The meeting is representative of a familiar pattern in which governmental, private sector, and civil society actors repeatedly call for more action against terrorist activity online. This pattern, Fidler writes, has become all too familiar, and its reappearance in 2017 underscores long-standing questions about strategies against terrorist exploitation of cyberspace.
Combating terrorist exploitation of ICT typically involves “counter-content” and “counter-narrative” activities. In 2016, the American, British, and Australian governments also added a “counter-capability” facet to their strategies.
“Despite actions across this ‘counter’ triad, the director of the U.S. National Counterterrorism Center (NCTC) argued in May that the Islamic State’s global reach ‘is largely intact’ and continues ‘to publish thousands of pieces of official propaganda and to use online apps to organize its supporters and inspire attacks,’” Fidler wrote.
In response to claims that social media companies did not do enough to remove terrorist-related media from their sites, Facebook, Microsoft, Twitter, and YouTube recently created the Global Internet Forum to Counter Terrorism, an initiative that builds on previous counter-content programs such as the Shared Industry Hash Database. The database works by compiling lists of digital “fingerprints” of violent terrorist imagery and recruitment videos, then flagging matching uploads for removal.
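To illustrate the mechanism described above, here is a minimal sketch of hash-based content flagging. The class and method names are hypothetical, and the sketch uses exact SHA-256 matching for simplicity; production systems such as the Shared Industry Hash Database rely on perceptual hashes that also survive re-encoding or cropping of the media.

```python
import hashlib


class SharedHashDatabase:
    """Hypothetical sketch of a shared 'fingerprint' list for flagged media."""

    def __init__(self):
        self._hashes = set()

    def fingerprint(self, content: bytes) -> str:
        # Exact-match digest; real systems use perceptual hashing instead.
        return hashlib.sha256(content).hexdigest()

    def add(self, content: bytes) -> None:
        """Register content one member platform has identified as violating."""
        self._hashes.add(self.fingerprint(content))

    def should_flag(self, content: bytes) -> bool:
        """Check an upload against the shared fingerprint list."""
        return self.fingerprint(content) in self._hashes


db = SharedHashDatabase()
db.add(b"bytes of a known violating video")
print(db.should_flag(b"bytes of a known violating video"))  # re-upload matches
print(db.should_flag(b"bytes of unrelated content"))
```

The design point is that member companies share only the fingerprints, not the underlying media, so a video removed on one platform can be blocked on the others without redistributing it.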
The forum will also focus on artificial intelligence as a counter-content strategy, a move that follows calls made by the U.S. Department of Justice (DOJ) to use machine learning to combat extremist propaganda. Facebook, in particular, has stated that it will use artificial intelligence as a means to remove terrorist-related content and media.
“The embrace of artificial intelligence represents the most important change to emerge from the latest recriminations against social media companies,” Fidler wrote. “However, relying on machine learning will exacerbate concerns that expanding counter-content measures harms freedom of expression without helping counterterrorism.”
One area that received scant attention, according to Fidler, was ISIS’ use of encryption and the dark web. Political calls to regulate encryption, however, including those made after the attack in Manchester, have caused a rift between national security officials, cybersecurity experts, and human rights advocates.
“Efforts to counter ICT terrorism face deteriorating cybersecurity conditions around the world,” Fidler wrote. “The lack of U.S. leadership, fears about Russian cyber-meddling in elections, global ransomware attacks, the proliferation of government-sponsored hacking operations, and disintegration of consensus on international law’s application in cyberspace make collective action difficult.”