
Congressional scrutiny of social media company moderation grows in wake of U.S. shootings

Once deemed one of the last frontiers, the Internet has seen its uses change and multiply exponentially over the years, and now 13 members of Congress have written to social media companies asking for evidence of how they respond to violent threats and harmful content.

Traditionally, these companies have operated much like the town halls and forums of old, acting as venues where others can exercise free speech while bearing little responsibility for the content of that speech. That position has come under increased scrutiny over the past decade, as lawmakers and the public have pointed to a parallel rise in violent incidents online and in the physical world, with shootings at the top of the list.

In their letter, the lawmakers pointed to high-profile cases such as Highland Park, Ill.; Uvalde, Texas; and Buffalo, N.Y. – where shooters assailed a crowd, an elementary school, and a grocery store, respectively. Though these have dominated the headlines in recent days, there have been more than 350 mass shootings in the United States this year.

“As Members of Congress, we continue to grieve the lives lost and the tragic impact to our communities, but it is also a call to action,” the representatives wrote. “As representatives, we must always be asking whether there are ways to prevent mass shootings like this. The more that we learned about the shooters, the more we learned that there were warning signs which, if properly heeded, may have prevented these tragedies. Sources revealed that the shooter in the Highland Park incident frequently used the social media platform, Discord, to share violent materials, including videos depicting him committing mass murders. The shooter in Buffalo also used Discord to document his plans for the shooting and live streamed it on Twitch.”

Shooters, they added, often post clues to their violent intentions online, but even when that content is reported, companies in many cases fail to act. YouTube, Twitter, Meta, Discord, TikTok, Twitch: these and more were cited by the lawmakers as platforms where users had flagged troubling content.

With that in mind, they asked representatives of the companies to define their content moderation policies and dedicated staff, detail their procedures and turnaround times for handling flagged content, report how often cases have been referred to local law enforcement, and describe what they have done to improve their reporting mechanisms and moderation in the wake of recent mass shootings, hate crimes, and other attacks.

The effort was led by U.S. Reps. Brad Schneider (D-IL), Tony Gonzales (R-TX), and Brian Higgins (D-NY).

Chris Galford
