Researchers at Los Alamos National Laboratory have developed a new artificial intelligence (AI) tool capable of identifying COVID-19-related conspiracy theories on social media.
The tool models how these conspiracy theories evolve and could someday help public health officials combat misinformation online.
“A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories,” Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at Los Alamos National Laboratory and co-author of the study published in the Journal of Medical Internet Research, said. “Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods.”
The study, titled “Thought I’d Share First,” used publicly available, anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and to provide context for how each spread over the first five months of the pandemic.
The four themes examined were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered the virus or otherwise had malicious intent related to COVID-19; that the virus was bioengineered or developed in a laboratory; and that COVID-19 vaccines, which were then all still in development, would be dangerous. The researchers built AI models that categorize tweets as COVID-19 misinformation or not.
The analysis found that misinformation tweets carry more negative sentiment than factual tweets, and that conspiracy theories evolve by incorporating details from other, unrelated conspiracy theories as well as from real-world events.
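The kind of sentiment comparison described above can be illustrated with a minimal sketch: a toy lexicon-based scorer that rates each tweet by counting positive and negative words, then compares group averages. The word lists and sample tweets below are hypothetical, invented for illustration; the study's actual models and data are not shown here, and real analyses typically use established sentiment tools rather than a hand-built lexicon.

```python
# Toy lexicon-based sentiment comparison (illustrative only; not the
# study's method). The word lists and example tweets are hypothetical.

NEGATIVE = {"dangerous", "hoax", "kill", "fear", "poison", "lie"}
POSITIVE = {"safe", "help", "protect", "recover", "trust", "effective"}

def sentiment_score(tweet: str) -> int:
    """Positive-minus-negative word count; >0 leans positive, <0 negative."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_sentiment(tweets: list[str]) -> float:
    """Average sentiment score across a group of tweets."""
    return sum(sentiment_score(t) for t in tweets) / len(tweets)

# Hypothetical samples standing in for the two tweet categories.
misinfo = ["5g towers kill and the vaccine is poison",
           "the virus is a hoax and a lie to spread fear"]
factual = ["vaccines are safe and protect people",
           "masks help and most patients recover"]

# On this toy data, the misinformation group scores lower (more negative).
print(mean_sentiment(misinfo), mean_sentiment(factual))
```

Under this sketch, the misinformation sample averages a negative score while the factual sample averages a positive one, mirroring the direction of the effect the researchers report, though the real study's measurement is far more sophisticated.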