UAA faculty and students use machine learning to debunk COVID-19 misinformation

by Matt Jardin

OpenAI's ChatGPT generates a response to a prompt for COVID-19 misinformation. (Photo by James Evans / University of Alaska Anchorage)

Since the launch of the cutting-edge chatbot ChatGPT in late 2022, the potential applications of artificial intelligence and machine learning have dominated the news.

At UAA, computer science Professor Shawn Butler, Ph.D., has been using machine learning to debunk COVID-19 misinformation on social media. Butler’s efforts are part of the Division of Population Health Sciences and Department of Journalism and Public Communication’s mission to combat COVID-19 misinformation on public-facing Facebook pages with scientifically accurate information from credited sources through its ongoing Alaska Public Health Information Response Team project.

“The damage done with misinformation, especially on social media, is something we've never seen before,” said Butler. “We almost eradicated polio until people started saying they’re not going to take the vaccine because of something they read online.”

Identifying and responding to misinformation on the internet can be a daunting, time-consuming process. So Butler and her team developed a way to use machine learning to help flag COVID-19 misinformation automatically through natural language processing: a model is fed a data set of posts hand-labeled as “misinformation” or “not misinformation,” with point values assigned to certain keywords and phrases, and learns from those examples to identify misinformation in text that has not been labeled.
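The article does not name the team's specific tools, but the approach it describes is standard supervised text classification. The sketch below, assuming a scikit-learn-style pipeline with invented example posts and labels, is only an illustration of the general technique, not Butler's actual code.

```python
# Minimal sketch of supervised misinformation classification (assumed
# scikit-learn pipeline; example posts and labels are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training set: 1 = misinformation, 0 = not misinformation.
posts = [
    "The vaccine changes your DNA",
    "Vaccines are tested in large clinical trials",
    "Masks cause oxygen deprivation",
    "Washing hands reduces the spread of viruses",
]
labels = [1, 0, 1, 0]

# TF-IDF assigns weighted scores to keywords and phrases, which is one common
# way to realize the "point values assigned to certain keywords" idea.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The trained model can then flag unlabeled posts for human review.
print(model.predict(["New study proves vaccines are dangerous"]))
```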

Currently, Butler’s model correctly identifies misinformation about 80% of the time, but correctly identifies what isn’t misinformation only about 50% of the time, numbers she is confident will improve once the model is trained on a much larger labeled data set.
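In classification terms, those two figures correspond to per-class accuracy, i.e., recall on each label. The small example below, assuming scikit-learn and using invented counts rather than the project's results, shows how such numbers are computed from a labeled test set.

```python
# Hypothetical per-class accuracy (recall) calculation; counts are invented.
from sklearn.metrics import confusion_matrix

y_true = [1] * 10 + [0] * 10                     # 1 = misinformation, 0 = not
y_pred = [1] * 8 + [0] * 2 + [0] * 5 + [1] * 5   # 8/10 and 5/10 correct

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("misinformation recall:", tp / (tp + fn))      # 0.8 -> "80%"
print("not-misinformation recall:", tn / (tn + fp))  # 0.5 -> "50%"
```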

A second model helps gauge the effectiveness of the response team’s efforts by measuring how the sentiment of replies changes after a team member counters misinformation with accurate information. According to Butler, those resulting conversations show a positive shift in sentiment.
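The article does not say which sentiment model the team uses. As a rough sketch of the before-and-after comparison, the example below assumes an off-the-shelf analyzer (NLTK's VADER) and invented replies; it is not the team's evaluation pipeline.

```python
# Sketch of measuring sentiment change in a reply thread before and after a
# corrective response. Assumes NLTK's VADER analyzer; replies are invented.
# Requires: nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

replies_before = ["This vaccine stuff is a scam", "I don't trust any of it"]
replies_after = ["Thanks for the link, that study is reassuring",
                 "Good to know, I'll read the CDC page"]

def mean_compound(replies):
    # VADER's compound score ranges from -1 (most negative) to +1 (most positive).
    return sum(analyzer.polarity_scores(r)["compound"] for r in replies) / len(replies)

print("sentiment shift:", mean_compound(replies_after) - mean_compound(replies_before))
```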

Looking ahead, Butler hopes to use machine learning to pre-bunk misinformation before anyone even has the chance to consider it, bringing to mind the old adage that “a lie can travel halfway around the world while the truth is putting on its shoes.”

“In controlled situations, research shows that pre-bunking is more effective than debunking,” said Butler. “If somebody knows what the scam is, it’s easier for them to see it rather than be convinced once they have already fallen for it.”

"UAA faculty and students use machine learning to debunk COVID-19 misinformation" is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.