It takes only a fraction of a second to hit the retweet button on Twitter. But when thousands of people retweet at once, a message just 140 characters long can go viral almost instantly.
If that information is incorrect, especially in a crisis, it’s hard for the social media community to gain control and push out accurate information, new research shows.
University of Washington researchers have found that misinformation spread widely on Twitter after the 2013 Boston Marathon bombing, despite users’ efforts to correct false rumors. The researchers presented their findings at iConference 2014 in Berlin, held March 4-7, where their paper received a top award. (The accompanying network graph shows relationships among the 100 most prevalent hashtags used on Twitter after the Boston Marathon bombing; connecting lines link hashtags that appeared in the same tweet. #boston was dropped from the graph because it connected with every other tag.)
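The graph described in that caption is a hashtag co-occurrence network. Below is a minimal sketch of how such a graph might be built, assuming tweets are available as lists of hashtags; the function and variable names are illustrative, not taken from the study’s code.

```python
# Sketch: build a hashtag co-occurrence graph like the one described above.
# Assumes `tweets` is an iterable of hashtag lists extracted from the dataset;
# all names here are illustrative, not from the study's actual pipeline.
from collections import Counter
from itertools import combinations

import networkx as nx

def cooccurrence_graph(tweets, top_n=100, drop=("#boston",)):
    # Count hashtag frequency and keep the top_n most prevalent tags,
    # excluding any tag (e.g. #boston) that would connect to everything.
    counts = Counter(tag for tags in tweets for tag in tags)
    keep = {t for t, _ in counts.most_common(top_n + len(drop)) if t not in drop}

    g = nx.Graph()
    for tags in tweets:
        present = sorted(set(tags) & keep)
        # Add an edge for every pair of kept hashtags in the same tweet,
        # weighting by how often the pair co-occurs.
        for a, b in combinations(present, 2):
            w = g.get_edge_data(a, b, {}).get("weight", 0)
            g.add_edge(a, b, weight=w + 1)
    return g

g = cooccurrence_graph([["#boston", "#prayforboston"],
                        ["#mit", "#manhunt", "#prayforboston"]])
print(g.edges(data=True))
```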
On April 15, 2013, two explosions near the finish line of the Boston Marathon killed three people. Three days later, the FBI released photos and surveillance video of two suspects, enlisting the public’s help in identifying them. Massive speculation broke out in mainstream media and on social media sites, particularly Twitter. After a shooting on the Massachusetts Institute of Technology campus and an ensuing manhunt, one suspect was shot dead and the other was arrested on the evening of April 19.
Throughout, a flurry of tweets appeared under hashtags such as #boston, #prayforboston, #mit and #manhunt. A number of false rumors surfaced and spread rapidly before corrections began to appear, and even then, corrective tweets were few compared with the volume of tweets spreading the misinformation.
“We could see very clearly the negative impacts of misinformation in this event,” said Kate Starbird, a UW assistant professor in the Department of Human Centered Design & Engineering. “Every crisis event is very different in so many ways, but I imagine some of the dynamics we’re seeing around misinformation and organization of information apply to many different contexts. A crisis like this allows us a chance to see it all happen very quickly, with heightened emotions.”
Starbird, whose research looks at the use of social media in crisis events, began recording the stream of tweets about 20 minutes after the finish-line bombing. Her team, with the help of collaborator Bob Mason in the UW’s Information School, later obtained the complete dataset – 20 million tweets – to fill in gaps where the sheer volume of incoming tweets was too great to capture in real time.
Researchers from the UW and Northwest University in Kirkland, Wash., analyzed the text, timestamps, hashtags and metadata in 10.6 million tweets to first identify rumors, then code tweets related to the rumors as “misinformation,” “correction” or “other.”
For example, they analyzed the rumor that an 8-year-old girl had died in the bombings. The researchers first identified tweets containing the words “girl” and “running,” then whittled that down to roughly 92,700 that were related to the rumor. They then found that about 90,700 of these tweets were spreading misinformation, while only about 2,000 were corrections. While the Twitter community offered corrections within the same hour the rumor appeared, the misinformation still persisted long after correction tweets had faded away.
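A rough sketch of that filter-and-code tally appears below, assuming the dataset is a pandas DataFrame with hypothetical `text` and `timestamp` columns; the keyword matching is a crude stand-in for the study’s actual coding, which involved careful manual review.

```python
# Sketch: filter a tweet corpus down to one rumor and tally coded categories
# over time. Column names, keywords and the coding rule are hypothetical.
import pandas as pd

def rumor_timeline(df, keywords=("girl", "running"),
                   correction_terms=("fake", "not true", "hoax", "false")):
    text = df["text"].str.lower()
    # Keep only tweets containing all rumor keywords (e.g. "girl" and "running").
    mask = pd.Series(True, index=df.index)
    for kw in keywords:
        mask &= text.str.contains(kw, regex=False)
    rumor = df[mask].copy()

    # Crude first-pass coding: tweets mentioning a correction term are coded
    # "correction"; the rest are provisionally "misinformation". The study's
    # real coding was far more careful than this keyword heuristic.
    is_corr = rumor["text"].str.lower().str.contains("|".join(correction_terms))
    rumor["code"] = is_corr.map({True: "correction", False: "misinformation"})

    # Hourly counts per code (assumes `timestamp` is a datetime column), to show
    # how corrections fade away while the misinformation keeps circulating.
    return (rumor.groupby([pd.Grouper(key="timestamp", freq="h"), "code"])
                 .size().unstack(fill_value=0))
```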
“An individual tweet by itself is kind of interesting and can tell you some fascinating things about what was happening, but it becomes really interesting when you understand the larger context of many tweets and can look at patterns over time,” said Jim Maddock, a UW undergraduate student in Human Centered Design & Engineering and history who did most of the computational data analysis for this project.
A previous study analyzing the spread of misinformation on Twitter during the 2010 Chile earthquake found that Twitter users actually crowd-corrected the rumors before they gained traction. But the earlier study excluded all retweets, which the UW team found to be a significant portion of the tweets spreading misinformation.
The UW researchers hope to develop a tool that could let users know when a particular tweet is being questioned as untrue. The real-time tool wouldn’t try to determine whether a tweet is true or false, but it would track instances where one tweet is contested by another.
“We can’t objectively say a tweet is true or untrue, but we can say, ‘This tweet is being challenged somewhere, why don’t you research it and then you can hit the retweet button if you still think it’s true,’” Maddock said. “It wouldn’t necessarily affect that initial spike of misinformation, but ideally it would get rid of the persisting quality that misinformation seems to have where it keeps going after people try to correct it.”
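A toy sketch of what such a contested-tweet tracker could look like follows, assuming access to a stream of replies or quote tweets; the challenge vocabulary, threshold and data shapes are all assumptions, not the team’s actual design.

```python
# Sketch: flag tweets that are being contested by replies or quote tweets.
# The data shapes and challenge vocabulary here are assumptions.
from collections import defaultdict
from dataclasses import dataclass

CHALLENGE_TERMS = ("fake", "not true", "false", "hoax", "debunked", "rumor")

@dataclass
class Reply:
    target_id: str   # id of the tweet being replied to or quoted
    text: str

class ContestedTracker:
    def __init__(self, threshold=3):
        self.challenges = defaultdict(int)
        self.threshold = threshold  # challenges required before flagging

    def observe(self, reply: Reply) -> bool:
        """Record a reply; return True if its target is now 'contested'."""
        text = reply.text.lower()
        if any(term in text for term in CHALLENGE_TERMS):
            self.challenges[reply.target_id] += 1
        return self.challenges[reply.target_id] >= self.threshold

tracker = ContestedTracker(threshold=1)
if tracker.observe(Reply("12345", "This is fake, the photo is from 2008")):
    print("Tweet 12345 is being challenged - research it before retweeting.")
```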
The team is currently looking at the relationship between website links shared within tweets and the quality of information spread during the Boston crisis. The researchers are also interviewing people who were close to the scene in 2013 to see what effect proximity had on information sharing.
Paper co-authors are Mason and Mania Orand of the UW and Peg Achterman of Northwest University.
The research was funded by the National Science Foundation.
Story by Michelle Ma, News and Information, March 17, 2014