It’s bots, BOTS, I tell you. Russian ones at that. (Frickin’ Russians). Could it all really be so simple?
Hold your horses…or should I say fathiers?
This article is also available as a podcast:
This week, Star Wars related corners of the internet were inundated with a series of headlines purporting to have discovered the real reason that the franchise’s most recent episodic installment, The Last Jedi, received unexpected levels of backlash. News outlets from the Hollywood Reporter to Sky News to The Guardian to The Mary Sue and beyond all ran stories discussing the findings of a recent research study by Ph.D. student Morten Bay, touting the discovery of the cause of half of the divisive discourse surrounding Star Wars Episode 8.
These publications all ran with some variation of the rather clickbaity lead that Russian troll bots were responsible for half (or a large portion) of the negative criticism of the film. Additionally, there’s a subtle undertone in some of these articles that takes the interpretation a step further, implying that because bots were involved, fan backlash and criticism must somehow be overstated. All the nastiness, disappointment, and disgruntled responses can be waved away and interpreted as Russian operatives getting in on high-profile media coverage to further stir the pot, sow the seeds of American discord, and further the political agenda of Mother Russia.
And that’s that. End of story.
Unless, of course, one actually reads Bay’s study, Weaponizing the haters: The Last Jedi and the strategic politicization of pop culture through social media manipulation. In which case the picture is somewhat different. So much so, that the paper’s author is actually speaking out. Outlets like CNET and The Washington Post have quoted Bay in articles published later in the week, in response to and pushing back against the ‘it’s all the bots’ fault’ narrative other publications ran with earlier in the week. (Some publications later edited their article titles.)
Bay told The Post, “I really tried to be very careful in how I framed this. There’s no evidence Russians did anything unusual or meaningful.”
CNET quotes the study’s author as being, “moderately disappointed in some of the major media brands” because his study actually found that “the suspected Russian trolls are so few that it is basically the normal amount of Russian trolls you would expect to be present in a high-profile online debate.”
How could so many news outlets get this wrong? What does the study actually tell us?
In order to understand that, the first thing to look at is what kind of data was actually collected and how Bay conducted his analysis. There are lots of ways to set up a research project, and there were plenty of directions this one could have taken. The author of this study is very clear that its scope was limited to Twitter, and in fact it only reviewed publicly available tweets directed towards The Last Jedi’s director, Rian Johnson. (No wonder he tweeted that the study was consistent with his experience online!) Bay gathered 1,273 tweets aimed at Johnson during the seven-month period between the film’s release and mid-July 2018.
When working with qualitative data, such as tweet content, it is necessary during analysis to organize the narrative (or segments of narrative) into groups in order to look for trends and patterns. Since qualitative analysis is by its nature more subjective than quantitative analysis, it’s important to have a clear set of rules to work with. This is called coding. It’s kind of like putting fish into different buckets.
Bay manually organized his collected tweets according to specific criteria based on the language of the tweets and the kind of vocabulary used. Some qualitative analysis software exists, but in this case the author of the study did sentiment analysis without it (which isn’t unusual for a data set this small). In the end, of the total tweets collected, 967 were actually analyzed, after errors and duplicate tweets were removed. The study places the examined tweets into three buckets: positive, negative, and neutral.
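For readers unfamiliar with coding, here is a toy sketch of what a rule-based version of this bucketing might look like. To be clear, this is not Bay’s method (he coded his tweets manually, by reading them); the keyword lists, function name, and example tweets below are invented purely to illustrate the concept of sorting text into sentiment buckets.

```python
# Toy illustration of qualitative "coding": sorting tweets into
# positive / negative / neutral buckets. The keyword lists and
# example tweets are invented for illustration only -- Bay's actual
# study coded tweets manually, not with a script like this.

POSITIVE = {"love", "loved", "great", "amazing", "masterpiece"}
NEGATIVE = {"hate", "hated", "worst", "ruined", "terrible"}

def code_tweet(text: str) -> str:
    """Assign a tweet to one of three buckets based on keyword counts."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)  # how many positive keywords appear
    neg = len(words & NEGATIVE)  # how many negative keywords appear
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tweets = [
    "Loved every minute, a masterpiece",
    "You ruined my childhood, worst film ever",
    "Saw it opening night with friends",
]
buckets = {t: code_tweet(t) for t in tweets}
```

Real coding schemes are far more nuanced than keyword matching, of course; the point is simply that every tweet ends up in exactly one clearly defined bucket, so patterns can then be counted and compared.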
The bot contribution comes into play when you look more closely at the bucket of tweets that Bay coded as ‘negative’. Note that it’s only one bucket, the negative one, and only for tweets directed at Johnson, far from the ‘half of all negative activity’ related to the film that many of this week’s articles tried to suggest. Bay’s evidence shows that, yes, bots, as well as sockpuppets (multiple accounts that appear to be operated by a single person) and political operatives, were present and involved in the discourse addressed to Rian Johnson, because on a platform like Twitter, a story of this size is likely to always attract some level of that kind of activity. Bay frames the study in the context of previous research done by other scholars on the state of digital discourse on social media platforms, beyond topics related to Star Wars.
This study, as well as the other work on the topic, indicates what many already anecdotally believe. Twitter has bots and operatives. As a norm. And if a story or trend grows enough, we’re likely always (or often) going to find some level of this artificial commentary. This points to some pretty interesting and sobering avenues for further study of social media, but it doesn’t tell the complete story about that, nor does it completely explain the circumstances that followed The Last Jedi. Usually, no single data point or study can definitively answer every question we have about a topic, especially when it’s complicated.
(If nothing else, the response to Star Wars Episode 8 can be described as complicated.)
Lots of data points working together can provide better understanding. Despite what some of these articles asserted, this single study doesn’t really explain the complete response and backlash to The Last Jedi. It does provide insight into a lot of potential directions for further study. What if one created a different coding schema and used it to review the same set of data? What about discourse, negative or not, that was on Twitter but not directed at Rian Johnson or any official Star Wars related handle? What about the discourse that occurred on other social media platforms? Facebook? Reddit? Tumblr? What about non-text based platforms, such as YouTube or podcasts? All of these are excellent and fascinating starting points for further research, and I really hope and encourage scholars with an interest in digital and popular culture to pursue them.
And what about the clickbait reporting? At best, these articles take a huge leap from how Bay’s paper frames things (suggesting statistical illiteracy); at worst, they are being deliberately obtuse in order to quickly gain web traffic.
Neither explanation is very confidence inspiring.
Wanting people to read your article is totally valid and understandable, but how you make that happen is important. Twisting or mis-characterizing statistics is not a good look, and it doesn’t help deepen anyone’s understanding. As with all big questions, different areas of data can come together to provide context and a more complete picture from which to discuss and suggest possible conclusions.
Bay’s report is a start and that’s actually a really interesting story in and of itself! Without exaggeration. (Take note Hollywood Reporter.)
The response to The Last Jedi is endlessly interesting to those of us who care about Star Wars, but as discussed, there is never going to be one definitive study or perfect data set that will reveal all the underlying factors that influenced the response in a unified way. What happened has many factors and angles.
We as the reading public should be skeptical of those who try to explain it too simply. And when in doubt, go back to the original source.