The A.I.’s Have It: Trump’s Cyber Supporters

Following the 2016 election, many have scrutinized the communities that raised Donald Trump to the U.S. presidency. A recent study by researchers at Corvinus University, Oxford, and the University of Washington reveals that one of the most vocal of these groups, however, is not exactly human. The report documents how Twitter bots accounted for nearly a quarter of all hashtagged election postings.

Bots are software-based accounts masquerading as genuine users; they appear legitimate in their design, the content they broadcast, and the way they interact with other social accounts. This makes them effective tools for spreading automated propaganda.

Samuel Woolley, a University of Washington Communication doctoral candidate who contributed to the report, explains that “political bots have been used globally as instruments for massively, and computationally, ramping up efforts to threaten journalists, interrupt communication amongst activists, and manipulate public opinion.” Moreover, although bots have been wielded as political weapons in countries like Turkey, Syria, and Russia, project researchers theorize that their most pervasive use occurred during the 2016 U.S. election.

Bot accounts can tweet an almost inexhaustible amount per day, inflating the apparent popularity of content like political hashtags. Woolley and his colleagues found that around 19 million Twitter bots posted in support of either Clinton or Trump in the week leading up to Election Day. Trump bots, however, outnumbered Clinton bots 5:1. “Trump bots also worked in a more sophisticated fashion,” Woolley stated, pointing in particular to how these bots colonized pro-Clinton hashtags (like #ImWithHer) to spread damaging misinformation to potential voters.

Unfortunately, combating this programmed plague is as tricky as the bots at the center of it. In addition to the challenges of distinguishing between more sophisticated bot accounts and people, Woolley notes that “bots are an integral, and infrastructural, part of Twitter,” especially for advertisers. Moreover, there is the question of protected speech and the various legal entanglements into which social media platforms can fall when they go after political bot accounts.

To date, Twitter and Facebook have argued that as “technology companies,” they do not have the authority to closely curate content, but Woolley and many others consider this a flimsy excuse. “This defense simply doesn’t work,” Woolley argues, “especially not when these automated accounts are used as proxies for attacking democratic activists and journalists and to spread fake news stories.”

Despite this reluctance on the part of social platforms to act against bots, savvy users can still shield themselves from automated propaganda. Woolley’s research has identified three classes of markers users can employ to identify bot accounts (a rough code sketch follows the list):

  1. Time-oriented (temporal) information: How often and how much is the account broadcasting? Is it performing in a way that is beyond human capability?
  2. Content-oriented (semantic) information: Can the account effectively communicate with other users? Does its content make sense?
  3. Social-oriented (network) information: How diverse are the account’s network connections? Woolley and his team found that bot accounts tend to follow only one another and to sit on the edge of conversations before jumping in with outside information.
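To give a concrete feel for how these three classes of markers might be combined, here is a minimal Python sketch. It is not the researchers’ actual detection method; the account fields (tweets_per_day, reply_coherence, follower_diversity, and so on) and every threshold are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    """Hypothetical summary of one account's observable behavior."""
    tweets_per_day: float              # temporal: posting volume
    active_hours_per_day: float        # temporal: how long the account stays active
    reply_coherence: float             # semantic: 0-1 score for sensible, on-topic replies
    duplicate_content_ratio: float     # semantic: share of posts that are near-duplicates
    follower_diversity: float          # network: 0-1 score; low means a closed cluster
    follows_suspected_bots_only: bool  # network: connections limited to other flagged accounts


def bot_likelihood_flags(acct: AccountSnapshot) -> list[str]:
    """Return which of the three marker classes this account trips.

    Thresholds here are illustrative guesses, not values from the study.
    """
    flags = []

    # 1. Temporal: is the account posting beyond plausible human capacity?
    if acct.tweets_per_day > 200 or acct.active_hours_per_day > 20:
        flags.append("temporal")

    # 2. Semantic: does the content read like genuine communication?
    if acct.reply_coherence < 0.3 or acct.duplicate_content_ratio > 0.7:
        flags.append("semantic")

    # 3. Network: are its connections a closed, homogeneous cluster?
    if acct.follower_diversity < 0.2 or acct.follows_suspected_bots_only:
        flags.append("network")

    return flags


if __name__ == "__main__":
    suspect = AccountSnapshot(
        tweets_per_day=850,
        active_hours_per_day=24,
        reply_coherence=0.1,
        duplicate_content_ratio=0.9,
        follower_diversity=0.05,
        follows_suspected_bots_only=True,
    )
    print(bot_likelihood_flags(suspect))  # -> ['temporal', 'semantic', 'network']
```

The point of the sketch is simply that no single signal is decisive: an account flagged on all three dimensions at once is far more suspect than one that merely tweets a lot.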

Although it is difficult to quantify how much misinformation spread by automated propaganda actually influences political action, there is hope that the political future will not be determined by social media’s bot overlords.