Republican Kari Lake lost to her Democratic opponent Katie Hobbs in the 2022 Arizona governor’s election. According to results published Nov. 14, Hobbs beat Lake by a slim margin, taking 50.3% of the vote to Lake’s 49.7%.
On the night of Nov. 14 and into the next day, people online said Twitter MAGA bots – automated accounts that are part of the “Make America Great Again” network – were busy pushing for Lake by repeating the phrase “DO NOT CONCEDE,” despite the fact that she lost the race.
VERIFY investigated the bot activity around this phrase being used after the results of the governor’s race were announced.
THE SOURCES
- Arizona Secretary of State unofficial election results
- Report On The Investigation Into Russian Interference In The 2016 Presidential Election
- The Center for Information Technology & Society at University of California Santa Barbara (CITS)
- University of Southern California
- Hoaxy, an open-source analysis tool created by a team at Indiana University to conduct bot analysis around certain trends
- Botometer, an open-source analysis tool that gives bot scores to accounts
- Bot Sentinel, a platform developed to detect and track bots and untrustworthy Twitter accounts
- DHS Office of Cyber and Infrastructure Analysis
WHAT WE FOUND
Before we explore what happened in the race, let’s define what a bot is.
What is a bot?
Bots are typically automated “users” that are created to mimic human behavior through a computer. Social media bots are usually fake accounts made to look like real users. The bot’s creator will program the account to amplify certain messages, or deceive real users. For example, someone can set up a bot account on Twitter, and program it to retweet certain hashtags or phrases automatically.
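The automation described above can be sketched in a few lines of code. In this illustrative example, the `StreamClient` class is a made-up stub standing in for a real social media API wrapper; a real bot would authenticate against the platform's API instead. The phrase list and tweet data are invented for demonstration.

```python
# Minimal sketch of the auto-retweet behavior described above.
# StreamClient is a stand-in stub, NOT a real Twitter/X API client.

class StreamClient:
    """Stub that 'streams' a fixed list of (id, text) tweets."""
    def __init__(self, tweets):
        self.tweets = tweets
        self.retweeted = []

    def stream(self):
        yield from self.tweets

    def retweet(self, tweet_id):
        self.retweeted.append(tweet_id)

# Hypothetical phrases the bot is programmed to amplify.
TARGET_PHRASES = ("DO NOT CONCEDE",)

def run_bot(client):
    # Retweet every tweet containing one of the target phrases.
    for tweet_id, text in client.stream():
        if any(phrase in text.upper() for phrase in TARGET_PHRASES):
            client.retweet(tweet_id)

client = StreamClient([(1, "Do not concede!"), (2, "Nice weather today")])
run_bot(client)
print(client.retweeted)  # only the matching tweet is retweeted: [1]
```

The point of the sketch is how little logic is needed: a single loop and a phrase match are enough to amplify a message at machine speed, which is why identical phrases repeated across many accounts are a red flag.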
The Center for Information Technology & Society at UC Santa Barbara (CITS) says, “bots spread fake news in two ways: They keep ‘saying’ or tweeting fake news items, or they use the same pieces of false information to reply to or comment on the postings of real social media users.”
During the 2016 election, Russian troll organization Internet Research Agency (IRA) operated a network of automated Twitter accounts (commonly referred to as a bot network) that enabled the IRA to amplify existing content on Twitter, a report from the Justice Department said. And in 2020, researchers with the University of Southern California determined bots reached hundreds of thousands of Twitter users in the weeks leading up to the election between Donald Trump and Joe Biden.
How apparent bots spread “DO NOT CONCEDE” on Twitter during Arizona race
Even before the gubernatorial results were announced in the Arizona race, officials with Maricopa County acknowledged election-related bot activity, tweeting a tongue-in-cheek warning to the accounts.
“SOCIAL MEDIA BOTS: Your disapproval is duly noted but your upvotes and retweets will not be part of this year’s totals. This is not meant as an affront to your robot overlords, it’s just not allowed for in Arizona law,” the tweet said.
Following the announcement of Lake’s defeat, “DO NOT CONCEDE” was trending on Twitter, with hundreds of Twitter users seemingly tweeting at Lake to not give up the race, despite the election results.
The phrase “do not concede” was first used widely by Virginia Thomas, wife of Supreme Court Justice Clarence Thomas, in texts to former President Donald Trump’s then-Chief of Staff Mark Meadows in November 2020 as she implored him to act to overturn the election, The Associated Press reported.
Lake, who has long promoted unsubstantiated claims that Trump did not lose the 2020 election, tweeted about the election results on Nov. 14, implying, without evidence, that the election hadn’t been impartial.
In response to those tweets from Lake, accounts replied telling her not to concede the race. “DO NOT CONCEDE” was trending on Twitter, and while some of the tweets were authentic, a large share came from apparent bot accounts, a VERIFY analysis found.
A graph from Hoaxy, an open-source analysis tool created by a team at Indiana University to conduct bot analysis around certain trends, shows exactly how the conversation around “DO NOT CONCEDE” was amplified across Twitter.
When VERIFY searched “DO NOT CONCEDE” on Hoaxy, it reported more than 900 accounts using the phrase.
The tool generates a “bot score” for these types of trending topics. Bot scores are calculated using a machine learning algorithm trained to classify the level of automation an account presents. Accounts identified as blue or green dots have bot scores that show the account is likely run by a human. Yellow accounts are questionable, while orange and red are accounts that exhibit some sort of automation.
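The color buckets described above can be expressed as a simple mapping from a 0-to-1 bot score to a category. The cutoff values in this sketch are illustrative assumptions; Hoaxy and Botometer do not publish these exact thresholds in the article.

```python
# Illustrative sketch: bucket a 0-1 bot score into Hoaxy-style
# color categories. Threshold values are assumptions for illustration.

def color_category(score: float) -> str:
    """Map a bot score (0 = human-like, 1 = fully automated) to a color."""
    if score < 0.2:
        return "blue"    # very likely human
    if score < 0.4:
        return "green"   # likely human
    if score < 0.6:
        return "yellow"  # questionable
    if score < 0.8:
        return "orange"  # shows some automation
    return "red"         # highly automated

# Hypothetical accounts and scores, for demonstration only.
accounts = {"alice": 0.1, "newsfan": 0.55, "replybot9000": 0.9}
for name, score in accounts.items():
    print(name, "->", color_category(score))
```

Under this scheme, the “highly suspicious” group the analysis counts is simply every account whose score lands in the orange or red bucket.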
Thirty-one percent of accounts using the “DO NOT CONCEDE” phrase fell into the highly suspicious categories of users (orange or red), according to Hoaxy data. Here is a look at accounts that are just red:
One of the first accounts to tweet “DO NOT CONCEDE” was __TEAM_USA, which is identified in the smaller module on the left of the Hoaxy graph, surrounded by red dots.
This account was created in April 2022, according to the Twitter profile. According to Botometer, which was also created by Indiana University and gives bot scores, the account appears to be run by a human, but many of its followers are bots. Those bots are amplifying the message spread by the __TEAM_USA account by retweeting the original tweet.
Sharing the same phrase over and over again is a tactic that is most successful for bots because “average social media users tend to believe what they see or what’s shared by others without questioning … so bots take advantage of this by broadcasting high volumes of fake news and making it look credible,” CITS says.
Bot Sentinel, a platform developed to detect and track bots and untrustworthy Twitter accounts, shows that at least 24% of the accounts that retweeted the __TEAM_USA tweet were created in November 2022, which is another sign of suspicion: the accounts were created around the time of the midterm elections.
In contrast, a Hoaxy search for just “Arizona results” returned more than 900 accounts tweeting about that phrase, and only 13% of those fell into the suspicious categories of users (orange or red).
Examples of accounts that show bot-like qualities
This Twitter account, for example, retweeted the __TEAM_USA tweet. While it doesn’t have many followers, its biography contains suspicious characters and evidence of spam activity, and its retweet rate suggests automation.
The account also lists a location in California, not Arizona, which adds to the evidence that the account is suspicious.
Another Twitter account with very few followers tweets in high volume, responds to one account many times and typically posts only memes or screenshots. Those are all signs of bot-like behavior. The cover photo and profile picture are also generic symbols.
This account is suspicious because it was created in November 2022, doesn’t have any profile or cover photos, and retweets in high volume. The username is also generic and contains a lot of numbers.
This account also retweets in high volumes and shows the same characteristics – generic user name and no visible signs of identity (photos, etc.) on the profile.
How to spot bots on social media
Bots don’t only exist on Twitter. The Department of Homeland Security Office of Cyber and Infrastructure Analysis offers tips on how to spot a social media bot:
Here are five examples of typical bot behaviors that can exist on any social network and can help you determine whether an account mentioning you or posting content is a bot.
- Click or like farming: An account that promotes a website through liking or reposting content. These bots also allow people to buy similar fake accounts to boost their number of followers.
- Hashtag hijacking: An account that uses hashtags to attack users using the same hashtag.
- Repost storms: An account posts something, and then a group of bots instantly reposts it.
- Sleeper bots: An account that is dormant and then posts suddenly over a short period of time.
- Trend jacking: An account that acts similarly to hashtag hijacking, but uses trending topics to attack an intended targeted audience.
Other things to look for when determining if an account is a bot:
- Look at the profile picture – is it generic or does it look authentic?
- What is the account name? – If it is a string of numbers and letters, that’s an indication the account was auto-generated
- When was the account created?
- What does the account post? Does it only share or retweet, or does it form legible sentences?
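The checklist above can be turned into a simple heuristic screen. In this sketch, the field names, the dictionary format, and the thresholds (such as “created within 30 days”) are assumptions for illustration, not a real detection algorithm; genuine bot detection tools like Botometer use machine learning over many more features.

```python
# Illustrative sketch of the bot checklist as code. Field names and
# thresholds are assumptions, not a real detection method.
import re
from datetime import date

def suspicion_signals(account: dict, today: date = date(2022, 11, 15)) -> list:
    """Return the checklist items an account trips."""
    signals = []
    if not account.get("has_profile_photo"):
        signals.append("generic or missing profile picture")
    # A long run of digits at the end of a username often indicates
    # an auto-generated account.
    if re.search(r"\d{5,}$", account["username"]):
        signals.append("auto-generated-looking username")
    if (today - account["created"]).days < 30:
        signals.append("very recently created account")
    # Accounts that only retweet and never write original posts.
    if account["retweets"] > 0 and account["original_posts"] == 0:
        signals.append("only retweets, no original posts")
    return signals

# Hypothetical account that trips all four checks.
acct = {
    "username": "patriot19450021",
    "has_profile_photo": False,
    "created": date(2022, 11, 2),
    "retweets": 340,
    "original_posts": 0,
}
print(suspicion_signals(acct))  # all four signals fire
```

No single signal proves an account is automated; it is the combination of several, as in the examples above, that makes an account suspicious.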
If you have questions, or want something or a particular account looked into, the VERIFY team is here for you. Send your questions to firstname.lastname@example.org.
More from VERIFY: How to spot manipulated videos, including deepfakes and shallowfakes