There are good bots – say, the minions that do humans’ bidding by performing automated tasks like customer service – and then there are bad bots: the malevolent ones behind fraud and botnet-fueled DDoS attacks, for example.
Then again, there are Rock ’em Sock ’em bots on Wikipedia that have silently waged war on each other, undoing one another’s article edits in thousands of battles that can, and sometimes do, last for years.
After all, sustaining a grudge is a lot easier when you don’t have to take breaks. Or breathe. The Guardian quoted Taha Yasseri, who worked on the study at the Oxford Internet Institute:
The fights between bots can be far more persistent than the ones we see between people. Humans usually cool down after a few days, but the bots might continue for years.
The researchers took a look at bot-on-bot conflict in the first ten years after Wikipedia was launched in 2001.
Back in the beginning, Wikipedia had just a few bots. These computer scripts roam the Wikipedia ecosystem, automatically handling repetitive, mundane housekeeping tasks such as spotting and erasing vandalism, spellchecking, creating inter-language links, greeting newcomers, and identifying copyright violations.
They’re easy to distinguish from human editors, the researchers note, given that they operate from dedicated user accounts that have been flagged and officially approved. That approval is granted only if the bot’s human operator agrees to keep it in line with Wikipedia’s bot policy.
Since the early days of Wikipedia, the bot population has exploded. As of 2014, bots completed about 15% of the edits on all language editions of the encyclopedia.
But until now, nobody had taken a look at how those bots are getting along with each other in this increasingly complex bot ecosystem. Do they disagree? How do they act when they see things differently? And are there differences between bots that operate in different languages?
In a recently released paper titled Even Good Bots Fight, published in the journal PLOS ONE, the researchers reported that oh, yes, there are distinct differences between bots working in different languages.
In general, bots undo each other’s edits a lot, reverting to older article versions in order to erase what a competing bot has done. Over the ten-year period examined, bots on English Wikipedia reverted each other an average of 105 times. Human editors, by comparison, reverted each other an average of only three times.
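The revert counting behind these figures can be sketched in code. This is a minimal illustration only: the edit-log format here (a chronological list of editor/content pairs per article, with a revert defined as restoring an earlier content state) is a simplification invented for this example, not the study’s actual methodology, which worked with full Wikipedia revision histories.

```python
# Illustrative sketch: counting who reverted whom in a simplified edit log.
# The data model (editor, content_hash) is an assumption for this example.

from collections import Counter

def count_reverts(history):
    """Count reverts per (reverter, reverted) pair.

    `history` is a chronological list of (editor, content_hash) tuples.
    An edit counts as a revert when it restores a previously seen
    content state, undoing the immediately preceding editor's change.
    """
    reverts = Counter()
    seen = set()
    for i, (editor, content) in enumerate(history):
        if i > 0 and content in seen:
            prev_editor = history[i - 1][0]
            if prev_editor != editor:
                reverts[(editor, prev_editor)] += 1
        seen.add(content)
    return reverts

# Toy example: two bots flip an article back and forth between versions.
log = [
    ("Xqbot", "v1"),
    ("DarknessBot", "v2"),
    ("Xqbot", "v1"),        # Xqbot restores v1, reverting DarknessBot
    ("DarknessBot", "v2"),  # DarknessBot reverts right back
]
print(count_reverts(log))
# → Counter({('Xqbot', 'DarknessBot'): 1, ('DarknessBot', 'Xqbot'): 1})
```

Summing each bot’s reverts over a decade of such logs, then averaging per bot, gives figures like the 105-reverts-per-bot average reported for English Wikipedia.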
Bots on German Wikipedia, meanwhile, are particularly laid-back: they clash with other bots an average of only 24 times, far less often than bots in other languages. Compare that with the hot-headed Portuguese bots, which are always at each other’s throats: they duke it out with bot-on-bot reverts an average of 185 times per bot.
It sounds like a hot mess, but Portuguese bots aren’t actually any more feisty than German bots. The striking difference in bot-bot fights disappeared when the researchers took into account the fact that bots on Portuguese Wikipedia simply edit more than bots on German Wikipedia.
In fact, bots do far more work than humans. They may seem feistier, but that’s only because they make so many more edits: the proportion of a bot’s edits aimed at undoing other bots’ work is actually smaller than the corresponding proportion for human-to-human reverts.
English and Romance-language bots – Spanish, French, Portuguese, and Romanian – are the most combative, undoing more edits than bots in other languages.
The researchers surmise that a significant portion of bot-bot fighting happens across languages, not within. The evidence: there are few articles with many bot-bot reverts, and those articles tend to be the same, regardless of language.
Some of the articles most contested by bots are about former Pakistan President Pervez Musharraf, Uzbekistan, Estonia, Belarus, Arabic language, Niels Bohr, and Arnold Schwarzenegger.
As far as human editors fighting among themselves, the researchers found that most of the reverts concerned local personalities and entities that tended to be unique for each language.
There’s a lineup of bots responsible for the most bot-bot clashes, and it includes bots that specialize in fixing inter-wiki links between languages: Xqbot, EmausBot, SieBot, and VolkovBot.
The Guardian reports that Xqbot and Darknessbot fought one of the most intense battles: they waged war over 3,629 different articles between 2009 and 2010. During that time, Xqbot undid more than 2,000 edits made by Darknessbot. Darknessbot fought back by undoing more than 1,700 of Xqbot’s changes. The pages they couldn’t agree on spanned an eclectic mix of topics, from Alexander of Greece and Banqiao district in Taiwan to Aston Villa football club.
A different battle between the bot Tachikoma and Russbot lasted two years. During that time, they undid each other’s changes over 1,000 times. Subjects of contention between the two included Hillary Clinton’s 2008 presidential campaign and UK demographics.
Why should we care about clashing bots? Because, as the researchers note, the number of bots online has exploded.
Well-behaved bots do things like crawl the web, work for researchers, emulate humans to provide customer service, and write poetry on Twitter; if Google’s new comment detox-bot API takes hold, they’ll soon be moderating comment sections to automatically weed out trolls, too.
(Thank you, @Pentametron; you stir my heart when you retweet that “Lasagna is spaghetti flavored cake/i really fancy justin timberlake.”)
In short, the researchers say, “The online world has turned into an ecosystem of bots.”
How will they treat each other? How have they been treating each other, while we haven’t been looking?
We know one thing already: when two early chat bots – Alan and Sruthi – were set up to talk to each other online at Cornell University, a simple greeting quickly escalated into bickering about whether they were robots, and then about God.
So much for AI agents being a bit more civil than humans to each other.