In Even good bots fight, a paper written by Oxford Internet Institute researchers and published in PLOS One, the authors survey the edits and reverts made by Wikipedia’s diverse community of bots, uncovering some curious corners where bots — rate-limited by Wikipedia’s rules for bots — slowly and remorselessly follow one another around, reverting each other.
15% of all Wikipedia edits come from bots: "They identify and undo vandalism, enforce bans, check spelling, create inter-language links, import content automatically, mine data, identify copyright violations, greet newcomers, and so on."
Wikipedia bots aren’t planned by a central authority, though: they’re a bottom-up phenomenon, each reflecting an individual Wikipedian’s theory about how to improve the project. Just as editors can disagree with one another in their manual edits, they can also project those disagreements onto Wikipedia through automated software agents.
Bots’ edits are primarily reverted by other bots (not by humans), with the number of bot-on-bot reversions increasing steadily, at a faster clip than the growth of the bot population itself — that is, the bots disagree with each other more than they used to. But bot-on-bot reversion still occurs at a lower rate than human-on-human reversion; bots may disagree, but not as much as people do. The authors don’t go into detail on this, but I wonder if that’s because so many bot edits are technical in nature (interlinking articles, correcting common spelling errors, etc.), while human editors are more likely to undertake substantive edits.
The crux of these revert wars is somewhat buried in the paper, but here it is: "we found that most of the disagreement occurs between bots that specialize in creating and modifying links between different language editions of the encyclopedia. The lack of coordination may be due to different language editions having slightly different naming rules and conventions." That is, the reversions reflect botmasters’ lack of understanding of the local cultures of Wikipedia versions in unfamiliar languages.
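To see why mismatched naming conventions produce endless reverts, here's a toy simulation (my own illustration, not the paper's model): two hypothetical interlanguage-link bots each "correct" a link title to its home wiki's convention, so each bot's fix looks like an error to the other.

```python
# Toy model: two bots, each enforcing its own naming convention,
# take turns "fixing" the same interlanguage link.

def run_bots(initial, preferred_titles, rounds):
    """Each bot in turn rewrites the title to its preferred form.

    Returns the total number of reverts and the title's edit history.
    """
    title, reverts = initial, 0
    history = [title]
    for _ in range(rounds):
        for preferred in preferred_titles:
            if title != preferred:
                title = preferred  # this bot "corrects" the other's edit
                reverts += 1
            history.append(title)
    return reverts, history

# Bot A's wiki uses one spelling, Bot B's wiki another (made-up example).
reverts, history = run_bots("Colour", ["Colour", "Color"], rounds=5)
print(reverts)  # prints 9: after the first round, every edit is a revert
```

Because neither bot ever sees the other's convention as legitimate, the revert count grows linearly with time and never settles — the "local impasse" the paper describes, sustained only by Wikipedia's rate limits.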
Wikipedia is perhaps one of the best examples of a populous and complex bot ecosystem but this does not necessarily make it representative. As Table 1 demonstrates, we have investigated a very small region of the botosphere on the Internet. The Wikipedia bot ecosystem is gated and monitored and this is clearly not the case for systems of malevolent social bots, such as social bots on Twitter posing as humans to spread political propaganda or influence public discourse. Unlike the benevolent but conflicting bots of Wikipedia, many malevolent bots are collaborative, often coordinating their behavior as part of botnets. However, before being able to study the social interactions of these bots, we first need to learn to identify them.
Our analysis shows that a system of simple bots may produce complex dynamics and unintended consequences. In the case of Wikipedia, we see that benevolent bots that are designed to collaborate may end up in continuous disagreement. This is both inefficient as a waste of resources, and inefficacious, for it may lead to local impasse. Although such disagreements represent a small proportion of the bots’ editorial activity, they nevertheless bring attention to the complexity of designing artificially intelligent agents. Part of the complexity stems from the common field of interaction—bots on the Internet, and in the world at large, do not act in isolation, and interaction is inevitable, whether designed for or not. Part of the complexity stems from the fact that there is a human designer behind every bot, as well as behind the environment in which bots operate, and that human artifacts embody human culture. As bots continue to proliferate and become more sophisticated, social scientists will need to devote more attention to understanding their culture and social life.
Even good bots fight: The case of Wikipedia [Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi and Taha Yasseri/PLOS One]