Judge Not, Lest Ye Be Judged?

The Sociology of Community Discipline in MOBAs

John Ehrett is a J.D. candidate at Yale Law School, where he is a Knight Law and Media Scholar and an editor of the Yale Law Journal. His scholarship explores how institutions of civil society, both online and off, work to preserve norms and values under changing conditions.

Anyone who’s spent much time playing online games has seen it: cooperation balanced on the edge of a knife, always at risk of instant collapse into a torrent of ethnic and gender slurs. For all its many achievements, the Internet has also provided a forum for angry individuals seeking to vent their angst on the world—and in few contexts has this proven more pernicious than in the realm of online gaming. Something about the fusion of competition with the Internet’s general digital anarchism elicits displays of spectacularly terrible sportsmanship, and MOBA-style games—multiplayer online battle arenas (specifically League of Legends, Dota 2, and Heroes of the Storm)—have proven particularly susceptible to this.

Over the past several years, MOBAs—ten-player battlefields adapted from the Defense of the Ancients custom map for Warcraft III—have exploded in popularity. With exponential growth, however, have come the challenges associated with managing a massive online customer base. The reasons underlying MOBAs’ harassment problems are complex and multifaceted. To some extent, the issue is structural: MOBAs require multimember team coordination over a sustained time frame, coupled with participant anonymity and short decision-making windows (Johnson et al., 2015). Moreover, whereas in a traditional game (pickup football, for instance), incentives to avoid rule-breaking might include the risk of alienation from one’s community of peers, in MOBA titles no such externalized accountability structures exist. Perceptions of familiarity versus strangeness clearly influence how players treat one another in-game (Tyack et al., 2016). As a result of these forces, the MOBA genre has acquired a lingering reputation for hostility toward new players and for the incubation of abusive behavior.

Given this problem, leading MOBAs have experimented with a variety of creative strategies to manage and curb ongoing harassment within their communities. Some of these strategies have primarily relied on the use of automated technologies to detect profanity and obscenity, instantly taking disciplinary action when community norms are violated. Others, such as League of Legends’ “Tribunal” system, have involved a greater degree of human input. These systems have been largely overlooked in the existing literature surrounding the sociology of MOBA gaming. While players’ in-game behavior within MOBAs has been cursorily discussed (Kwak et al., 2015; Kou & Nardi, 2013), far too little attention has been paid to the comparative institutional structures through which such behavior is systematically regulated and policed.
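For readers who think in code, the contrast between these two families of strategies can be made concrete with a minimal Python sketch. Everything here is a hypothetical illustration: the term list, function names, and penalty labels are my own assumptions, not any studio’s actual implementation.

```python
# Hypothetical contrast between automated discipline and human-review escalation.
# Terms, names, and penalties are illustrative assumptions, not a real system.

BANNED_TERMS = {"slur_a", "slur_b"}  # placeholder list of prohibited terms


def automated_discipline(chat_line: str) -> str:
    """Rule-based enforcement: punish instantly when a banned term appears."""
    if any(term in chat_line.lower() for term in BANNED_TERMS):
        return "chat_restriction_applied"
    return "no_action"


def escalate_to_review(chat_line: str, review_queue: list) -> str:
    """Tribunal-style handling: flag the message for later human adjudication."""
    if any(term in chat_line.lower() for term in BANNED_TERMS):
        review_queue.append(chat_line)
        return "queued_for_human_review"
    return "no_action"


if __name__ == "__main__":
    queue: list[str] = []
    print(automated_discipline("gg ez, slur_a"))       # -> chat_restriction_applied
    print(escalate_to_review("gg ez, slur_a", queue))  # -> queued_for_human_review
```

The design difference matters sociologically as much as technically: the first path resolves the incident without any community involvement, while the second deliberately routes it through human judgment.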

“League of Legends” and the Problem of MOBA Self-Regulation

Riot Games’ League of Legends community—like many communities highly invested in competitive gaming—is characterized by high levels of cyber-harassment and verbal abuse. To combat this, League originally instituted an anonymous “Tribunal” disciplinary mechanism, through which well-behaved community members reviewed the cases of players who had been persistently accused of misconduct—behavior ranging from verbal abuse and racial slurs to “intentional feeding” (playing intentionally poorly in order to throw the game) and the use of illegal software modifications. In a recent empirical study, I found statistically significant evidence that, when tasked with conducting second-order review of alleged participant misconduct, Tribunal adjudicators displayed stronger tendencies toward discipline (that is, enforcement of community norms) than permissiveness (Ehrett, 2016). In light of these findings, I suggested that a Tribunal-type system—involving community participants reviewing initial reports of misconduct—could potentially be expanded across digital spaces as a model for group self-governance. In short, the data suggest that community-driven policing of online misconduct within organized spaces can successfully address harassment, provided the community is sufficiently invested in self-regulation.
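To illustrate what “second-order review” might look like mechanically, here is a minimal sketch assuming a Tribunal-like case bundles several reports against one player and is decided by community judges voting “punish” or “pardon.” The case structure, vote labels, and the 70% supermajority threshold are assumptions made for illustration, not Riot’s documented rules.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class TribunalCase:
    """A bundle of misconduct reports against one player, plus judges' votes."""
    reported_player: str
    reports: list[str]   # e.g., chat-log excerpts attached to the case
    votes: list[str]     # each community judge votes "punish" or "pardon"


def adjudicate(case: TribunalCase, punish_threshold: float = 0.7) -> str:
    """Second-order review: the community judges' votes decide the outcome.

    The 70% supermajority threshold is an illustrative assumption.
    """
    if not case.votes:
        return "insufficient_review"
    tally = Counter(case.votes)
    punish_share = tally["punish"] / len(case.votes)
    return "punish" if punish_share >= punish_threshold else "pardon"


if __name__ == "__main__":
    case = TribunalCase(
        reported_player="Summoner123",
        reports=["verbal abuse in all chat", "intentional feeding"],
        votes=["punish", "punish", "punish", "pardon"],
    )
    print(adjudicate(case))  # -> "punish" (3 of 4 judges, 75% >= 70%)
```

The point of the sketch is not the arithmetic but the locus of judgment: the decision emerges from community members rather than from a filter or classifier.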

Tragically, Tribunal-style methods of community regulation may have lost their luster: League of Legends has been exploring and honing ways of using “big data” tools to automate its processes of punishment and rehabilitation (David, 2015), and the Tribunal has been “down for maintenance” for years. For all the surrounding hype, this move away from a Tribunal-type system in favor of a machine-learning model may not actually serve its intended aims: in shifting from the Tribunal toward automation, Riot has undervalued homegrown moral leadership within its community. One unique, if nonobvious, advantage of the Tribunal model was its participatory dimension—not just “participatory” insofar as community members were involved in adjudicating one another’s conduct, but “participatory” in the sense that Tribunal judges themselves were implicitly socialized into anti-harassment community norms. Instead of handing down punishments blindly via an automated system, the Tribunal’s mechanisms educated individual community members—without breaching the veil of anonymity—about proper behavioral standards when gaming: in order to hold “bad” players accountable for their actions, players serving as Tribunal judges had to be cognizant of some standard of accountability themselves. This, in turn, allowed healthy standards of behavior to spread organically through an otherwise sequestered environment. Players presented with the opportunity to serve as judges had powerful incentives to behave properly, which in turn could help orient community behavior toward pro-social norms. While a Tribunal-type “e-judiciary” scheme undoubtedly entails high startup costs and requires proactive community involvement, it also probably offers better prospects for future community health than automated measures.

Anti-Harassment Strategies Beyond “League”

Despite the plausibility and promise of the League Tribunal model, the system is an outlier among MOBAs—particularly among the games most aggressively seeking to capture League’s market share. In the article referenced above, I noted that “whether or not [a Tribunal mechanic] would be effective elsewhere is an open question.” Accordingly, this essay explores that question by reviewing the disciplinary structures used by League’s two primary competitors, Dota 2 and Heroes of the Storm. Since neither Valve (developer of Dota 2) nor Blizzard (developer of Heroes of the Storm) has made public granular data about the operation of its online harassment management system, the kind of quantitative statistical analysis I have previously conducted on League is not possible here. This inquiry, therefore, must remain qualitative.

Opting for a form of discipline via player sequestration, Valve’s Dota 2 uses a “low priority queue” punishment system in which players who repeatedly commit misconduct are consigned to matches with other repeat malefactors. As one critic observes, the problems with this approach become clear almost immediately: “The flaw in this line of thinking is that it assumes that badly-behaved players will, upon encountering other badly-behaved players, see themselves reflected and experience a Scrooge-type personal revelation. Guess what: this doesn’t seem to actually happen” (Thursten, 2015). For violators of community norms, the prospect of being stuck in matches with fellow rule-breakers may not be a disincentive at all. One might call this the “AC/DC problem”: in the words of “Highway to Hell,” “my friends are gonna be there too.” If certain players actually want to verbally harass and berate one another, the “low priority queue” becomes a cesspit: “bad actors” have no reason to reform. And in online communities where the practice of “smurfing”—a single player maintaining and playing on multiple game accounts—is widespread, the relegation of one or two accounts to the “low priority queue” simply normalizes a harassment-centric community as one player option among others.

An invitation to a low-priority party in Dota 2.

More fundamentally, this type of model does nothing to address the underlying problem: the persistence of harassment and rule-breaking in online communities purportedly open to any new entrant. Until a player’s misconduct accumulates past a preset threshold of “badness over time” and the player is dumped into the “low priority queue,” the game architecture allows that player to make the play experience systematically miserable for teammates and opponents alike, discouraging newcomers from joining the community at all.
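A minimal sketch of such a “badness over time” mechanism follows, assuming misconduct is tracked as a slowly decaying score that only triggers low-priority placement once it crosses a fixed threshold. The weights, decay rate, and threshold are my own illustrative assumptions, not Valve’s actual matchmaking parameters; the sketch simply shows why a player can remain in the normal pool through several miserable matches before any consequence arrives.

```python
# Illustrative "badness over time" model for low-priority queue placement.
# Weights, decay, and threshold are assumptions, not any developer's real values.

REPORT_WEIGHTS = {"abandon": 3.0, "verbal_abuse": 2.0, "intentional_feeding": 2.5}
DECAY_PER_MATCH = 0.9          # older misconduct slowly "ages out"
LOW_PRIORITY_THRESHOLD = 10.0


def update_badness(score: float, reports_this_match: list[str]) -> float:
    """Decay the running score, then add weight for each new verified report."""
    score *= DECAY_PER_MATCH
    return score + sum(REPORT_WEIGHTS.get(r, 1.0) for r in reports_this_match)


def assign_queue(score: float) -> str:
    """Players stay in the normal pool until the threshold is finally crossed."""
    return "low_priority" if score >= LOW_PRIORITY_THRESHOLD else "normal"


if __name__ == "__main__":
    score = 0.0
    match_history = [["verbal_abuse"], [], ["abandon", "verbal_abuse"],
                     ["abandon"], ["intentional_feeding", "abandon"]]
    for reports in match_history:
        score = update_badness(score, reports)
        print(f"score={score:.1f} -> {assign_queue(score)}")
    # Only the final match tips the score past 10.0 into low priority.
```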

Blizzard’s Heroes of the Storm, for its part, relies on a broad assortment of tools commonly used in other online communities, including player disconnection from in-game chat services in the event of persistent verbal abuse. Heroes of the Storm also uses a similar “low priority queue” system in which players who repeatedly abandon in-progress games are placed into matches with others who have behaved similarly.

Given the problems identified here, one might reasonably wonder why none of League’s competitors has pioneered a community-oriented discipline adjudication structure akin to the Tribunal, preferring instead simpler mechanisms with clear disadvantages. The answer is likely quite straightforward: the longstanding structural problems of cyber-policing—back-end service costs and the danger of alienating players through aggressive automated systems—are likely to be dramatically exacerbated as a player base grows. And in terms of current market share, League overwhelmingly dominates its competition. One recent estimate puts League’s share of its genre at 66.3%, with Dota 2 and Heroes of the Storm coming in at 14% and 7%, respectively (Statista, 2016).

Given this disparity, it is plausible that Dota 2 and Heroes of the Storm simply have felt no need to democratize their anti-harassment management systems: the widespread, highly publicized misconduct problems associated with a gigantic player base have not yet become endemic. A secondary reason is that “low priority queue” systems are almost certainly easier and faster to implement than “e-judiciary” designs like the Tribunal. While low-priority-queue models lack the pedagogical, norm-reinforcing effect of participant-driven adjudication schemes, they offer an “out of sight, out of mind” form of discipline that keeps the most pernicious in-game offenders from tarnishing the game’s public brand. And because they provide a straightforward way to address objective offenses—for instance, abandoning games already in progress—game designers are understandably interested in broadening their scope to cover subjective offenses, such as verbal abuse. In so doing, however, designers risk creating—in the words of Star Wars’ Obi-Wan Kenobi—a “wretched hive of scum and villainy” in which relentless digital harassment is mainstreamed and perpetuated within a given online community.

Screenshot of a permanent ban notification in League of Legends. Tyler1, a well-known toxic personality in the game, was banned for “extremely inflammatory and offensive” behavior, placing him among the 0.006% of LoL players who have been permanently banned.

At present, cost-conscious game developers—or developers with limited market share—may have strong reasons to adopt a “low priority queue” approach to persistent participant misconduct over a more complex system involving participant reporting and subsequent adjudication. After all, the Dota 2 and Heroes of the Storm systems can swiftly sweep malefactors “under the rug” and maintain an appearance of ordered community participation. In the long term, however, intentionally creating digital sub-communities oriented around corrosive interpersonal behavior is, at best, an ethically questionable design choice—particularly at a time when “gaming cultures” find themselves at the center of increasingly heated debates about digital harassment (see Braithwaite, 2016). Online environments where harassment percolates will not stay hermetically sealed over time; players socialized into a culture of verbal abuse may well bring those behaviors with them into other spaces. Insofar as developers seek to build healthier gaming communities and cultivate positive public sentiment around their titles, a better strategy is likely required.

The Demographic Dilemma

Any provider-side efforts to reform player behavior must immediately contend with a demographic problem: action game players are overwhelmingly male (Dotson, 2015), and research suggests that MOBA-style games sometimes function as spaces through which players define themselves according to distinctively masculine norms (Chen, 2016). Accordingly, aspects of MOBA culture—verbal hostility, a highly competitive ethos, aggressive alienation of newcomers whose performance may be “poor” by group standards—may, ironically, be part of what attracts certain individuals to remain active in MOBA communities. One might call this dynamic a sort of “4chan effect,” in which toxic cultural norms are persistently reinforced and perpetuated inside a close-knit community.

Given this reality—and the fact that negative behavior is endemic to communities, not merely to individual actors—MOBAs are likely to remain persistent sites of harassment and toxicity absent creative top-down interventions. Community norm enforcement via automated systems is the sociological equivalent of trying to bandage a torn artery; unless steps are taken to change the culture of MOBAs, the worst elements of MOBA misbehavior will persist just beneath the surface. Low-priority queues—which reward misconduct by allowing the worst offenders to play with one another in a designated space—are even less likely to succeed, given their lack of strong disincentive pressures. Both player demographics and game design features have allowed cultures of malignant aggression to flourish, despite administrators’ best efforts to pare them back. In order to push back against the worst manifestations of this behavior, game designers should consider implementing strategies that facilitate stakeholder investment: allowing a “silent majority” of well-behaved online community participants to police the emergent boundaries of group culture.

Designing Better Communities Through Member Participation

Ultimately, the best way to tackle this problem likely involves making community members into stakeholders within “their” game, establishing a system where players of goodwill take ownership of their cyber-environment and clearly define the boundaries of acceptable behavior. Trends away from community involvement in the norm-setting process should be resisted.

Consider the following case: a recent controversy in the League of Legends community centered on whether the game character Nunu could be played in an “off-meta” style—that is, in a manner departing from the broad consensus about effective play developed by professional players and coaching staff. After receiving a number of complaints against the off-meta Nunu player, Riot suspended the player for his continued deviation from a “mainstream” play style (Van Allen, 2017).

Riot’s decision immediately faced severe backlash from the League community, and a clear consensus promptly emerged that isolated complaints against the Nunu player did not justify the penalty imposed. In the face of mounting pressure, Riot reversed its punishment decision in order to reconsider the issue. The debate illustrates an important truth: where group norms aren’t fully explicit, communities are better suited than high-level administrators to determine what constitutes “misbehavior.” And while this wasn’t a case of in-game harassment but rather an argument over what it means to behave properly as an in-game tactician, evidence from Tribunal deliberations suggests that player communities are equally well equipped to render decisions punishing verbal abuse and exonerating innocent speech.

While anti-harassment solutions based on “big data” and machine learning might be increasingly popular, today’s MOBA designers should weigh carefully the Tribunal’s unique advantage: providing a forum within which community norms evolve organically in socially responsible directions. That lesson is worth remembering.

Author’s Note:

The author would like to thank Lanson Hoopai for raising the questions that led to this article.

Works Cited

Blizzard Entertainment. 2015. Heroes of the Storm. Blizzard Entertainment. Microsoft Windows.

Braithwaite, Andrea. 2016. “It’s About Ethics in Games Journalism? Gamergaters and Geek Masculinity.” Social Media + Society: 1-10.

Chen, Siyu. 2016. “Negotiating Masculinities Through the Game of Distinction—A Case Study of MOBA Gamers at a Chinese University.” Asian Anthropology: 242-259.

David, Eric. 2015. “How League of Legends Fights Player Abuse with Machine Learning.” Silicon Angle [online]. Accessed 2/24/17.

Davison, Pete. 2014. “Blizzard Versus the Hostility of MOBAs.” US Gamer [online]. Accessed 2/24/17.

Dotson, Carter. 2015. “deltaDNA Gender Data Shows Men and Women Play Different Kinds of Games, But Everyone Loves Endless Runners.” TouchArcade [online]. Accessed 2/24/17.

Ehrett, John S. 2016. “E-Judiciaries: A Model for Community Policing in Cyberspace.” Information and Communications Technology Law: 272-291.

Fleishman, Cooper. 2013. “4chan’s 10 Most Important Contributions to Society.” The Daily Dot [online]. Accessed 3/13/17.

Johnson, Daniel, Lennart E. Nacke, and Peta Wyeth. 2015. “All About that Base: Differing Player Experiences in Video Game Genres and the Unique Case of MOBA Games.” Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems: 2265-2274.

Kollar, Philip. 2014. “Blizzard Serves Justice to Heroes of the Storm Rage Quitters.” Polygon [online]. Accessed 2/24/17.

Kou, Yubo and Bonnie Nardi. 2013. “Regulating Anti-Social Behavior on the Internet: The Example of League of Legends.” iConference 2013 Proceedings: 616-622.

Kwak, Haewoon, Jeremy Blackburn, and Seungyeop Han. 2015. “Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games.” Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems: 3739-3748.

LeJacq, Yannick. 2015. “Heroes of the Storm Is Adding More Ways to Punish and Report Toxic Players.” Kotaku [online]. Accessed 2/24/17.

Reddit. 2017. “In Regards to the Support Nunu, Riot’s 3rd Rule on Acceptable Behavior ‘Do Not Report a Summoner for Tactical Disagreements’ Has an Image that Contradicts Wookie’s Stance.” Reddit [online]. Accessed 3/13/17.

Riot Games. 2009. League of Legends. Riot Games. Microsoft Windows.

[Screenshot of an in-game invitation to a low-priority party in Dota 2]. (2015).

[Screenshot of the permanent ban notification for Tyler1, a well-known toxic personality in League of Legends, citing extremely inflammatory behavior]. (2016).

Statista. 2016. “Estimated Market Share of Selected Multiplayer Online Battle Arena (MOBA) Games Worldwide in 2016.” Statista [online]. Accessed 2/24/17.

Thursten, Chris. 2015. “The Problem with Dota 2’s ‘Low Prio’ Punishment System.” PC Gamer [online]. Accessed 2/24/17.

Tyack, April, Peta Wyeth, and Daniel Johnson. 2016. “The Appeal of MOBA Games: What Makes People Start, Stay, and Stop.” Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play: 313-325.

Valve Corporation. 2013. Dota 2. Valve Corporation. Microsoft Windows.

Van Allen, Eric. 2017. “Riot Risks Stifling Creativity in League By Banning Players With Unconventional Strategies.” Kotaku [online]. Accessed 3/13/17.