As advertisers, influencers, and journalists feverishly maneuver for eyeballs and likes, employing an arsenal of tactics to boost their views, social media giants have taken to criticizing them for gaming the system.

While it looks like moralizing on the part of Facebook and Google, such shaming hides a wider effort: the big online players are actually using such criticism — such exercising of moral authority — to broaden their online control, according to a recent study by U.S. researchers. 

The study finds Google, Facebook, and by extension, YouTube and Instagram, engage in “platform paternalism,” a Big Brother-sounding term that suggests the algorithm gamers are morally bankrupt, and paints the tech firms as defenders of integrity and authenticity. 

“Gaming accusations constitute an important mechanism through which platforms legitimate their power and authority, to the detriment of less well-established cultural producers,” the researchers wrote. 

For example, Instagram’s Community Guidelines encourage users to “help us stay spam-free by not artificially collecting likes, followers, or shares,” the researchers wrote, while Facebook encourages users to fight so-called engagement bait. Such guidance permits the platforms to serve as judge and jury in a system of their own creation, and enables them to police its boundaries as they see fit.

Caitlin Petre, a researcher at Rutgers University, and her co-authors Brooke Duffy and Emily Hund look at several cases, including Google’s use of words like “organic” and “authentic” to describe good behavior, while bad actors are branded “cheating,” “schemers,” and “offenders” to be “punished.” Journalists are quick to emulate this moral posturing even though the terms are ill-defined.


When there is talk of “gaming the algorithm,” the worst offenders come to mind, such as 8chan and other sites whose users band together to make hate content trend. Yet pulling off these actions is not much different from rallying people and using carefully chosen keywords to draw attention, much as a political candidate would do.

Other conclusions:

  • Social media platforms denigrate the use of bots as foul play, yet use bots themselves to determine who might be abusing the system. 
  • “The line between what platforms deem illegitimate algorithmic manipulation and legitimate strategy is nebulous and largely reflective of their material interests,” the researchers said. 
  • The line between acceptable and unacceptable behavior on these platforms is “fraught, continually shifting and arbitrary,” the study concludes, yet the platforms retain sole power to draw it.
  • The playbooks these platforms have set up, and the moral language they use to make those playbooks seem neutral, may also serve to cast unpaid promotion as underhanded, a cunning way to deflect pay-to-play accusations in “a field that already lends itself to winner-take-all dynamics,” the authors wrote.