Seedy, compromised and creepy, the surveillance machine of Facebook, now operating under the broader fold of its parent company Meta Platforms, is sending out the very signals it was once condemned for: encouraging hatred of a designated group and selected figures, and urging everyone else to do the same.
The Russian Federation, President Vladimir Putin, and Russians in general emerge as the latest contenders, the comic-strip villains whom those in the broadly designated “West” can now take issue with. According to a Meta spokesperson, the Russian attack on Ukraine had led the company to make temporary “allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to the Russian invaders.’” Cryptically, the same spokesperson goes on to say that, “We still won’t allow credible calls for violence against Russian civilians.” Meta gives us no guidelines on what would constitute a “credible call”.
Twitter has also permitted posts openly advocating homicide and assassination. US Senator Lindsey Graham was caught up in the bloodlust of permissiveness, using the platform to ask whether Russia had its own Brutus. “Is there a more successful Colonel Stauffenberg in the Russian military?” The only way to conclude the conflict, he suggested, was “for somebody in Russia to take this guy out.”
The cartoon-villainy approach of the Meta group also has precedent. In July 2021, the policy on incitement and hate speech was eased with specific reference to Iran’s Supreme Leader Ali Hosseini Khamenei. The firm decided to permit, for a two-week window, posts featuring “death to Khamenei” or videos of individuals chanting the phrase. Lorenzo Franceschi-Bicchierai wrote pointedly at the time that this permission was “a bizarre choice that highlights Facebook’s power and often confusing content moderation rules.”
The Russia-Ukraine policy is only startling as an open admission of a practice Facebook has embraced for years. With the company’s astronomical growth, accusations about its handling of hate speech and deceptive content have reached a crescendo, to little deep effect. Token efforts have been made to deal with them, never deviating from the firm’s market purpose.
An example of this zig-zag morality meeting reputational damage came in 2018. In August that year, the company employed 60 Burmese-language specialists to review posted and distributed content, promising to employ another 40 by the end of the year. Product manager Sara Su called the violence against the Rohingya in Myanmar “horrific and we have been too slow to prevent misinformation on Facebook.”
A more accurate appraisal of the company’s conduct emerged from an internal trove of documents showing how harms were closely monitored yet algorithmically exacerbated. The documents, disclosed to the US Securities and Exchange Commission by whistleblower Frances Haugen, revealed a number of things, including the gulf between CEO Mark Zuckerberg’s public statements on improvements and the company’s own findings.
In testimony given to Congress in 2020, Zuckerberg claimed that 94 percent of hate speech was removed before a human agent reported it. The internal documents told quite another story: less than 5 percent of hate speech on the platform was actually removed.
Haugen summed up the approach in her opening statement to the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security in October last year. Conceding that social networks faced “complex and nuanced” problems in dealing with misinformation, counterespionage and democracy, she was blunt about the “choices being made inside Facebook”. They were “disastrous – for our children, for our public safety, for our privacy and for our democracy – and that is why we must demand Facebook makes changes.”
The platform has also been the target of lawsuits for encouraging hate speech. In December, Rohingya refugees, having little time for the firm’s promises to turn over a new leaf, launched legal action in both the United States and the United Kingdom seeking $150 billion. The San Francisco lawsuit, filed by Edelson and Fields Law on behalf of an anonymous plaintiff, alleges that Facebook’s introduction to Myanmar in 2011 encouraged “the dissemination of hateful messages, disinformation and incitement to violence” which culminated in the genocide of the Rohingya.
The Ukraine War has revealed a familiar pattern. On February 26, 2022 Facebook initially announced that it had “established a special operations center staffed by experts from across the company, including native Russian and Ukrainian speakers, who are monitoring the platform around the clock, allowing us to respond to issues in real time.” The company promised that it was “taking extensive steps to fight misinformation and implementing more transparency and restrictions around state-controlled media outlets.”
Then came the easing of policies on hate speech regarding Russian figures, with the predictable and, given the context, understandable reaction. The Russian embassy in Washington called the policy “aggressive and criminal […] leading to incitement of hatred and hostility”. It gave Moscow a good basis to claim that this was yet another feature of an “information war without rules”.
Disinformation experts adopt a bit of hair-splitting in approving Meta’s approach. “The policy concerns calls for violence against Russian soldiers,” insists Emerson Brooking of the Atlantic Council’s Digital Forensic Research Lab. “A call for violence here, by the way, is also a call for resistance because Ukrainians resist a violent invasion.”
This policy of intervening on the side of the Ukrainian cause to Russia’s detriment is endorsed by Meta’s President of Global Affairs, Nick Clegg. In his March 11 statement, Clegg put the case for selective violence even more forcefully. “I want to be crystal clear: our policies are focused on protecting people’s rights to speech as an expression of self-defense in reaction to a military invasion of their country.” Had standard content policies been followed, content “from ordinary Ukrainians expressing their resistance and fury at the invading military forces would have been removed.”
This immoderate stance does not command universal agreement. Media sociologist Jeremy Littau has made the pertinent observation that “Facebook has rules, until it doesn’t.” It claims to be merely a platform above taking sides, “until it does.” Banning hate speech except in designated cases, against certain people of a certain country, was “one hell of a can of worms.”
Meta’s latest move is disturbingly refreshing only in openly admitting a policy that remains haphazard, selectively applied, and always driven by the firm’s own amoral calculus. The Ukraine conflict now gives the group cover for practices that enfeeble and corrupt democracy while picking sides in war. The company is clearly not above encouraging posts advocating assassination and murder once it has tested the wind’s direction. With Russia being rapidly cancelled culturally, politically and economically across the Western fold, Zuckerberg is bound to think he is onto a winner. At the very least, he has found a distracting alibi.