Content moderation and censorship: can we handle a double standard?

On 25 April 2019, Vice Motherboard journalists Joseph Cox and Jason Koebler reported that the following comment was made during a recent Twitter company meeting:

Twitter hasn’t taken the same aggressive approach to white supremacist content [as it has to ISIS] because the collateral accounts that are impacted can, in some instances, be Republican politicians. The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material.

It is well known that most machine-learning algorithms used in content moderation produce a significant number of false positives (i.e. material flagged as extremist which turns out not to be). Even classifiers sophisticated enough to identify the vast majority of material correctly will produce these ‘false flags’, and when scaled up to millions of posts this inevitably causes non-extremist material to be flagged. Many archivists of human rights abuses in the Syrian Civil War, an issue that Dima Saber is currently researching, have had their content removed from YouTube for precisely this reason: it was flagged incorrectly as extremist.
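
To see the scale of the problem, consider a back-of-the-envelope calculation. The Python sketch below uses entirely hypothetical volumes and error rates – none of them drawn from any platform’s published figures – to show how a classifier that is right the overwhelming majority of the time can still flag tens of thousands of benign posts a day once it operates at platform scale.

```python
# Illustrative base-rate arithmetic; every number here is an assumption
# chosen for the example, not a measurement from any real platform.
posts_per_day = 10_000_000      # posts screened by the classifier each day
extremist_rate = 0.001          # share of posts that are genuinely extremist
true_positive_rate = 0.95       # 95% of extremist posts are correctly flagged
false_positive_rate = 0.005     # 0.5% of benign posts are wrongly flagged

extremist_posts = posts_per_day * extremist_rate
benign_posts = posts_per_day - extremist_posts

correctly_flagged = extremist_posts * true_positive_rate
wrongly_flagged = benign_posts * false_positive_rate

print(f"Extremist posts correctly flagged per day: {correctly_flagged:,.0f}")
print(f"Benign posts wrongly flagged per day:      {wrongly_flagged:,.0f}")
# With these assumptions, roughly 50,000 benign posts are flagged every day,
# outnumbering the genuinely extremist material that is caught.
```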

The use of machine-learning tools to enhance content moderation is inevitable, given the scale of the content that needs to be moderated and the profit motives guiding social media companies. False positives, then, will also be inevitable.

When is extremism extremism?

What Cox and Koebler’s piece points out is that we are more willing to accept these false positives when there is a consensus against the content in question, as with ISIS. The picture is more complex for white supremacist content: the current US administration, Republican politicians, and right-wing social movements – who have far more capacity to pressurize social media platforms than those swept up in content moderation as ISIS false positives – have constantly accused these platforms of censoring legitimate conservative voices. A false positive that sweeps up someone allied with the broader right-wing ecosystem online can therefore result in significant backlash against these companies. We can hypothesize that this political pressure has an effect on how and when machine-learning tools are deployed.

It therefore becomes imperative that we think about the power relations at play in the use of machine-learning and about the risks of false positives. Antecedent claims of conservative censorship have created a situation in which technology deployed to counter white extremist, white nationalist, or white supremacist content cannot be used in the same way as it is against Islamic extremist content.

This has led to two outcomes that ought to be considered. First, the antecedent claim of conservative censorship provides a shield for white extremists to continue to operate on platforms like Twitter without the same scrutiny that is applied to other extremist groups. By alleging bias before takedowns happen, conservative discourse has ensured that takedowns are framed as somehow inappropriate – evidence of a “liberal elite” bias against conservatives as well as censorship of “alternative” viewpoints.

Prominent conservatives like Ted Cruz and Nigel Farage use their positions of power to accuse technology companies of bias against them. In doing so, they are instrumental in producing a framework that shields right-wing extremist ideas from the full force of automated content moderation – and thus from a larger fight against extremism on the internet.

Double standards

The second outcome is a reticence on the part of social media companies to counter white extremism in the same way, probably because of the political risk. If Republican politicians face account suspension alongside white extremists, the claim of “conservative censorship” is likely to appear more credible, and these companies will face increased scrutiny from allied political groups and social movements.

The claims made by conservatives about “free speech” and their “censorship” at the hands of the technology industry have become the raison d’être for rapid changes in the networked ecosystem of the radical right online. Radical-right actors are increasingly moving to encrypted messaging apps (like Telegram), to blogs that are not regulated by content moderators, and to alternative-tech social media sites (like Gab.ai).

More insidiously, the claims about “censorship” have been particularly effective in ensuring that a double standard exists in practice. ISIS content is meant to be banned, and the innocent individuals caught up in its removal as collateral damage have significantly less capacity to seek restitution. There is a consensus that this is an acceptable price to pay for limiting ISIS’s capacity to use mainstream platforms.

White extremism, on the other hand, receives the benefit of less aggressive algorithmic policing. Given the comments reported in Cox and Koebler’s investigation, it is clear that the anticipation of claims that Twitter is biased against conservative voices affects the company’s decisions about countering white extremism with algorithmic tools.

Decentering machine-learning

I believe there may be a way out of this dilemma, and it involves decentering machine-learning. First, it is important for tech companies to focus their approach not on content but on people – those who have been clearly identified as white nationalists by various groups – and on disrupting the networks they belong to.

Rather than searching for content, tech companies should be searching for the key nodes that spread white extremist content on social networks. This would not require the same types of machine-learning tools, but rather attention to, and monitoring of, identifiable hate networks. This could help to ensure fewer false positives, and it could still be scaled up in semi-automated ways to identify key communication pathways in white extremist networks – and thus to disrupt them.
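
To make this concrete, here is a minimal sketch of what such a key-node search might look like. It uses Python with the networkx library, and the edge list, the account names, and the choice of in-degree and betweenness centrality as proxies for “key nodes” are all illustrative assumptions rather than a description of any platform’s actual tooling.

```python
# A minimal, hypothetical sketch of the network-centric approach described above.
import networkx as nx

# Each pair (u, v) means account u amplified (e.g. retweeted) content from v.
# These interactions are invented purely for illustration.
shares = [
    ("acct_a", "hub_1"), ("acct_b", "hub_1"), ("acct_c", "hub_1"),
    ("acct_c", "hub_2"), ("acct_d", "hub_2"), ("hub_1", "hub_2"),
    ("acct_e", "acct_d"),
]

graph = nx.DiGraph()
graph.add_edges_from(shares)

# Two simple proxies for "key nodes": how often an account is amplified
# (in-degree centrality) and how much it bridges paths between other
# accounts (betweenness centrality).
in_degree = nx.in_degree_centrality(graph)
betweenness = nx.betweenness_centrality(graph)

key_nodes = sorted(
    graph.nodes,
    key=lambda n: (in_degree[n], betweenness[n]),
    reverse=True,
)[:3]
print("Candidate key nodes for human review:", key_nodes)
```

Because the output is a short list of candidate accounts for human review rather than an automated takedown decision, this kind of approach carries less false-positive risk than content classification at scale.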

Furthermore, machine-learning could be useful in identifying hateful networks and building a more complete picture of white extremist ecosystems as they emerge and sustain themselves on specific platforms. It is a useful tool for profiling networks of users and for determining what action is legitimate in order to disrupt and destabilise those networks.

Thus, user suspension and content takedown might be focused on network disruption, and machine-learning tools can be used to provide justification for why a user ought to be suspended, as well as to identify the size and membership of these networks.

This would not assuage the bad-faith claims of “censorship” that shield white extremism, but it would go a long way towards eliminating false positives from content-focused machine-learning tools, re-purposing them towards building a picture of toxic networks and developing processes to destabilise and disrupt them.

