
Dehumanisation – Policy Brief

Why focus on Dehumanisation?

AMAN has made a series of policy proposals to governments, tech platforms, regulators and fellow civil society organisations.

Defining and identifying hate speech at scale comes with challenges. Our proposals aim to cut through these challenges and focus on some of the most potent vectors of harm.

We are determined to have anti-Muslim hateful stereotypes and narratives acknowledged and addressed by major platforms. We explain how Islam can be used as a proxy to dehumanise and attack Muslims.

Our proposals are designed to be applied across the board to help all humanity and are grounded in human rights.

(1) Dehumanising material is material produced or published that an ordinary person would conclude portrays a class of persons identified on the basis of a protected characteristic (“class of persons”) as not deserving to be treated equally to other humans because they lack qualities intrinsic to humans. Dehumanising material includes material portraying the class of persons:

(a) as being, or having the appearance, qualities or behaviour of:

(i) an animal, insect, filth, or a form of disease or bacteria;

(ii) an inanimate or mechanical object; or

(iii) a supernatural alien or demon;

(b) as polluting, despoiling or debilitating an in-group or society as a whole;

(c) as having a diminished capacity for human warmth and feeling, or to make up their own minds, reason or form their own individual thoughts;

(d) as homogeneously posing a powerful threat or menace, whether overt or deceptive, to an in-group or society;

(e) as responsible for, and deserving of collective punishment for, the specific crimes or alleged crimes of some of their “members”;

(f) as inherently criminal, dangerous, violent or evil by nature;

(g) as not loving or caring for their children;

(h) as preying upon children, the aged and the vulnerable;

(i) as having been subject as a group to a past tragedy or persecution that should now be trivialised, ridiculed, glorified or celebrated;

(j) as inherently primitive, coarse, savage, intellectually inferior or incapable of achievement on a par with other humans;

(k) as needing to be categorised and denigrated according to skin colour or concepts of racial purity or blood quantum; or

(l) as needing to be excised or exiled from public space, neighbourhood or nation.

(2) Without limiting how the material in section (1) is presented, forms of presentation may include:

(a) speech or words;

(b) the curation or packaging of information;

(c) images; and

(d) insignia.
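To illustrate how a platform's policy tooling might encode this definition as a structured rubric, here is a minimal, purely illustrative Python sketch of clauses (a)–(l) and the presentation forms in section (2). All names in it (DehumanisingPortrayal, PresentationForm, Assessment) are hypothetical labels invented for this brief, not any platform's actual policy schema.

```python
# Illustrative only: a hypothetical encoding of the working definition,
# not any platform's actual policy schema.
from dataclasses import dataclass
from enum import Enum, auto

class DehumanisingPortrayal(Enum):
    """Clauses (a)-(l) of section (1) of the working definition."""
    SUBHUMAN_COMPARISON = auto()    # (a) animal, insect, filth, disease/bacteria,
                                    #     object, or supernatural alien/demon
    POLLUTING_SOCIETY = auto()      # (b) polluting, despoiling or debilitating
    DIMINISHED_INNER_LIFE = auto()  # (c) lacking warmth, feeling, independent thought
    HOMOGENEOUS_THREAT = auto()     # (d) uniform threat or menace, overt or deceptive
    COLLECTIVE_GUILT = auto()       # (e) collective punishment for crimes of some
    INHERENTLY_CRIMINAL = auto()    # (f) criminal, dangerous, violent or evil by nature
    UNLOVING_PARENTS = auto()       # (g) do not love or care for their children
    PREYS_ON_VULNERABLE = auto()    # (h) prey on children, the aged, the vulnerable
    TRAGEDY_TRIVIALISED = auto()    # (i) past persecution trivialised or celebrated
    INHERENTLY_INFERIOR = auto()    # (j) primitive, savage, intellectually inferior
    RACIAL_PURITY = auto()          # (k) denigrated by skin colour or blood quantum
    EXCISION = auto()               # (l) to be excised or exiled from public space

class PresentationForm(Enum):
    """Section (2): non-exhaustive forms of presentation."""
    SPEECH_OR_WORDS = auto()
    CURATED_INFORMATION = auto()  # curation or packaging of information
    IMAGE = auto()
    INSIGNIA = auto()

@dataclass
class Assessment:
    """One reviewer or classifier judgement against the working definition."""
    targets_protected_class: bool
    portrayals: set  # set of DehumanisingPortrayal members
    forms: set       # set of PresentationForm members

    def is_dehumanising(self) -> bool:
        # Material must target a protected class and match at least one clause.
        return self.targets_protected_class and bool(self.portrayals)
```

For example, a curated headline series matching clause (d) would be recorded as Assessment(targets_protected_class=True, portrayals={DehumanisingPortrayal.HOMOGENEOUS_THREAT}, forms={PresentationForm.CURATED_INFORMATION}) and would satisfy is_dehumanising().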

Intention component

If the above definition were used as a standalone civil penalty provision, it should be complemented by an intention component:

in circumstances in which a reasonable person would conclude that the material was intended to portray the class of persons as not deserving to be treated equally to other humans or to incite hatred, serious contempt or severe ridicule toward the class of persons.

Adding an intention element may make enforcement more difficult and may not be necessary, especially if the definition is used within a legal framework that already provides intention components or exceptions.

How did we develop this working definition?

AMAN developed this working definition after spearheading a study of five online information operations (Abdalla, Ally and Jabri-Markwell, 2021). The first iteration of the definition was published in a joint paper with UQ researchers (Risius et al., 2021). It continues to be developed with input from researchers, lawyers and civil society.

Possible dehumanising conceptions are surfaced through research and then tested against Haslam’s frame: whether the conception deprives a group of qualities that are intrinsic to humans.

If a subject is dehumanised as a mechanistic form, they are portrayed as ‘lacking in emotionality, warmth, cognitive openness, individual agency, and, because [human nature] is essentialized, depth’. A subject that is dehumanised as animalistic is portrayed as ‘coarse, uncultured, lacking in self-control, and unintelligent’ and ‘immoral or amoral’ (258).

Some conceptions fall outside the frame of dehumanisation but could still qualify as vilification or discrimination, for example under anti-discrimination laws.

The three categories of dehumanising comparisons or metaphors in Clause (a) are drawn from Maynard and Benesch (80) and fleshed out with further examples from tech company policies (see Meta’s policies, for example).

Clause (b) is derived from Maynard and Benesch (80).

Clause (c) is derived from Haslam (258).

Clauses (d) and (e) reflect elements of dangerous speech that Maynard and Benesch call ‘threat construction’ and ‘guilt attribution’ respectively (81). However, Abdalla, Ally and Jabri-Markwell’s work shows how such conceptions are also dehumanising: they assume a group operates with a single mindset, lacking independent thought or human depth (using Haslam’s definition), and combine with ideas that Muslims are inherently violent, barbaric or savage, or plan to infiltrate, flood, reproduce and replace (like disease or vermin) (15). The same study found that the melding and flattening of Muslim identities behind a threat narrative through headlines over time was a dehumanisation technique (17). Memes based on demographic invasion theory (9), or headlines offering ‘proof’ of such theory (20), elicited explicitly dehumanising speech from audiences.

Maynard and Benesch write, ‘Like guilt attribution and threat construction, dehumanization moves out-group members into a social category in which conventional moral restraints on how people can be treated do not seem to apply’ (80).

Clauses (f), (h) and (i) are drawn from the ‘Hallmarks of Hate’ endorsed by the Supreme Court of Canada in Saskatchewan (Human Rights Commission) v. Whatcott 2013 SCC 11, [2013] 1 S.C.R. 467. The Hallmarks of Hate were developed from a review of successful judgments involving incitement of hatred against a range of protected groups. These clauses were tested using Haslam’s definitional frame for the denial of intrinsic human qualities.

Clauses (f) (‘criminal’) and (g) are drawn from harmful characterisations cited in the Uluru Statement from the Heart.

Clauses (j) and (k) are drawn from AMAN’s observations of online information operations generating disgust toward First Nations Peoples. Disgust is a common effect of dehumanising discourse. These clauses were tested using Haslam’s definitional frame for the denial of intrinsic human qualities.

Clause (l) is drawn from Nicole Asquith’s Verbal and Textual Hostility Framework (Asquith, N. L. (2013). The role of verbal-textual hostility in hate crime regulation (2003, 2007). Violent Crime Directorate, London Metropolitan Police Service). The data and process used to formulate the Framework are exceptional. Reassuringly, that research surfaced examples that were already captured by this working definition of dehumanising material.

This working definition is a work in progress. AMAN welcomes feedback as it continues to be developed.

Updated 15 July 2023


What actions can we take?

What are the different levers that government could use?

Civil penalties, using a notice-and-action model, where:

  1. an actor or platform carries out dehumanising speech or discourse; or
  2. an actor or platform repeatedly incites hatred, severe ridicule or serious contempt of a protected group in an audience.

Anti-dehumanisation standards as part of an Industry Standard drafted by regulators with community and expert input. This would help drive more contextualised and competent assessments by platforms and improve their performance and safety by design.

Both would be administered by the eSafety Commissioner (in relation to social media) and the Australian Communications and Media Authority (in relation to traditional media).

Community complaint mechanisms are not sufficient and place a burden on the community. Still, access to justice can be strengthened by clarifying that Australia’s discrimination and vilification laws apply to social media companies based overseas.

Government must also consider transparency measures.

Our proposals contend with the following policy challenges:

  1. The burden of policing this public harm should not sit on affected communities through complaint mechanisms. Regulators must play their role to ensure platforms take responsibility. This public harm is too big and constantly evolving to expect each community to contend with it.   
  2. The policy response to dehumanising language and discourse should work further upstream, deamplifying and fining the actors doing the most harm and the platforms that enable them, rather than expanding criminal and carceral approaches, which can be counterproductive and harmful.
  3. Platforms are geared to assess one piece of material at a time rather than patterns of behaviour over time by a bad-faith actor (see the sketch after this list). Many actors repackage news and use humour and memes over time to convey dehumanising discourse about protected groups. Such hateful disinformation is not captured by platforms’ hate speech policies.
  4. Disinformation is hard to define without linking it to harm. Safeguarding freedom of expression means that any limits imposed at scale must be well defined to reduce the risk of false positives and false negatives.
  5. Governments and companies rely on terrorist designation lists to identify ‘violent extremist’ content, a practice that is not fit for purpose.
  6. The reality is that most socialisation towards violence occurs without explicit incitement to violence, so simply banning incitement to violence will have negligible effect.
  7. Standards must be connected to the protected groups recognised in discrimination law. Hateful or extreme speech or discourse concerning governments, militaries, law enforcement and other government institutions does not constitute advocacy of hatred under the law.
  8. Hate speech definitions in Australian law rely on tests that look at the effect of speech rather than defining what it looks like, which makes them difficult and highly resource-intensive for regulators such as eSafety and the ACMA to apply. Platforms, on the other hand, have the capability and resources.
  9. Standards need to be resilient to changes in targeted communities and discourse.
  10. Communities have different perceptions of what dehumanisation and hate speech look like to them. Contextual information is important to interpreting the standards, but the standards need to be universal for clarity and certainty.
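Challenge 3 above is partly an engineering question: moderation pipelines typically score each item in isolation, whereas dehumanising discourse often becomes visible only when one actor's output is aggregated over time. The sketch below shows the shape of such an aggregation under stated assumptions; the window, threshold and function names are hypothetical, and classify stands in for whatever reviewer or model judgement a platform applies against the working definition.

```python
# Illustrative sketch: aggregate judgements per actor over time instead of
# reviewing each item in isolation. Window, threshold and names are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Callable, Iterable, Tuple

WINDOW = timedelta(days=90)  # hypothetical look-back window
THRESHOLD = 5                # hypothetical flagged-item count within the window

def actors_with_repeated_flags(
    items: Iterable[Tuple[str, datetime, str]],
    classify: Callable[[str], bool],
    now: datetime,
) -> set:
    """items: (actor_id, timestamp, text) triples. classify: a reviewer or
    model judgement against the working definition. Returns the actors whose
    flagged items within WINDOW reach THRESHOLD - a pattern of behaviour
    rather than a single post."""
    flagged = defaultdict(list)  # actor_id -> timestamps of flagged items
    for actor_id, timestamp, text in items:
        if classify(text):
            flagged[actor_id].append(timestamp)
    return {
        actor
        for actor, stamps in flagged.items()
        if sum(1 for ts in stamps if now - ts <= WINDOW) >= THRESHOLD
    }
```

A single repackaged headline or meme might pass a per-item check, while the same actor's output across the window would cross the threshold and surface the pattern.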

Evidence to support this action