Joy Buolamwini’s work uncovered racially discriminatory patterns in facial recognition datasets and software systems – only when wearing a white mask would the system recognize her as a human being. Her organization, the Algorithmic Justice League, was established to fight this problem of algorithmic bias, which she terms ‘the coded gaze’. In this blog post I want to discuss the way in which this injustice constitutes an additional, specific harm that can be considered epistemic in nature, drawing on what Miranda Fricker, in her book Epistemic Injustice: Power & the Ethics of Knowing (2007), coined a hermeneutical injustice:

[T]he injustice of having some significant area of one’s social experience obscured from collective understanding owing to hermeneutical marginalization. (Fricker, 158)

The term ‘coded gaze’ might not have been the only, or even the first, term for the phenomenon it seeks to describe. While such terms may seem like mere terminology, it’s important to realize that the underlying concepts and their corresponding words, our ‘hermeneutical resources’, are not neutral. They reflect power structures and can therefore also reinforce or dismantle those structures. And as with any exercise of power, there is potential for either justice or injustice. Hermeneutical injustice, Fricker argues, is a specific harm (and moral wrong) that deprives disadvantaged parties of the conceptual tools needed to make sense of their experiences, and consequently makes it impossible for them to render these experiences intelligible to others or to contest any wrongs committed against them as a result. Being deprived of the possibility of making sense of one’s own experiences, and of the consequences tied to that capacity – including the very construction of selfhood – is what constitutes the ultimate wrong of hermeneutical injustice:

‘The primary epistemic harm done to [her] was that a patch of her social experience which it was very much in her interest to understand was not collectively understood and so remained barely intelligible, even to her’ (Fricker, 162).

Hermeneutical marginalization

Hermeneutical injustice, according to Fricker, relies on another key concept: hermeneutical marginalization. This type of marginalization means that members of certain groups cannot contribute equally to the formation of the shared meanings, concepts and interpretative tropes that operate within society. Those hermeneutical resources thus become structurally prejudiced against that subject group. Fricker’s main example of hermeneutical marginalization leading to hermeneutical injustice is the sexual harassment of women. As she explains, the hermeneutical marginalization of women had rendered the collective hermeneutical resource structurally prejudiced against them, ‘for it will tend to issue interpretations of that group’s social experiences that are biased because insufficiently influenced by the subject group, and therefore unduly influenced by more hermeneutically powerful groups (thus, for instance, sexual harassment as flirting […])’ (Fricker, 155). This concept, and the term ‘sexual harassment’, she holds, were not yet accessible in the 1950s, and accordingly women had great difficulty both in making sense of their own experiences of this kind and in communicating them to others. This denied them the opportunity to contest the wrongs done to them. The women’s movement thus developed a strategy of encouraging individual women to share their personal experiences, which

‘[…] awakened hitherto dormant resources for social meaning that brought clarity, cognitive confidence and increased communicative facility. […] [W]e can say that women were collectively able to overcome extant routine social interpretive habits and arrive at exceptional interpretations of some of their formerly occluded experiences; together they were able to realize resources for meaning that were as yet only implicit in the social interpretive practices of the time.’ (Fricker, 148)

In the computational paradigm, who is the new hermeneutically powerful group and who the hermeneutically marginalized? We must guard against citizens being so marginalized at the hands of computational systems (and the programmers who create them and the economic incentives that drive them) that they are prevented from understanding their own experiences, from communicating these to others, and from effectively contesting any wrongs contained within them.

Automated Hermeneutical Injustice

Similarly, without concepts like algorithmic discrimination and machine bias, it is much more difficult for those subjected to this type of discrimination to make sense of their (shared) experiences, to render them intelligible to others, and consequently to take action against the wrong they experience. For example, if ProPublica hadn’t uncovered how the algorithms used in the U.S. criminal justice system have a discriminatory effect on African-Americans, mere suspicion on the part of defendants might never have become actual knowledge of what computation was doing to them (Angwin et al., 2016). When an injustice is being done to you and you have neither a proper understanding of it nor a conceptual toolbox and corresponding vocabulary to contest it, how can you ensure that it is stopped? The victims of algorithmic discrimination in this case therefore suffered not just the primary wrong of discrimination, but also the secondary, subsequent harm of hermeneutical injustice, the latter aggravated and entrenched by automated systems. They were victims of a new kind of wrong – automated hermeneutical injustice.

Where Fricker identifies two levels of injustice, the incidental and the systematic, I suggest that computation is a factor that is distinct from, but also exacerbates, systematic hermeneutical injustice – one that warrants special attention, particularly in the legal context. Incorporating biases into opaque and barely intelligible computational systems makes it effectively impossible for the layperson and victim to contest any decision flowing from them. The people who fall victim to this type of bias need to be aware of the hermeneutic consequences of this technological force, and of the role it plays in their lives.

Why it matters

It is clear how this epistemic harm can flow from the use of data- or code-driven practices in the context of law. Argumentation, and the possibility of contestation, is what differentiates the Rule of Law from a Rule by Law (or, indeed, by Code). Consequently, forces that diminish the possibility of contestation are harmful to our commitment to that ideal. As Fricker points out, loss of epistemic confidence in one’s own ability to make sense of the world – a likely consequence of hermeneutical injustice – tends to stop one from cultivating the intellectual courage necessary to contest decisions imposed from without. These epistemic harms combine into a dangerous mixture that, when applied in a legal context, can become, institutionally and individually, an eroding force on the Rule of Law. Simply put, hermeneutics is core to law because it enables argumentation and contestation, and contestation in turn is a necessary condition of the Rule of Law. Hermeneutical justice is therefore a necessary condition for the proper functioning of any legal system under the Rule of Law, because it is what makes contestation meaningful – you cannot contest what you cannot name.


COHUBICOL’s central goal of developing a new hermeneutics for law and computer science is not to synthesize law’s hermeneutical resources with those of computer science into a kind of interdisciplinary Esperanto. Rather, it seeks to safeguard the democratization of the interpretive tools society relies on: to ensure that, at least notionally, everyone can partake in the generative process of developing shared meanings and interpretations of our social practices, and that hermeneutical justice is a core feature of the computational law of the future. It is thus crucial that computer scientists, programmers, designers and those who instruct them do not become the sole (hermeneutically powerful) groups with influence over our collective hermeneutical resources. We need to design these computational processes and systems, across institutions, to reflect hermeneutical justice and to guarantee the possibility of meaningful contestation.

Clearly, the hermeneutical injustices stemming from the use of code- and data-driven practices are additions to an already long list of concerns about the use of technology in the legal context. Importantly, however, they form the philosophical backdrop against which all these other issues operate. COHUBICOL seeks to identify what it means to be a human being in the era of computational law, through an exploration of these philosophical foundations as well as the – at times deeply personal – ramifications these technologies may have for the individuals subject to them.