A “dark side” logic dominates the discourse on design-related injustice in digital identity systems. Here I dispute this view, arguing that it is the very substance of biometric social protection, rather than a peripheral “dark side”, that harms its users.
When Soumyo and I first introduced the concept of design-related injustice, in relation to the use of Aadhaar within the Public Distribution System (PDS) in Karnataka, we had in mind a more circumscribed notion than what the concept later became. We had theorised design-related injustice as the injustice resulting from the misalignment of technology with user needs. Our theorisation came from witnessing entire families queuing up at the ration shops, in the hope that at least one household member would be able to authenticate through the biometric recognition of the Aadhaar system. While combating erroneous inclusion, i.e. the provision of rations to non-entitled users, the system did nothing against erroneous exclusion, i.e. the exclusion of genuinely entitled beneficiaries, a problem that the introduction of Aadhaar in the PDS actually magnified.
The families queuing together at the ration shops, as well as the frustration of those users for whom authentication did not work, inspired our first idea of design-related injustice. Its basis was the concept of design-reality gaps, defined with Richard Heeks as gaps between users’ reality and the world of technology designers, whose assumptions may be very different from the reality lived by users. A common design-reality gap emerges when the two worlds are markedly distant from each other. That is the case for the Aadhaar-based PDS, whose designers made the technology capable, at least on paper, of fighting erroneous inclusions. But they did not fulfil the main need of users, that of combating the exclusion errors that leave people hungry, hence generating the misalignment that our original notion of design-related injustice spoke about.

Queuing outside the ration shop. Bengaluru, Karnataka, April 2018
The concept, however, evolved significantly over time. Sasha Costanza-Chock’s remarkable book “Design Justice: Community-Led Practices to Build the Worlds We Need” marked a cornerstone in that evolution, theorising how injustice can be directly embedded into technology design. Opening with the example of airport scanners and the injustice performed through them on transgender bodies, Costanza-Chock’s book inspired our rethinking of the injustice performed through biometric technologies on people accessing food rations. Such a rethinking moves towards a more thorough notion of design injustice: one in which the misalignment with user needs, initially central for us, is just one component of a wider ensemble. An ensemble in which injustice is the substance, rather than just a “dark side”, of systems that regulate access to social protection for millions of people globally.
At least two considerations inspire this thought. First, technologies like the Aadhaar-based PDS embody a nexus – referred to as the authentication-authorisation nexus – which subordinates authorisation to essential services to the correct authentication of users. This nexus is meant to combat the leakage rates that diminish the effectiveness of large anti-poverty programmes, including India’s PDS. Biometric authentication of users is meant to ensure that they are genuinely entitled: but at the same time, it does nothing to combat the harm suffered by those for whom authentication does not work. My earlier Karnataka fieldwork revealed that authentication failure is not only due to the misreading of bodies, but also to hidden issues – such as failed connectivity of point-of-sale machines to the central ID database – which result in the outright injustice of denial of food rations to users.
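The logic of the nexus can be made concrete with a minimal sketch. This is hypothetical illustrative code, not the actual PDS software: the function names and the two failure conditions (biometric mismatch, database connectivity) are assumptions drawn from the fieldwork account above. The point it shows is structural: because authorisation is strictly subordinated to authentication, a dropped connection and a misread fingerprint collapse into the same outcome, denial.

```python
# Hypothetical sketch of the authentication-authorisation nexus.
# Not the real PDS implementation; names and conditions are illustrative.

def authenticate(fingerprint_ok: bool, db_reachable: bool) -> bool:
    """Authentication succeeds only if the biometric matches AND the
    point-of-sale machine can reach the central ID database."""
    return fingerprint_ok and db_reachable

def authorise_ration(fingerprint_ok: bool, db_reachable: bool) -> str:
    # The nexus: no authentication, no authorisation.
    # There is no alternative route to the entitlement.
    if authenticate(fingerprint_ok, db_reachable):
        return "ration issued"
    return "ration denied"

# A genuinely entitled user, with a matching fingerprint, is still denied
# purely because of failed connectivity:
print(authorise_ration(fingerprint_ok=True, db_reachable=False))  # ration denied
```

The sketch makes visible why the design fights inclusion errors but not exclusion errors: every branch that is not a fully successful authentication terminates in denial.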
Secondly, design-related injustice is reproduced across systems. Before switching to Aadhaar-based authentication, the Karnataka PDS adopted an independent system – based on weighing scales, in turn connected to biometric point-of-sale machines – that presented similar issues to the Aadhaar-based PDS. In the older system, machines would announce the food quantity weighed through a speaker: that speaker, however, was often muted, as we found in our work across ration shops in 2014-2015. Most importantly, that system also excluded non-recognised users from food provision: but a backup system made it possible to sell rations outside it, based on manual verification by the ration dealer. In the Aadhaar-based system, by contrast, the injustice of exclusion is reproduced but no backup system is available, which may contextualise the hunger deaths of excluded users reported by The Hindu in the state of Jharkhand.
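The design difference between the two systems can also be sketched in a few lines. Again this is a hypothetical illustration, not either system’s actual code: what it captures is that the older weighing-scale system left a manual fallback path open when biometric recognition failed, while the Aadhaar-based system closed it.

```python
# Hypothetical sketch contrasting the two Karnataka PDS designs.
# Illustrative only; function names and logic are assumptions.

def old_system(biometric_ok: bool, dealer_verifies: bool) -> str:
    if biometric_ok:
        return "ration issued"
    # Backup path: the sale could happen outside the system,
    # based on manual verification by the ration dealer.
    if dealer_verifies:
        return "ration issued (manual backup)"
    return "ration denied"

def aadhaar_system(biometric_ok: bool) -> str:
    # No backup path exists: authentication failure is final.
    return "ration issued" if biometric_ok else "ration denied"
```

The exclusionary core is identical in both designs; what the newer system removes is the escape hatch that previously softened its consequences.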
The embeddedness of injustice in the authentication-authorisation nexus, and its reproduction across different versions of biometric authentication systems, leads to a fundamental rethinking of the notion of design-related injustice that we had originally proposed. We had theorised a bare misalignment with user needs, thinking the problem lay in the gap between the designers’ world and the need of users to access the rations that sustain their livelihoods. But field stories show that injustice operates at a much deeper level: it is a form of injustice directly perpetrated through technology design. The evolution of design-related injustice can inspire, we hope, data justice research on digital identity beyond the case of India’s food rationing system.