A substantial part of the discourse on injustice induced through digital ID concerns the exclusion of genuinely entitled users from basic-need services. While exclusion is important in the digital ID narrative, other, more silent forms of injustice also need to be seen and studied.
I am on the train to the Data Justice Conference 2023, held at the Data Justice Lab in Cardiff, and I simply cannot wait for two splendid days with the Data Justice research community. I last attended in person in 2018, and since then the community has grown in spectacular ways. The programme for the next two days reflects that epistemic growth, featuring talks on civic participation, data solidarities, and – more so than in the past – engagements with low-income contexts where injustice is compounded by structural vulnerability. I can’t wait to discover how much learning can take place in just two days of work!
My train trip – along the beautiful Welsh coastline – is also an occasion to reflect on the state of the field, or at least on the digital ID-centred section of it to which my work contributes. As academic engagements with data justice have evolved, and ID research has arguably become increasingly prominent within them, attention to data-induced harm produced by digital identification and authentication has grown. Paradigmatic of this is the dissection of harms induced by digital identity schemes detailed in the report “Paving a digital road to hell: A primer on the role of the World Bank and global networks in promoting digital ID”, published in 2022 by Katelyn Cioffi, Victoria Adelmant and Christian van Veen’s team at the Center for Human Rights and Global Justice at New York University. The title of Cioffi’s presentation at the upcoming Data Justice Conference builds on the report’s findings to ask a direct question: can collective rights and remedies lead to more effective recourse for digital ID-enabled harms?
As I prepare for the next two days, two considerations come to mind. The first is that data justice research – charged by the usual suspects with being “destructive” of tech-for-good ideas rather than “constructive” of new ones – is increasingly proving to be quite the opposite, combining a focus on tackling the root causes of injustice with one that actively seeks routes for overcoming them. Cioffi’s quest for “collective rights and remedies” as repair for harm is a powerful instantiation of this. Further instances can be drawn from the recent Data Justice book written by the Conference’s organisers, whose chapter 8 pointedly looks at data and collective action, and from a programme that looks forward to new solutions and fields rather than backward. That my amazing students Alina Krogstad, Guro Handeland and Johannes Skjeie are presenting work on printer-enabled scannable codes in Malawi’s rural community clinics only reminds me of the importance of building solutions to combat unfair ID.
A second consideration is less popular. It concerns the extent to which user exclusions, sociotechnically enabled by ad hoc artefacts, actually make up the bulk of harms induced by digital ID.
On the one hand, exclusions are a fundamental part of the problem. Quantitative studies, estimating the share of beneficiary populations that are entitled to a given scheme but cannot access it due to its biometric turn, show precisely how exclusion plays out. Instances concerning my main research object, the incorporation of India’s Aadhaar into the country’s Public Distribution System (PDS), reveal significant parts of the beneficiary population being biometrically barred from services they could previously access. As a recent award-winning paper by Pragyan Thapa, Devinder Thapa and Øystein Sæbø reminds us, quantification is a powerful epistemic device in ICT4D, and one that speaks loudly about what exclusion means for users whom biometric infrastructure has barred from programmes direly needed for household sustenance.
On the other hand, exclusions do not act alone. It is the hidden layers of injustice that I believe need attention, and that, I am convinced, this conference will be an important occasion to explore.

Street market workday, Kolkata, September 2015
One such type is informational injustice. It starts from the very fact that users may not be in a position to question how their data are treated once captured for access to vital programmes, such as humanitarian assistance or social protection schemes. When Soumyo Das and I discussed Aadhaar’s data handling with beneficiaries of India’s national Public Distribution System back in 2018, we most often received blank looks: in the state of Karnataka, where we did our work, Aadhaar’s biometric identification was mandatory for accessing essential food commodities, so what questions could users possibly have asked? It is this inability to ask questions that paves the way for further injustice, such as the instrumental application of biometric technologies to transition to a cash transfer architecture that users regard with vast suspicion. The structural inability to ask questions, built up by putting users in the either-or position of being biometrically profiled or excluded from aid, creates the conditions for an informational injustice that is more silent and sneaky than the blatant effect of exclusions.
Design injustice, which I first encountered through meeting Sasha Costanza-Chock at this same conference in 2018, can be equally hidden behind artefactual layers. This is a conference of technologies that look largely user-friendly: computerised databases meant for the “social inclusion” of forcibly displaced persons in host societies. Digital wallets meant for “empowerment” through “financial inclusion” (if you are here, make sure you attend Margie Cheesman’s presentation on the lived realities of refugees subjected to digital wallet-based payments in cash-for-work programmes). Digitised workplaces. All are technologies designed in the best tech-for-good spirit, only for the narrative to be reversed once the user – in a deconstructed-historiography fashion – is given voice to tell the real story behind the fashionably-tech architecture. This is where we hear the desperation of social protection programme users barred from vital food security schemes, and the helplessness of workers left without information on which working days their biometrically-authorised payment actually covered. This is where user-friendly design hides very precise artefact politics, which the next two days will be an important occasion to delve into.
Exclusions are an important side of injustice. I want to dedicate this conference to studying what lies behind them.