My book is finally submitted! I am so happy that I can barely find the words. It was written in tea stalls, in bars, on buses, on planes, even in a police station. It was mostly written in copybooks, in the materiality of pen and paper where, far away from computing, I find my true writing self. It was submitted on a beautiful Norwegian afternoon, but in its pages are stories of digital ID injustice and resistance from my early days of human rights work in Palestine, all through my work on India’s Public Distribution System, up to narratives of digital ID research from Kenya to Jordan, from Colombia’s algorithmic welfare to Uganda’s introduction of digital ID in social protection. The chapters of my book Unfair ID are a hymn to hope, the hope that, by better understanding injustice in digital ID systems, we can be equipped to build systems that restore the fairness lost to undue exclusion, surveillance and the denial of basic entitlements.
The book does not have an ending, at least not one in the traditional sense of the term. That frustrated me a little when I realised it, but after a while I thought: maybe, unconsciously, it was always meant to have an open ending. My last chapter, titled “Imagining Fair ID”, seeks to build on awareness of injustice to imagine fairer forms of digital ID. It turned out to be the longest chapter in the whole book: for a text that promises to build on unfairness to imagine forms of ID that liberate instead of surveilling, that include instead of excluding, this is a greatly encouraging result. The chapter places the digital ID discourse in dialogue with that on digital rights, and I am delighted that the splendid report by Giulio Coppi – Mapping humanitarian tech: Exposing protection gaps in digital transformation programmes – was released in time for the book to engage with it. With over 400 documents reviewed, 40 interviews and many months of work, the report offers a groundbreaking account of the role of private companies in digital humanitarianism, providing up-to-date statistics and highlighting the explicit inscription of digital ID work within the domain of digital rights.

[Photo: Ration Shop, Karnataka, April 2018]
One of the topics reviewed in my final chapters is that of algorithmic welfare systems. A brilliant article published by Tapasya, Kumar Sambhav and Divij Joshi in January 2024 brought the topic back to my attention: I say “back” because Bidisha Chaudhuri’s work on programmed welfare had already illuminated the algorithmic layers that the Aadhaar biometric database made it possible to infuse into India’s Public Distribution System (PDS). Tapasya et al., however, take the argument further by studying Samagra Vedika, an algorithmic system implemented in the Indian state of Telangana that cross-references different government databases to determine who is eligible for foodgrains and other goods under the PDS. With inbuilt assumptions of algorithmic fairness, here meant as fairness towards the users of a programme on which millions of households depend for sustenance, the system flags those it determines to be non-eligible beneficiaries, who due to asset ownership, income or other factors are barred from receiving essential provisions under the PDS.
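To make the flagging mechanism concrete, here is a minimal sketch in Python of how cross-database eligibility checks of this kind typically work. It is purely illustrative: the actual rules, data sources and field names used by Samagra Vedika are not public, and everything below is a hypothetical simplification.

```python
# Purely illustrative sketch of cross-database eligibility flagging.
# All names, fields and records are hypothetical; the actual logic of
# Samagra Vedika is not public.

ration_cards = [
    {"card_id": "RC-001", "head_of_household": "A. Kumar"},
    {"card_id": "RC-002", "head_of_household": "B. Rao"},
]

# A second government database, e.g. vehicle registrations,
# treated here as a proxy for asset ownership.
registered_car_owners = {"A. Kumar"}

def flag_non_eligible(cards, car_owners):
    """Flag ration cards whose holder appears in an asset database."""
    return [c["card_id"] for c in cards
            if c["head_of_household"] in car_owners]

print(flag_non_eligible(ration_cards, registered_car_owners))  # ['RC-001']
```

Even in this toy form, the design choice is visible: mere presence in another database is read directly as evidence of ineligibility.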
But the algorithm seems to replicate the flaws that research on the Aadhaar-based PDS, and on the shift to a targeted PDS before it, had already denounced. The story of the elderly widow Bismillah Bee, told in the opening of the article, illustrates the point: her deceased husband, a rickshaw driver, was erroneously tagged as a car owner, causing the family to lose its PDS entitlement. Research on the case revealed that the man had been mistaken for a car owner with a name similar to his; the authorities chose, however, to follow the algorithm’s flawed decision even when confronted with the wrongful match. The case, the authors note, is emblematic of many more: from 2014 to 2019, the state of Telangana cancelled over 1.86 million existing ration cards and rejected 142,086 new applications without notice. While the technology was claimed to be highly precise, a Supreme Court-imposed re-verification of cancelled cards in April 2022 suggested that at least 7.5 percent of the cards, roughly 140,000, had been wrongfully rejected, meaning that as many households became unable to access basic food provisions.
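The mechanism behind Bee’s case, as the article describes it, is a loose match between records that refer to different people. Here is a hypothetical sketch of how such a false positive can arise, using Python’s standard-library string matching; the names and the threshold below are invented, not the system’s actual logic.

```python
# Illustrative only: how loose name matching across databases can
# produce a false positive like the one in Bismillah Bee's case.
# Names and threshold are hypothetical.
from difflib import SequenceMatcher

def same_person(a: str, b: str, threshold: float = 0.8) -> bool:
    """Treat two names as the same person if they are 'close enough'."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

rickshaw_driver = "Mohammed Saleem"  # hypothetical PDS record
car_owner = "Mohammed Salim"         # a different person in the vehicle DB

if same_person(rickshaw_driver, car_owner):
    # The household is flagged as car-owning and loses its entitlement,
    # with no step verifying that the two records are the same person.
    print("FLAGGED: entitlement cancelled")
```

The similarity score here comes out around 0.9, comfortably above the threshold, so two distinct people collapse into one record and the penalty falls on the wrong household.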
The argument of continuity, according to which the algorithm merely replicates exclusions of needy users that the PDS had generated for many years, has some foundation. The shift to a targeted system in 1997 meant that a capped number of entitled households had to be established for each Indian state, determining, in turn, the exclusion of many genuinely entitled households from a system of social protection. A logic prioritising the fight against wrongful inclusions later pervaded the introduction of the Aadhaar-based PDS: algorithmic systems like Samagra Vedika, it can be argued, are designed to replicate a process that excludes those deemed non-entitled, without necessarily offering opportunities for redressal. Bee’s case is, again, emblematic of a logic that bars the “flagged” person from any provision, without showing the same zeal to rectify wrongful exclusions. While the “programmed welfare” theorised by Chaudhuri is indeed qualitatively different, it can be said to reinstate the same exclusionary logic of previous systems.
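What strikes me about this logic is how naturally it can be written down as a one-sided update rule: there is a routine for cancellation, but no symmetric routine for restoration. A purely hypothetical sketch, not drawn from any real codebase:

```python
# Hypothetical sketch of the one-sided logic described above: flags
# translate directly into cancellations, with no counterpart routine
# for detecting and reversing wrongful exclusions.

def update_beneficiary_list(active_cards: set, flagged_cards: set) -> set:
    """Exclusion is automatic: flagged cards are simply removed."""
    return active_cards - flagged_cards

# Note what is missing: there is no restore_wrongly_excluded() step.
# A flagged household can re-enter the list only through an external
# appeal, if such a channel exists at all.
```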
And at the same time, there is more to the narrative than replication. While I was finishing my book, the book “Algorithms of Resistance: The Everyday Fight against Platform Power”, by the brilliant Tiziano Bonini and Emiliano Treré, was published: it unpacks exactly how algorithmic violence can be turned upside down, with communities building resistance through the same algorithmic tools that led to injustice. That is true for algorithmic welfare as well: the digitally infused nature of the system means that actors willing to instil fair values, such as the data-for-dignity proposition theorised by Joan Lopez for Colombia’s prosperity scoring system Sisbén, have the scope and material ability to do so. This turns the discourse on algorithmic welfare from one of injustice into one of hope: the hope that, by recognising algorithmic leverage in social welfare systems, exclusions can be combated rather than enabled, and fairness can be designed directly into the programmes rather than bypassed. It is the same hope that, applied to digital identity, has informed my book Unfair ID.
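What might it mean, concretely, to design fairness directly into such a programme? One answer, sketched hypothetically below and not drawn from any existing system, is to make a flag open a reviewable case rather than trigger an automatic cancellation.

```python
# Hypothetical sketch of "designing fairness in": an algorithmic flag
# opens a case for human review instead of cancelling an entitlement.

def process_flag(card_id: str, matched_record: dict, review_queue: list) -> None:
    """Route a flag to human review; the entitlement continues meanwhile."""
    review_queue.append({
        "card_id": card_id,
        "evidence": matched_record,   # e.g. the record that triggered the flag
        "status": "pending_review",   # no cancellation until a human decides
    })
```

The difference from the earlier sketches is small in code and large in consequence: the burden of error shifts from the household back onto the system.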
This book will be published at a time of tension between injustice and resistance, and stories of algorithmic welfare have so much to add to it. I cannot think of a better site from which to imagine new routes to rebuilding fairness!