Discovering Unfairness: Using the Access Now #DigitalID toolkit in class today!

Today, my students in the ICTs and Global Inequalities course at the University of Oslo found a scenario different from the usual: four card stacks named Persona, System, Harm and Mitigation, ready to turn the class into a “choose your own adventure” game aimed at discovering the harmful effects of digital ID systems, and their mitigation. Here is a short story of how it went!

I love gamified learning. I am especially keen on the interactive experience it affords, the learning-by-doing practice, the application of theoretical concepts to reality. When I interviewed the one and only Marianne Diaz Hernandez for my book Unfair ID, among many extremely insightful notes, she shared with me the construction of the Digital ID Toolkit, a game built with the #WhyID initiative of Access Now for players to experience real-life scenarios of unfairness in digital ID. As someone who likes her games, but was not raised with gamified learning, I embraced the gamification of my course at the University of Oslo as a route to conveying difficult concepts through relatable situations, and I am so happy to be able to report on this experience today.

What is the Digital ID Toolkit?

The Digital ID Toolkit is a “choose your own adventure” game consisting of four stages: Persona, System, Harm and Mitigation. At the beginning of the game, each player (or team) chooses a Persona card, which describes a situation (a fictional persona; a real-life scenario) of predicament in relation to digital ID. Let me share one of the persona scenarios my students dealt with today:

Anya is a trans nonbinary citizen of Iceland with a nonbinary gender marker on their passport. Anya needs to move to Italy for work, and Italy requires foreigners to carry identification at all times. The Italian ID card is intended for both digital and physical use, with biometric data printed on the card and stored on a contactless chip. The ID includes full name, place and date of birth, the holder’s picture, and a fingerprint from each hand, and only allows for male or female gender markers. The card is not mandatory for Italian citizens but is the ID form most widely accepted in both the public and private sectors. The residency permit available to eligible foreigners is also digital and does not allow for nonbinary markers. If public security officers ask a person to identify themselves and are not satisfied by the answer, they may hold the person in custody until their identity is ascertained. The discrepancy between gender markers on Anya’s Icelandic and Italian IDs as well as the forced adoption of a gender marker with which Anya does not identify both increase Anya’s risk of being subjected to further investigation.

What is going to happen to Anya? The students will discover it through a choose-your-own-adventure journey, which consists of three building blocks:

– System – the students draw a System card, which lists details of the digital identity system that a persona like Anya is experiencing. Such systems – including biometrics, centralised systems, identity databases – bear features that crystallise the adverse outcomes Anya experiences: for instance, the absence of their identity in the drop-down menu they encounter, and the hard consequences of not identifying with one of the options available.

– Harm – based on the type of system they selected, the students draw a Harm card, which details the negative consequences of being caught up in that system. For instance, Anya’s non-recognised identity leads to very material risks: of not being able to apply for a job in their host country, of not being visible to the state, but also, on the personal and emotional level, of being subjected to dehumanisation, their identity reduced to a bundle of data that cannot communicate the predicament they are in. At this point, it is mitigation that needs to be looked at.

– Mitigation – as a final step, the students draw a Mitigation card (and we didn’t get there, because we spent a long time on the system-harm link!), where they can explore different routes to mitigating the harm suffered by the person. What routes to mitigation does Anya have? Advocacy for nonbinary and queer rights; unionisation; data minimisation (as in, systems that capture only the essential data points of the human being, without collecting more than those). The final step is important because it conveys a central message: if we want to understand the ways to tackle digital ID harm, we first of all need to see how that harm happens, and then link the properties of ID platforms to specific harmful outcomes, to imagine and craft mitigation.

Today’s class was a generative experience of ID tales and memories. I teach a class of internationals (being myself an international resident of Norway), and the way the stories of the ID toolkit overlapped with the students’ own, the appraisal of the harmful mechanisms, the creative imagination of ways out of ID oppression, were a great lesson for me. I am so happy that the #WhyID initiative made this possible.

Now, on to writing my lecture on biometric borders based on today’s insights. It’s 21.08 here in Oslo and I have never felt so motivated. Never stop learning!

Information-Erasing Artefacts

ID artefacts are bearers of information: on the subject’s identity, but also on many other aspects concerning them – their entitlements, their household size, the markers to which certain entitlements are connected. As such, ID artefacts play a significantly greater role than simply affording authentication at the point of access to a given service: they tell significant parts of a person’s story, materialising or denying the opportunity to access particular entitlements. The information-bearing role of such artefacts is, to put it with Mareile Kaufmann’s latest, great book, a route to “Making Information Matter”: information on users, but also on what their demographics, citizenship status, family and socioeconomic classification mean for access to key means of livelihood.

Work on India’s ration cards, in primis Tarangini Sriraman’s great book “In Pursuit of Proof: A History of Identification Documents in India”, has expanded on the materiality of identification documents and its consequences. Ration cards are the central document for accessing subsidised items under India’s Public Distribution System (PDS), the country’s food security system, which provides subsidised foodgrains and other commodities to entitled households. Crucial to the ration card is its nature as a household-based document: having moved away from an individually issued form in July 1952, the card is, Sriraman notes, a reflection of the Indian government’s use of “family as a category of governmentality”. It is by virtue of household membership, rather than their own individuality, that people are subjects of rights in the PDS: and at the same time, it is in the materiality of the ration card that key information is contained and enacted. Pre-biometric, older forms of ration cards consisted of a paper booklet with numbered spaces, where each month, at the time of ration delivery, the ration dealer would put a physical sign or stamp indicating that the ration had been collected. This provided central information to (a) the ration dealers themselves, and (b) the users, for whom an empty space on the card constituted material proof that a ration had not been delivered.

Paper-based receipt, Karnataka, April 2018

The transition to digital ID artefacts, moving core aspects of identification, authentication and authorisation to the digital world, may leverage digitality to strengthen the accountability and transparency promised in the transmission of key information. At the same time, the question of which information is preserved – and which is lost – in the transition from a physical to a digital ID artefact remains open. A core example is offered by the work of Grace Carswell and Geert De Neve, who investigated the transition of India’s state of Tamil Nadu from physical ration cards to smart cards, carrying a scannable QR code used by people for authentication in ration shops. The smart card, note Carswell and De Neve, promises easy and secure identifiability of individuals at the point of sale: the QR code affords retrieving the person’s name, family size, and all details connected to the determination of entitlement for that month. At the same time, the card promises to delink the act of identification from the person’s fingerprint, an advantage especially for elderly people whose mobility – and ability to physically visit the ration shop – is limited. All these notes make the smart card an artefact that promises informational justice, defined as the ability of an ID artefact to reflect and enact correct information on subjects.

But the parallel story, concerning information that is lost in the transition to digital ID artefacts, may be as important as that on information that is gained. Carswell and De Neve’s work also illustrates this point. As their research on the Tamil Nadu PDS reveals, the transition to a smart card eliminated the material component of the acknowledgement of ration delivery: before, it was a stamp on the ration card booklet, physically applied by the ration dealer, that constituted proof of delivery, and its absence could be used to prove that ration collection had not yet happened. In other words: an absent collection stamp was a tool of bargaining power. With the transition to smart cards, however, proof of delivery was delegated to a text message on the registered person’s phone. This posed problems of a gendered nature, due to ration cards often being registered in the name of a male member of the family rather than that of women. It also posed problems of accessibility, due to text messages being sent in English, a language many PDS users in the state do not speak. But at the heart of the problem is a transition in which the bargaining power of an empty space on a ration card – physical proof that collection has not happened – is lost: users cannot show the “lack of reception” of a text message, which leaves them with no way to prove that a month’s ration delivery has not happened. This information is lost in the digital artefact, while the physical artefact, the paper booklet, was explicitly designed to carry it.
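The asymmetry can be made concrete with a small sketch. The snippet below is purely illustrative – the class names and fields are my own stand-ins, not Carswell and De Neve’s data model – but it shows how the paper booklet carries negative information (an empty slot proves non-delivery) that an SMS channel structurally cannot:

```python
from dataclasses import dataclass, field

@dataclass
class PaperRationBooklet:
    """Physical artefact: one numbered space per month, stamped on delivery."""
    stamps: dict[str, bool] = field(default_factory=dict)  # month -> stamped?

    def record_delivery(self, month: str) -> None:
        self.stamps[month] = True

    def can_prove_non_delivery(self, month: str) -> bool:
        # An empty space is itself material evidence: the holder can show
        # the blank slot to contest a missed ration.
        return not self.stamps.get(month, False)

@dataclass
class SmsReceiptChannel:
    """Digital artefact: proof of delivery is a text message sent on delivery."""
    messages: list[str] = field(default_factory=list)  # months with a delivery SMS

    def record_delivery(self, month: str) -> None:
        self.messages.append(month)

    def can_prove_non_delivery(self, month: str) -> bool:
        # The absence of an SMS is not demonstrable: a user cannot exhibit
        # a message they never received, so non-delivery leaves no trace
        # the user controls. This is the information the transition erases.
        return False
```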

In my forthcoming book Unfair ID, I propose the notion of information-erasing artefacts to conceptualise the loss of information that the transition from physical to digital forms of ID can involve. The transition to smart cards in the PDS is a poignant example; another is in the work of Margie Cheesman, who studied a blockchain-based digital wallet offered to women refugees in a cash-for-work programme in Jordan. Women refugees, Cheesman notes, used to receive an envelope containing the cash corresponding to worked days and hours: this made it possible for them to be sure of the exact correspondence of salaries to the time worked, as well as ensuring the actual usability of the cash. But the transition to the blockchain-powered, biometrically enabled EyePay machines introduced informational hurdles that were not present before: recognition of the user through iris-based matching produced a paper receipt reporting only some of the key information needed. The receipts, refugees reported to Cheesman, did not state what each payment was for, nor how many and which working days the payment corresponded to: they only enabled the user to collect money from a cash counter, without knowing which days’ salary was being collected. The issue, Cheesman continues, extends to the material usability of cash, which stands in stark contrast with the “balance” held in a digital wallet: paper receipts, she notes, were held securely to preserve their materiality, with women folding receipts into their bras during the working day to prevent the ink from rubbing off.

The transition to smart ration cards in a large food security programme, as well as that to a blockchain-powered digital wallet in a cash-for-work scheme, invite thinking about a type of information loss that a transition to digitality – with its aims of transparency and accountability – does not necessarily contemplate. Against this backdrop, I find the idea of information-erasing artefacts a powerful one to conceptualise this type of informational loss. It is not, I argue in the book, only unintended loss: it is meaningful erasure, a point that the information-deficient receipts described in Cheesman’s work bring to material visibility. The informational advantages of digitality do deserve attention, as is shown, for instance, by electronic weighing machines in India’s PDS: electronic weighing of goods visualises quantities sold, and reduces the scope for manipulation at the last mile of large food security programmes. But to be complete, the story of informational justice in digital ID needs theoretical tools to conceptualise erasure, and that is where the idea of information-erasing artefacts can be of some use. More work is needed on the informational consequences of such a game-changing transition.

Unfair ID and Algorithmic Welfare

My book is finally submitted! I am so happy that I can barely find the words. It was written in tea stalls, in bars, on buses, on planes, even in a police station. It was mostly written in notebooks, in the materiality of pen and paper where, far away from computing, I find my true writing self. It was submitted on a beautiful Norwegian afternoon, but in its pages are stories of digital ID injustice and resistance from my early days of human rights work in Palestine, all through my work on India’s Public Distribution System, up to narratives of digital ID research from Kenya to Jordan, from Colombia’s algorithmic welfare to Uganda’s introduction of digital ID in social protection. The chapters of my book Unfair ID are a hymn to hope: the hope that, by better understanding injustice in digital ID systems, we can be equipped to build systems that restore the fairness lost to undue exclusions, surveillance and denial of basic entitlements.

The book does not have an ending, at least not one in the traditional sense of the term. That frustrated me a bit when I realised it, but then I thought: maybe, unconsciously, the open ending was planned all along. My last chapter, titled “Imagining Fair ID”, seeks to build on awareness of injustice to imagine fairer forms of digital ID. It is the longest chapter in the whole book. For a text that promises to build on unfairness to imagine forms of ID that liberate instead of surveilling, that include instead of excluding, this is a greatly encouraging result. The chapter puts the digital ID discourse in relation to that on digital rights, and I am delighted that the splendid report by Giulio Coppi – Mapping humanitarian tech: Exposing protection gaps in digital transformation programmes – was released in time for the book to engage with it. With over 400 documents reviewed, 40 interviews and many months of work, the report offers a groundbreaking illumination of the role of private companies in digital humanitarianism, offering up-to-date statistics and highlighting the explicit inscription of digital ID work in the domain of digital rights.

Ration Shop, Karnataka, April 2018

One of the topics reviewed in my final chapters is that of algorithmic welfare systems. A brilliant article published by Tapasya, Kumar Sambhav and Divij Joshi in January 2024 brought the topic back to my attention: I say “back” because Bidisha Chaudhuri’s work on programmed welfare had already illuminated the algorithmic aspects that the Aadhaar biometric database allowed infusing into India’s Public Distribution System (PDS). Tapasya et al., however, take the argument forward by studying Samagra Vedika, an algorithmic system, implemented in India’s state of Telangana, that crosses different databases to determine the list of people eligible for foodgrains and other goods under the PDS. With inbuilt assumptions of algorithmic fairness – in this case meant as fairness towards users of a programme on which millions of households depend for sustenance – the system flags what it determines to be non-eligible beneficiaries, who due to asset ownership, income or other factors are barred from receiving essential provisions under the PDS.

But the algorithm seems to replicate the flaws that research on the Aadhaar-based PDS, and on the shift to a targeted PDS before it, had already denounced. The story of elderly widow Bismillah Bee, told in the opening of the article, is illustrative: her deceased husband, a rickshaw driver, was erroneously tagged as a car owner, resulting in the family losing its PDS entitlement. Research on the case revealed that the man had been mistaken for a car owner with a name similar to his: authorities chose, however, to follow the algorithm’s flawed decision, even when faced with the wrongful input. The case, note the authors, is epitomic of many more: from 2014 to 2019, the state of Telangana cancelled over 1.86 million existing ration cards and rejected 142,086 new applications without notice. While the technology was claimed to be highly precise, a Supreme Court-imposed re-verification of cancelled cards in April 2022 suggested that at least 7.5 percent of the cards had been wrongfully rejected, leaving as many households unable to access basic food provisions.
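It is worth seeing how little it takes for such a false match to happen. The sketch below is not Samagra Vedika’s actual logic – the matcher, the threshold and the names are all hypothetical – but it illustrates how a fuzzy join across databases, with no human review of near-matches, can flag a household like Bee’s:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity, standing in for a real record-linkage matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flagged_as_car_owner(beneficiary: str, vehicle_registry: list[str],
                         threshold: float = 0.8) -> bool:
    # Cross-database join: any vehicle-registry name close enough to the
    # beneficiary's triggers a flag. Nothing in this logic lets the flagged
    # household contest the match before losing its entitlement.
    return any(name_similarity(beneficiary, owner) >= threshold
               for owner in vehicle_registry)

# Two different people with similar (hypothetical) names collapse into one
# "car owner":
print(flagged_as_car_owner("Mohammed Hussain", ["Mohammad Hussein"]))  # True
```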

The argument of continuity, according to which the algorithm merely replicates exclusions of needy users that the PDS has generated for many years, has some foundation. The shift to a targeted system in 1997 meant that a capped number of entitled households had to be established for each Indian state, determining, in turn, the exclusion of many genuinely entitled households from a system of social protection. A logic prioritising the fight against wrongful inclusions has since pervaded the introduction of the Aadhaar-based PDS: algorithmic systems like Samagra Vedika, it can be argued, are designed to replicate a process that excludes those deemed non-entitled, without necessarily offering opportunities for redressal. Bee’s case is, again, epitomic of a logic that bars the “flagged” person from any provision, without demonstrating the same anxiety to rectify wrongful exclusions. While the programmed welfare theorised by Chaudhuri is indeed qualitatively different, it can be said to reinstate the same exclusionary logic of previous systems.

And at the same time, there is more to the narrative than replication. While I was finishing my book, “Algorithms of Resistance: The Everyday Fight against Platform Power”, by the brilliant Tiziano Bonini and Emiliano Treré, was published: the book unpacks exactly how algorithmic violence can be turned upside down, with communities building resistance through the same algorithmic tools that led to injustice. That is true for algorithmic welfare as well: the digitally infused nature of the system means that actors willing to instil fair values, such as the data-for-dignity proposition theorised by Joan Lopez for Colombia’s prosperity scoring system Sisbén, have the scope and material ability to do so. This turns the discourse on algorithmic welfare from one of injustice into one of hope: the hope that, by recognising algorithmic leverage in social welfare systems, exclusions can be combated rather than enabled, and fairness can be designed directly into the programmes rather than bypassed. It is the same hope that, applied to digital identity, has informed my book Unfair ID.

This book will be published in a time of tension between injustice and resistance, and stories of algorithmic welfare have so much to add to it. I cannot imagine a better site from which to chart new routes to rebuilding fairness!

Digital ID Research under War Crimes

I have long hoped that research in the digital ID space could be an epitome of justice, and that overcoming unfair forms of identification and authentication could strengthen fundamental rights where these are threatened. The state of the world does not support this conclusion. Denial of digital rights is reinforcing existing forms of oppression: the ongoing war on Palestine, in which the Gaza Strip is subject to a long telecommunications blackout, is dramatically epitomic of this. The Internet shutdown that started in October 2023 goes in parallel with heavy bombing and the suspension of aid, with over 11,000 civilians killed since the beginning of Israeli bombings in October and many more injured, missing or in extremely precarious conditions. There is no way this scale of violence and injustice can be mitigated by any words.

With the genocidal violence that characterises it, Israel’s war on Gaza is the latest in a long string of attacks on Internet freedom. In its latest yearly report on the matter, Access Now recorded 187 Internet shutdowns in 35 countries in 2022 alone, with 48 shutdowns coinciding with human rights abuses in 14 countries. With its promises associated with welfare and social assistance, digital ID has been at the centre of global development agendas: and at the same time, it cannot be seen in isolation from the digital rights of the people it affects. Internet shutdowns show this starkly: denial of digital rights results in the negation of the most fundamental human rights, the same negation that instantiations of unfair ID have illustrated. The production of injustices through unfair ID epitomises a much larger phenomenon, in which violations of rights in the digital space are further reified through IT artefacts.

Wall painting, Aida refugee camp, July 2013

In this landscape of rights violations and fights for redressal, a data justice perspective supports the study of complex digital ID phenomena. It is the data justice perspective that equipped the lexicon used in this blog, and that helped make sense of a literature – that on digital ID – whose dispersion across fields can be difficult to navigate. Different thematic foci in digital ID tend, in addition, to create islands of knowledge that do not openly communicate with each other. This jeopardises the field’s ability to see the impact of digital ID on people, and to voice the perspectives lived by its users. In a world in which “doing digital ID research” is sometimes perceived as focusing on providers, the current times remind us that the people’s perspective is the one that most directly and dramatically reflects digital injustice.

Living in a time when war crimes jeopardise the most basic human and digital rights involves the responsibility to investigate the genesis of injustice. Discussions at the Surveillance Studies Network and the Data Justice Conference have revolved around this point, bringing to light new conceptual lenses to make sense of the phenomenon. Margie Cheesman’s concept of infrastructural justice, conceived as the uneven benefits which sociotechnical systems instantiate, comes to mind in noting that injustice – rather than being incidental – can be directly inscribed in the artefacts that produce it. This questions the idea of a “dark side” of technology, which presents harm as a peripheral “side” effect: denial of human rights can be a direct product of technology design, a product which Information Systems research has a direct responsibility to engage.

The two lenses of data justice and data activism are inseparable from each other in the narration of unfairness in digital rights. Data justice affords unpacking the dimensions of digitally induced harm; the notion of data activism illuminates routes to resistance, and the shapes it takes in an increasingly datafied world. Data activism combines, note Milan and van der Velden (2016), proactive and reactive components: while reactive data activism is defined as “tactics of resistance to massive data collection”, proactive data activism illuminates data advocacy, opening routes to investigate resistance enacted through the same digital tools that perpetrate oppression. The epistemic power of data activism is especially needed in these times of violence.

In a world where digital tools are bent to violence, and to its blind perpetration against civilians, the power of anti-injustice artefacts provides a light of hope in building a road to freedom. Combating digital injustice means taking stock of harm, but also of the light of resistance that communities across the world have enacted. It is in this space that data justice and data activism, as epistemic tools for direly needed research, are inseparably interlocked.

Against Reverence

Some of the most groundbreaking innovation in digital identity research lies in the work of early career scholars, who develop key theoretical concepts to navigate the field. In this post I expand on how work conducted by early career colleagues, met before and during this summer of conferences, has deeply informed my understanding of digital ID.

I was raised in an academic field, Information Systems, where “standing on the shoulders of giants” is a not-so-silent, overarching precept. For all that it can be questioned, the precept informs the field: senior editor panels at conferences see packed rooms of early-career colleagues, striving to find a space to grasp wisdom from seniors and even, in the wildest dreams, bag the chance to ask a question of the “big ones”. Innovative, thoughtful ideas are brutally bashed for insufficient reference to “established” work, with the more or less (often less) kind invitation to return to the drawing board. The idea of seniority as reverence has affected the field so much that, in my observation, we refrain from even questioning it anymore. When bullied by a senior scholar in front of a whole academic seminar audience, simply for daring to ask a question, 28-year-old me cried all the way through the bus route to my north London sublet, thinking that what had happened was just right. If the senior implied that my question was dumb, then it definitely was. And I deserved to be laughed at by the whole seminar room, a laugh I won’t forget; in fact, I can still hear it.

What I didn’t know back then is how wrong the colleague was. Not the person specifically, but the established, deeply ingrained idea that academic knowledge, to be accepted, should come from well-established “seniors”. This is a recurring thought after coming back from conferences, where the packed-room-for-seniors-panel situation is a regular occurrence (and a whole post should be written on the demographics of such panels, and the hierarchies they reinforce); last year I wrote a Medium post on the topic. I reiterate it this year, at the dawn of one of the largest convenings in my field: scientific knowledge comes from mutual learning, rather than from the blind reverence that the field ingrains in us. And as I work on finishing my first book, Unfair ID, I am beyond blessed to be learning from colleagues who, having come recently to the field, are bringing the most important innovations to it. Below are just some of my notes from recent, massively enriching conference encounters.

As someone who started studying digital ID during her PhD, understanding biometric tech markets has always been a key challenge for me, and one that I have never completely gotten my head around. The digital identity solutions market is forecast to grow from US$23.3 billion in 2021 to US$49.5 billion by 2026: and at the same time, technologies of biometric identification are diffusing rapidly in state and humanitarian domains, where they purport to provide the type of technology required by development objectives, epitomised by SDG 16.9 on legal identity. A report on the status of biometric humanitarianism released just this month is clear on the matter: private tech, on which large humanitarian schemes are already predicated, is becoming even more central to ensuring the authentication-authorisation nexus on which targeted schemes are based. Two years ago, a blog post by Aaron Martin and Linnet Taylor, written just after Identity Week in London, noted how private players’ narratives explicitly depicted human development as a business opportunity, leaving a big question mark over the interests of the protected.

It was in my effort to get my head around the logics of biometric markets that I met Carolina Polito, whose work with Cristina Alaimo was presented at the European Conference on Information Systems in Kristiansand. Their paper “The Politics of Biometric Technologies: Border Control and the Making of Data Citizens in Africa” uses the notion of the “politics of the biometric artefact” to understand the role of private contractors in the establishment of border control markets. Developing the idea of Langdon Winner’s 1980 essay “Do Artifacts Have Politics?”, the work presented by Polito moves significantly beyond it: the notion of a “politics of biometric artefacts” is deployed to understand the complex enmeshment of border control with economic profit, inviting questions on “who benefits” from the marketed deployment of advanced recognition technology at borders. Another recently released report, “Artificial Intelligence: The New Frontier of the EU Border Externalisation Strategy”, explores the role of technologies marketed as AI in border externalisation, a strategy widely used by EU member states to stop migration.

More knowledge is also being developed on topics that, such as the effects of biometrics on social protection programmes, have long been established in the field. A widely researched digital ID platform, India’s Aadhaar as incorporated in the country’s largest social protection programmes, has seen debate among researchers on the extent and measurability – rather than the actual presence – of exclusions associated with biometric recognition. Limited work exists, instead, on the Aadhaar-Enabled Payment System (AePS), an infrastructure that allows using Aadhaar as ID verification to access one’s bank account and perform key operations. Placed in the domain of cash transactions, work on AePS is yet to discuss the presence and depth of the exclusions associated with it, and the infrastructural and societal drivers of more or less successful use of the service.

Ration shop, Kolar district, Karnataka, April 2018

In particular, limited research has delved into the transaction failures that have affected AePS. It is here that Malavika Raghavan, currently a PhD candidate at the LSE, has published detailed insights on the levels at which AePS transaction failures occur: her work notes three levels – consumer/BC, infrastructure, and bank – at which failure can take place, quantitatively estimating that the majority of failures occur at the infrastructural level. Noting that infrastructure-level failures, which include biometric mismatches, are prominent, Raghavan sheds light on an important side of biometric architectures: the very material, impoverishing consequences that failures embedded in the technology may yield. Costanza-Chock’s (2020) important book on Design Justice comes to mind, reminding us that injustice, rather than being incidental, can be inscribed in the very body of the technology.

Still on the topic of biometrically assisted social protection, prosperity scoring – embedded, paradigmatically, in Colombia’s System of Identification of Social Program Beneficiaries (Sisbén) – has gained a foothold as a route to designing targeted anti-poverty systems. Sisbén, a unified household vulnerability index built to identify poor households, is based on a continuous vulnerability score (0–100) assigned to each household in the light of socioeconomic variables. Calculation of the index is based on three components: a socioeconomic survey to collect data on households; a welfare measure to assess vulnerabilities; and software to perform household-level calculations.
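To make the third component tangible, here is a minimal sketch of a household-level calculation. Everything in it is a stand-in: the variables, the weights, and the direction of the scale (here, higher means more vulnerable) are illustrative assumptions, not Sisbén’s actual welfare measure:

```python
# Hypothetical survey variables and weights -- not Sisbén's real formula.
SURVEY_WEIGHTS = {
    "no_piped_water": 25.0,
    "informal_housing": 20.0,
    "no_health_coverage": 20.0,
    "high_dependency_ratio": 20.0,
    "head_of_household_unemployed": 15.0,
}

def vulnerability_score(household: dict[str, bool]) -> float:
    """Turn survey answers into a 0-100 score via a weighted sum."""
    raw = sum(weight for variable, weight in SURVEY_WEIGHTS.items()
              if household.get(variable, False))
    return min(raw, 100.0)

# Eligibility for a programme is then read off the score against a cutoff:
household = {"no_piped_water": True, "head_of_household_unemployed": True}
score = vulnerability_score(household)  # 40.0
print(score >= 35.0)  # True: qualifies under a hypothetical 35-point cutoff
```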

It was thanks to Joan Lopez, currently a PhD researcher at Tilburg University, that I encountered Sisbén, and the problematic evolution that the system known as Ingreso Solidario – Solidarity Income – displayed upon its establishment during the COVID-19 pandemic. The goal of the new subsidy programme was to identify subjects not served by other schemes: data from across government databases were hence crossed to identify needy households, resulting in opaque combinations of data from different repositories. But in the machine-led process of entitlement assignation, the combination of information used to make entitlement-assigning decisions remained blackboxed to users, leaving them uncertain about key aspects of a decision that strongly marks the prosperity scoring of households and the effects resulting from it.

Like the colleagues featured in these notes, a vast group of researchers I learn from every day are conducting doctoral or postdoctoral work. My privilege is to be in the position to remain constantly updated just by getting to hear their work, discuss it and cite it. None of these scientific advantages is made possible by positions that silo research into seniority categories. In fact, richness lies in drawing every day on an amazing pool of colleagues from across domains, geographies and fields.

My happiness at having joined the adventure of Sociotechs, a new podcast launching in August with Tejas Kotha, stems exactly from the need for siloed barriers to be broken. In a series of episodes currently in production, we speak to colleagues from NGOs, PhD programmes, academic institutions, and other domains to achieve one overarching goal: learning from each other on topics pertaining to technology and society, looking straight at the bases of data-induced harm and the routes to overcoming it. Only through mutual learning can this be achieved.

And to the senior academic from my opening story, I want to say that they were so wrong. Early career research can produce not only important results, but the results that drive the field and shape it into what it is. The field needs to do a lot more to acknowledge the importance of the innovation brought by early career work, and to begin making sense of the huge loss connected to not doing so.

Silent Injustice

A substantial part of the discourse on injustice induced through digital ID is about the exclusion of genuinely entitled users from basic-need services. While exclusion is important in the digital ID narrative, other, more silent forms of injustice also need to be seen and studied.

I am on the train to the Data Justice Conference 2023, taking place at the Data Justice Lab in Cardiff, and I simply cannot wait for two splendid days with the data justice research community. My last time attending in person was in 2018, and since then the community has grown in spectacular ways. The programme of the next two days shows an epistemic growth featuring talks on civic participation, data solidarities, and increasingly – more so than in the past – engagements with low-income contexts where injustice is compounded by structural vulnerability. I can’t wait to discover how much massive learning can take place in just two days of work!

My train trip – along the beautiful Welsh coastline – is also an occasion to reflect on the state of the field, or at least on the digital ID-centred section of the field to which my work contributes. As academic engagements with data justice have changed, and ID research has arguably become increasingly prominent in them, attention towards data-induced harm produced by digital identification and authentication has grown. Paradigmatic of this is the dissection of harms induced by digital identity schemes detailed in the report “Paving a digital road to hell: A primer on the role of the World Bank and global networks in promoting digital ID”, published in 2022 by Katelyn Cioffi, Victoria Adelmant and Christian van Veen’s team at the Center for Human Rights and Global Justice at New York University. The title of Cioffi’s presentation at the upcoming Data Justice Conference builds on the report’s findings to ask a direct question: can collective rights and remedies lead to more effective recourse for digital ID-enabled harms?

As I prepare for massive learning over the next two days, two considerations come to mind. The first is that data justice research – charged by the usual suspects with being “destructive” of tech-for-good ideas, rather than “constructive” of new ones – is increasingly proving to be the other way round, combining a focus on tackling the root causes of injustice with one that actively seeks routes to overcoming them. Cioffi’s quest for “collective rights and remedies” as a repair for harm is a powerful instantiation of this. More instances can be drawn from the recent Data Justice book written by the conference’s organisers, whose chapter 8 pointedly looks at data and collective action, and from a programme that looks forward to new solutions and fields rather than backward. Having my amazing students Alina Krogstad, Guro Handeland and Johannes Skjeie come in to present work on printer-enabled scannable codes in Malawi’s rural community clinics only reminds me of the importance of building solutions to combat unfair ID.

A second consideration is less popular. It concerns the extent to which user exclusions, sociotechnically enabled by ad hoc artefacts, actually make up the bulk of the harms induced by digital ID.

On the one hand, exclusions are a fundamental part of the problem. Quantitative works – estimating the size of the beneficiary populations that, while entitled to a certain scheme, cannot access it due to its biometric turn – offer precise estimates of how exclusion plays out. Instances regarding my main research object, the incorporation of Aadhaar in India’s Public Distribution System (PDS), reveal significant parts of the beneficiary population being biometrically barred from services they could previously access. As a recent award-winning paper by Pragyan Thapa, Devinder Thapa and Øystein Sæbø reminds us, quantification is a powerful epistemic device in ICT4D, and one that speaks loudly about the meaning of exclusion for users that biometric infrastructure has barred from programmes direly needed for household sustenance.

On the other hand, exclusions do not act alone. It is the hidden layers of injustice that I believe need watching, and that, I am convinced, this conference will be an important occasion to explore.

Street market workday, Kolkata, September 2015

One such type is informational injustice. This starts from the very fact that users may not be in any position to question how their data are treated once captured for access to vital programmes, such as humanitarian assistance or social protection schemes. When discussing Aadhaar’s data handling with beneficiaries of the national Public Distribution System back in 2018, Soumyo Das and I most often received blank looks: in the state of Karnataka, where we did our work, Aadhaar-based biometric identification was mandatory for accessing essential food commodities, so what questions could users have asked at all? It is the inability to ask questions that paves the way for further injustice, such as the purposive application of biometric technologies to a transition to a cash transfer architecture that encounters vast suspicion from users. The structural inability to ask questions, built up by putting users in the either-or position of being biometrically profiled or receiving no aid, creates the conditions for an informational injustice that is more silent and sneaky than the blatant effect of exclusions.

Design injustice, which I first encountered by meeting Sasha Costanza-Chock at this same conference in 2018, can be equally hidden behind artefactual layers. This is a conference of technologies that look largely user-friendly: computerised databases meant for the “social inclusion” of forcibly displaced persons in host societies. Digital wallets meant for “empowerment” through “financial inclusion” (if you are here, make sure you attend Margie Cheesman’s presentation on the lived realities of refugees subjected to digital wallet-based payments in cash-for-work programmes). Digitised workplaces. All technologies designed in the best of tech-for-good spirits, only to see a reversed narrative once the user – in a deconstructed-historiography fashion – is given voice to tell the real story behind the fashionably-tech architecture. This is where we hear the desperation of social protection programme users barred from access to vital food security schemes, the helplessness of workers left without information on which working days their biometrically authorised payment actually covered. This is where user-friendly design hides very precise artefact policies, which the next two days will be an important occasion to delve into.

Exclusions are an important side of injustice. I want to dedicate this conference to studying what lies behind them.

Doing Epistemic Justice in Digital ID Research

Doing digital identity research implies the responsibility of preserving epistemic justice. This means looking straight at the data-induced harm that permeates our phenomena of interest, connected to the conversion of individuals into machine-readable data. For me, it also involves being a constant learner, and one that writes ID stories straight from the voices of the digitally identified.

I am currently developing an epistemic justice protocol for the stories narrated in my forthcoming book. (If you haven’t seen me on these pages for a couple of months, well… that is what I have been up to, primarily. That and many other things in and beyond the digital ID galaxy, including freezing my eggs, for instance.)

Designing this protocol is easily one of the hardest things I have ever done as a researcher. Combating epistemic violence, which Galvan-Alvarez (2010) defines as violence exerted both on knowledge and through it, is an endeavour that involves deploying all our tools to voice narratives silenced by very precise lines of discourse. To put it with Spivak (1988), combating epistemic violence means deconstructing historiography, re-narrating history from the point of view of the marginalised voices that a mainstream narrative silences. In this interstice of time between two large digital ID conferences – ID for Africa, which ended on 25 May, and Identity Week Europe, which starts on 13 June – Spivak’s (1988) words resound in my mind every single day.

My stance is that narrating stories from the recipient’s perspective is the heart of epistemic justice in digital ID research. And this means ALL stories. Stories of happiness with digital ID, of recipients of welfare schemes who, with digital identification, have found simpler access to the programmes they needed. Voices of users who welcomed the “anti-corruption” power of biometrics. And at the same time, voices of users who became barred from accessing food and cash programmes because of glitches with digital authentication, or more radically because of issues at the stage of registration of details. This is why spending days, even months, sitting at the interface between the person and the technology – an interface that in my work has meant being in ration shops – acquires primary importance. Far away as it may be from the echelons of power, the digital ID-mediated ration shop is the space where the person is protagonist, and where justice for their voice can be restored.

Barbed wire fence, Bethlehem (Palestine), August 2012

But there is more to that. The mandate of doing epistemic justice involves the mandate of being a learner.

Not an occasional one. A learner every day. One that encounters concepts, processes them, and builds an inventory of the stories that their being-in-the-world involves. Let me give you a few examples of what this has meant for me, in this week alone.

Yesterday, colleagues Emrys Schoemaker, Aaron Martin and Keren Weitzberg published a piece titled Digital Identity and Inclusion: Tracing Technological Transitions, which discusses the intersection of digital ID with the perpetration of surveillance, exclusion and privacy breaches. The article illuminates how history has evolved from “Big ID” – an expression coined by Access Now to discuss centralised, state-level ID systems – to decentralised architectures that promise “empowerment” on the very gnoseological basis of “decentralisation” (see Cheesman’s brilliant PhD thesis on this topic). Keeping a rights perspective (the article ends by highlighting the importance of “recentring rights”), it illuminates a history whose ongoing phase features superapps, whose data assemblages allow the construction of ready-made, commercially usable profiles of digitally identified people. Bidisha Chaudhuri’s illuminating piece on Programming Welfare, breaking ground on algorithmic management in social protection, comes to mind.

Yesterday as well, I had the opportunity to learn from Katherine Wyers about the anti-LGBTQ+ law recently passed in Uganda, which introduces the death penalty or life imprisonment for “certain same-sex acts”, along with further measures criminalising people who are identified as LGBTQ+. Such measures, argues Wyers in her latest blog post, have serious implications for organisations involved in collecting, storing and sharing identity data. Wyers’ post is perhaps the strongest reminder I have had in many weeks of the importance of preserving epistemic justice. Digital ID, the object of our research, is being used to search, find and profile users, and when the politics of the artefact is brought to the extreme, it can be used to enforce laws that dehumanise people instead of granting rights. The work of Keren Weitzberg on the loss of rights caused by double registration in Kenya, the work of Eve Hayes de Kalaf on the making of citizens into foreigners in the Dominican Republic, and the work of Lucrezia Canzutti on ethnic Vietnamese people’s access to citizenship in Cambodia all come to mind.

And being a learner is inseparable from being a fieldworker, I believe.

The picture accompanying this post was taken in Aida refugee camp, in the West Bank of Palestine, in 2012. The barbed wire you see separates a land – Palestine, living under a military and settler occupation that has caused violence and death since 1948 – from an outside where Palestinian refugees have no right to be. My time in Palestine taught me to ask questions: why, how, and through which technologies is such brutal violence taking place? Only by knowing how will we ever know how to fight it.

Doing epistemic justice means being rigorous, to the finest detail. And we have a responsibility to it.

ID Design Meditations

That machines embody specific forms of power and authority won’t come as a surprise to this blog’s readers. Few examples are more renowned than Langdon Winner’s article “Do Artifacts Have Politics?”: referring to “technical arrangements as forms of order”, Winner points to the low height of the overpasses on the parkways to New York’s Long Island, noting how we are prompted to interpret such details of form as devoid of political meaning. Published in 1980, Winner’s essay shows how master builder Robert Moses, responsible for the design of the bridges, devised them with the purpose of discouraging the presence of buses, most frequently used by poor people and Black people. The classist and racist valence of Moses’ bridges grew to become a core symbol of political artefacts and, indivisibly from that, of the harm suffered by targeted people as a result of such politics.

Anti-poverty artefacts, I argued in earlier work, are just as political as Moses’ bridges. By anti-poverty artefacts I mean all artefacts that, in more or less material forms, participate in the design and implementation of anti-poverty policies. While anti-poverty policy is often formulated at the national level, specific provisions can exist, such as the Green Card Scheme formerly adopted in India’s state of Karnataka to enhance food subsidies for the poorest. National policies are, at the same time, influenced by supranational directives and by the global development agenda. All artefacts participating in anti-poverty policies – whatever their nature, and the level at which they operate – qualify as the anti-poverty artefacts that influence, in more or less direct ways, the lives of recipients.

We have already encountered a few anti-poverty artefacts through this blog’s pages. The ration card that affords people access to India’s Public Distribution System (PDS) – and whose absence denies it – is one instance. Let us keep in mind that artefacts do not need a digital, or broadly technological, component to qualify as such: a card, which embodies people’s poverty status and therefore their entitlements, plays a major role in anti-poverty policy. Aisha’s story palpably conveyed her frustration at being unable to access food rations because she lacked her card: not only does the card participate in the programme’s implementation, but it is a material, essential requisite for users to access their provisions.

My work on India’s Aadhaar has thrown me deeper into anti-poverty artefacts. The process of accessing rations through Aadhaar-based authentication, in turn linked with the state-level ration card database, is populated with them. Rather than operating in isolation, anti-poverty artefacts produce their effects in combination with each other. As noted by Jean Drèze, Aadhaar requires “multiple fragile technologies to work at the same time”: the PoS machine, the biometrics, the Internet connection, remote servers and often other elements such as the local mobile network. Beyond the fragility of the system, the point made here pertains to the concerted action of anti-poverty artefacts, which work in combination with each other in producing, or seeking to produce, their intended policy outcomes.

This brings us back to exclusions from the Aadhaar-Based Biometric Authentication System (ABBA). The question concerns the politics that systems like ABBA, which subordinate access to biometric authentication, embody within themselves. In other words, what is the politics of the anti-poverty artefacts that ABBA consists of?

Taluk Supply Office, Taliparamba (Kerala), November 2011

Answering this question requires a reminder of the two main types of error – exclusion and inclusion errors – that targeted social protection systems can incur. An exclusion error occurs when genuinely entitled users are excluded from the system. Conversely, an inclusion error occurs when users who are not entitled to a system are erroneously included in it. Owners of bogus ration cards, presented by the Justice Wadhwa Committee Report (2010) as a major problem of the PDS across states, constitute an inclusion error by allowing non-entitled users to illicitly access the system.
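The distinction is simple enough to state in a few lines of code. The sketch below is my own, not drawn from any policy document; it only fixes the vocabulary used in the rest of this post:

```python
from enum import Enum

class TargetingOutcome(Enum):
    CORRECT_INCLUSION = "entitled and served"
    CORRECT_EXCLUSION = "non-entitled and denied"
    EXCLUSION_ERROR = "entitled but denied"      # the genuinely entitled, left out
    INCLUSION_ERROR = "non-entitled but served"  # e.g. a bogus ration card

def classify(entitled: bool, served: bool) -> TargetingOutcome:
    """Classify one household against the two error types of a targeted system."""
    if entitled and served:
        return TargetingOutcome.CORRECT_INCLUSION
    if entitled and not served:
        return TargetingOutcome.EXCLUSION_ERROR
    if not entitled and served:
        return TargetingOutcome.INCLUSION_ERROR
    return TargetingOutcome.CORRECT_EXCLUSION
```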

The biometrically enabled PDS enforces a very specific policy measure. Its action is tailored to combat the inclusion errors that have plagued the system for years. Denial of access to the non-entitled is written into the functioning of the technology: a person without a ration card, or a person whom the system does not recognise as a registered user, cannot access the PDS. By virtue of this, the technology carries the message that access is to be restricted to validly identified and authenticated PDS users. While extensions can be added to the system, the heart of the technology subordinates access to successful authentication, equating its failure with non-entitlement.
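In pseudocode terms, the design politics sits in a single conditional. The function below is my own distillation, not ABBA’s actual code; the point is what is absent from it – no branch distinguishes an entitled user whose fingerprint fails to match from a non-entitled one:

```python
from typing import Callable

def authorise_ration(cardholder_id: str, fingerprint: bytes,
                     match_biometric: Callable[[str, bytes], bool]) -> bool:
    # Authorisation is conditional on successful biometric authentication.
    # `match_biometric` stands in for the real matcher, whose failure modes
    # (worn fingerprints, connectivity drops, server downtime) all surface
    # here as a plain False.
    if not match_biometric(cardholder_id, fingerprint):
        # Authentication failure and genuine non-entitlement are
        # indistinguishable at this point: both paths end in denial,
        # with no provision for the erroneously excluded.
        return False
    return True
```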

The system’s architecture answers the question on the artefact’s politics. By designing a technology that bars access to the non-recognised, but makes no provision for the erroneously excluded, the artefact prioritises the fight against inclusion errors, widely recognised in national policy as responsible for leakage. The history of the PDS gives a strong rationale for such a policy: with the transition to a targeted system in 1997, and the spike in diversion that ensued from it, securely identifying genuine beneficiaries became the priority. While this argument has been questioned on the grounds that exclusion errors were at least equally pressing, the purpose of tailoring the system to fight wrongful inclusion has shaped the technology into what it is today: a technology subordinating a universal right to the successful identification and authentication of users.

Two notes of caution must be made on this argument. First, biometric recognition (and the database of biometric details at the basis of it) is not a fundamental premise of the fight against inclusion errors. Ration cards, which remained starkly non-digital throughout their lives as artefacts, play the same role, subordinating people’s access to the system to the recognition of the person as a valid beneficiary. The fact that in the non-digital PDS recognition was based on the person’s name and photo, rather than their biometric details, does not alter the nature of the technology, which is still aimed at delivering rations only to the person who qualifies as an entitled beneficiary. The Aadhaar-based PDS only inscribes the same politics of physical ration cards into a biometric artefact, designed to be more precise and accurate.

Second, and crucially, the politics of anti-poverty artefacts in the PDS does not need Aadhaar. There are diverse technologies through which biometric authentication can be performed; an early theorisation is found here. In a study conducted in Karnataka in 2014-2015, Amit Prakash and I researched a pre-Aadhaar system that collected biometric details of PDS recipients in Karnataka, matching them with the details held in the state’s ration card database. Selected ration shops had been provided with a PoS machine augmented with a weighing scale, where ration dealers weighed the goods associated with each ration cardholder. While antecedent to ABBA, the system played exactly the same function, and was described to us by its creator, Karnataka’s Food, Civil Supplies and Consumer Affairs Secretary Harsh Gowda, as a major driver of change in the state’s fight against corruption.

While exclusions are most surely unintended in the eyes of policymakers, they are directly produced by how the technology is designed. A technology that writes an anti-inclusion-error policy into the artefact – rather than a policy that averts exclusion errors – produces the outcomes that the people featured in this blog suffer, outcomes that only the augmentation of the original artefacts with extra tools can address. Legal injustice, which makes universal rights conditional on well-functioning digital authentication and identification, is the epistemological underpinning of such artefacts.

Not a Dark Side

We read about the “dark side” of digital identity platforms, as if harm were a “side” effect produced by their design and implementation. But the “dark side” narrative appears flawed in the light of design injustices that are deeply inscribed into digital ID architectures. In this light, it seems appropriate to talk about a “dark matter” of digital ID, rather than just a “side” with incidental effects.

The terminology of “unintended consequences” is widely used with reference to the overarching research stream on digital platforms for development. Acronymised as DP4D, and widely popularised across Information Systems and cognate literatures, the DP4D orthodoxy rests on a much wider logic than that of specific, database-centred digital identity platforms. As illustrated in a recent Special Issue of the Information Systems Journal, the platforms-for-development orthodoxy is articulated across transaction platforms, used to connect demand and supply for a given product or service, and innovation platforms, used to build a set of complements on a common core. Qualitatively different in nature, transaction and innovation platforms share the generative features that, argues the orthodoxy centred on them, allow customising content to meet the needs of recipients across contexts, including sites of marginalisation where basic needs are unmet.

While openly spelling out that orthodoxy, the Special Issue in question at the same time problematises it. One does not need a focus on digital ID – and on the systemic injustices narrated through this blog’s pages – to note the shortcomings of a platforms-for-development view, especially where existing oppressive logics are inscribed in the design of the platforms. As a core example, digital labour platforms hold the promise of generating new jobs in historically jobless contexts, creating income opportunities and windows of hope for deprived individuals. But at the same time, research on such platforms points to the subalternity inscribed in their technology, which subjects workers to inhuman rating systems and pushes logics of rights deprivation that the platform, by design, enforces. The vast body of work on digital labour platforms casts a long shadow on the platforms-for-development philosophy, a shadow reflected in workers’ narrations of dehumanising experiences lived on a daily basis.

Research on digital identity reflects the same orthodoxy. After all, digital identity platforms present all the features of innovation platforms: they have a core constituted by a database storing demographic and, increasingly, biometric data; they rely on boundary resources which enable the construction of complements on that core; and, by virtue of that, they are seen as generative artefacts that can fit the needs of particular peoples and countries. But it is this same orthodoxy that crumbles through these pages: we have already seen how legal injustice is inflicted on digital ID users, subordinating rights as essential as food, shelter and protection to a well-functioning nexus of authentication and authorisation. We have also seen, however, how legal injustice – and the exclusions stemming from it – is not an unintended consequence of the system, but an integral part of its design. As noted in our work on the politics of anti-poverty artefacts, biometrics are incorporated into them with the precise logic of combating forgery, at the expense of the exclusion errors generated by the authentication-authorisation nexus.
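
To make the innovation-platform reading concrete, here is a minimal sketch of a boundary resource: a thin interface on the identity core through which third-party complements (a subsidy scheme, a bank’s KYC check) are built on the central database. The names and the interface are hypothetical illustrations, not any real platform’s API:

```python
# Hypothetical sketch: an identity "core" (central database) exposed through
# a boundary resource, on which complements are built. All names here are
# illustrative assumptions.
IDENTITY_CORE = {
    "ID-001": {"name": "A. Resident", "biometric": b"template-001"},
}

def authenticate(person_id: str, probe: bytes) -> bool:
    """Boundary resource: a yes/no authentication answer from the core."""
    record = IDENTITY_CORE.get(person_id)
    return record is not None and record["biometric"] == probe

# A complement built on the core: a benefit scheme that authorises delivery
# only if the core answers "yes" -- the authentication-authorisation nexus.
def authorise_benefit(person_id: str, probe: bytes) -> str:
    if authenticate(person_id, probe):
        return "benefit granted"
    return "benefit denied"  # an exclusion error whenever a genuine match fails

print(authorise_benefit("ID-001", b"template-001"))  # benefit granted
print(authorise_benefit("ID-001", b"smudged-scan"))  # benefit denied
```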

Cash-based transactions amid biometrisation. Bengaluru, April 2019

It is crucial to see legal injustice, and the way it turns into design injustice through the making of artefacts that incorporate it, as embedded in the features of digital ID systems that subordinate authorisation to successful authentication. In turn, successful authentication of users is predicated on enrolment, the first step through which user credentials are registered in a central identity database. The point is this: for as long as centralised ID systems make authorisation to access food, shelter, and the vital rights associated with citizenship conditional on authentication processes whose completion is uncertain (and in some cases intrinsically fragile), the logic that treats exclusion errors as the “lesser evil” compared to wrongful inclusions will be designed into the artefact itself. And that is not a “side”, whatever the platforms-for-development literature has to say. It is the matter of the system, and it resides in technology design itself.
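
The designed-in “lesser evil” logic can be made concrete. In any biometric matcher, the decision threshold arbitrates between false accepts (wrongful inclusion, such as forgery) and false rejects (exclusion of a genuine user): setting it high writes the anti-forgery priority, and its exclusion errors, directly into the artefact. A minimal sketch with made-up numbers, assuming nothing about any real system’s matcher:

```python
# Illustrative sketch: the matching threshold encodes the policy choice
# between wrongful inclusion (false accept) and exclusion (false reject).
# Scores and thresholds below are invented for illustration.
def decide(similarity_score: float, threshold: float) -> str:
    """Authorisation is conditional on the score clearing the threshold."""
    return "authorised" if similarity_score >= threshold else "denied"

# A genuine user with worn fingerprints might score, say, 0.72.
genuine_but_fragile = 0.72

# A forgery-averse design sets the threshold high...
print(decide(genuine_but_fragile, threshold=0.80))  # denied: exclusion error
# ...while an exclusion-averse design would set it lower.
print(decide(genuine_but_fragile, threshold=0.60))  # authorised
```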

My doubts on the “dark side” have several ramifications. What I especially hope my field, Information Systems, takes from this narrative is the problematisation of a term that has become all too common, all too used, all too central to research that wants to illuminate the “unintended consequences” of technologies designed for good. What if none of this was unintended – if, as is the case for digital identity platforms, the “side” was effectively designed into the inner “matter” of the technology? It is dangerous, and beyond naïve, to imagine that the “side” is a technical problem that can be solved by fixing a few glitches. Research on hunger deaths associated with biometric identification illustrates the dramatic consequences of such a narrative, and the irresponsibility of promoting naïve argumentations around it.

It is a dark matter that we must investigate, not an incidental side. Research on digital platforms for development, whatever angle it takes, cannot and should not bypass this point.

Must Identity Become Digital?

For those of us who have moved to another country – and certainly for me, having lived in different countries throughout my life – identity is a contested topic. My life in Oslo is populated by people from many cultures, with different histories that brought them here. Our identity is shaped over the course of our lives, and keeps evolving as we move to different places, acquiring experiences in each of them.

I have a personal interest in studying identity. My research is on digital identity systems, a type of system that, by matching people to their entitlements (for example subsidies or services), aims to guarantee a good life for all. Digital identity is designed to benefit people, and yet it is implicated in injustices inflicted in many parts of the world. In Norway, blockages in the main digital identity infrastructure (BankID) can cut off access to many basic services, making important transactions very difficult for users. Even though digital identity is designed to help people, it often results in the denial of important rights, as my research on India’s largest food security system has shown.

(Image: Giulio Coppi, December 8, 2021)

The injustice produced by digital identity is all the more serious for people based in the so-called Global South. In developing countries, digital identity systems that transform the person into data are spreading rapidly. This is done in the name of so-called “development”: to help developing countries build systems that can identify people securely, so that they can receive the services and the humanitarian assistance they need. But behind this promise lies a much more complex situation. Many people in need have been excluded from essential services by the very digital identity systems that promised them better access. This has resulted in a worsening of hunger, and has also been associated with starvation deaths in the Indian state of Jharkhand.

Other problems emerge with the harmful forms of surveillance that digital identity enables. In July 2015, the European register of asylum seekers (Eurodac) was made interoperable with Europol, the system that connects police authorities across Europe. As a result, asylum seekers whose identity is digitised through biometrics are automatically profiled by the police, making it difficult for them to build a new life in their host country. In addition, the exclusions enabled by national biometric systems also exist within humanitarian assistance, with the result that refugees risk not receiving the goods they need.

But must our identity become digital?

In many ways, the lived experience of identity is not digital at all. On the contrary, it is deeply material: identity develops through our life experiences, in ways that digital systems simply cannot capture. Digital systems can convert us into data, and can also produce injustice, but it is very hard for these systems to capture the materiality and the emotions that our identity contains. The sense of home I have felt since my first day in Oslo, which feels like the home where I grew up, cannot, I believe, be captured by any digital system.

Systems can convert us into data. But those data will never paint a complete picture of the emotions that permeate our lived lives.