ID Design Meditations

That machines embody specific forms of power and authority won’t come as a surprise to this blog’s readers. Few examples are more renowned than Langdon Winner’s Do Artifacts Have Politics?: referring to “technical arrangements as forms of order”, Winner points to the low height of overpasses on the parkways of New York’s Long Island, noting how we are prompted to interpret such details of form as devoid of political meaning. Published in 1980, Winner’s essay shows how master builder Robert Moses, responsible for the design of those bridges, devised them with the purpose of discouraging the presence of buses, most frequently used by poor people and Blacks. The classist and racist valence of Moses’ bridges grew to become a core symbol of political artefacts and, inseparably from that, of the harm suffered by targeted people as a result of such politics.

Anti-poverty artefacts, I argued in earlier work, are just as political as Moses’ bridges. By anti-poverty artefacts I mean all artefacts that, in more or less material forms, participate in the design and implementation of anti-poverty policies. While anti-poverty policy is often formulated at the national level, specific provisions can also exist at lower levels, such as the Green Card Scheme formerly adopted in India’s state of Karnataka to enhance food subsidies for the poorest. National policies are, at the same time, influenced by supranational directives and by the global development agenda. All artefacts participating in anti-poverty policies – whatever their nature, and the level at which they operate – qualify as anti-poverty artefacts, and influence, in more or less direct ways, the lives of recipients.

We have already encountered a few anti-poverty artefacts through this blog’s pages. The ration card that affords people’s access to India’s Public Distribution System (PDS) – and whose absence denies it – is an instance of that. Let us keep in mind that artefacts do not need a digital, or more broadly technological, component to qualify as such: a card, which embodies people’s poverty status and therefore their entitlements, plays a major role in anti-poverty policy. Aisha’s story palpably conveyed her frustration at being unable to access food rations because she lacks her card: not only does the card participate in the programme’s implementation, but it is a material, essential prerequisite for users to access their provisions.

My work on India’s Aadhaar has drawn me deeper into anti-poverty artefacts. The process of accessing rations through Aadhaar-based authentication, in turn linked to the state-level ration card database, is populated with them. Rather than operating in isolation, anti-poverty artefacts exert their effects in combination with each other. As noted by Jean Drèze, Aadhaar requires “multiple fragile technologies to work at the same time”: the PoS machine, the biometrics, the internet connection, remote servers and often other elements such as the local mobile network. Beyond the fragility of the system, the point made here pertains to the concerted action behind anti-poverty artefacts, which work together in producing, or seeking to produce, their intended policy outcomes.

This brings us back to exclusions from the Aadhaar-Based Biometric Authentication System (ABBA). The question concerns the politics that systems of ABBA’s type – systems which subordinate access to biometric authentication – embody within themselves. In other words, what is the politics of the anti-poverty artefacts that ABBA consists of?

Taluk Supply Office, Taliparamba (Kerala), November 2011

Answering this question requires a reminder of the two main types of errors – exclusion and inclusion errors – that targeted social protection systems can incur. An exclusion error occurs when genuinely entitled users are excluded from the system. Conversely, an inclusion error occurs when users who are not entitled to a system are erroneously included in it. Owners of bogus ration cards, presented by the Justice Wadhwa Committee Report (2010) as a major problem of the PDS across states, cause an inclusion error by allowing non-entitled users to access the system illicitly.

The biometrically enabled PDS enforces a very specific policy measure. Its action is tailored to combat the inclusion errors that have plagued the system for years. Denial of access to the non-entitled is written into the functioning of the technology: a person without a ration card, or a person whom the system does not recognise as a registered user, cannot access the PDS. By virtue of this, the technology carries the message that access is to be restricted to validly identified and authenticated PDS users. While extensions can be added to the system, the heart of the technology subordinates access to successful authentication, equating its failure with non-entitlement.
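To make the point concrete, here is a minimal sketch – in Python, with entirely hypothetical names and data, not the actual ABBA code – of the authorisation logic described above, where access is granted only on successful authentication and any failure, whatever its cause, collapses into denial.

```python
# Minimal sketch of the authentication-authorisation nexus described above.
# All names and data (RATION_CARD_DB, authenticate_fingerprint, ...) are
# hypothetical illustrations, not the actual ABBA implementation.

RATION_CARD_DB = {
    "KA-0042": {"entitled": True, "biometric_template": "template-A"},
}

def authenticate_fingerprint(card_id: str, fingerprint_scan: str) -> bool:
    """Stand-in for biometric matching against the central database."""
    record = RATION_CARD_DB.get(card_id)
    return bool(record) and record["biometric_template"] == fingerprint_scan

def authorise_ration(card_id: str, fingerprint_scan: str) -> str:
    """Authorisation is subordinated to authentication: failure means denial."""
    if authenticate_fingerprint(card_id, fingerprint_scan):
        return "DISPENSE_RATION"
    # The single failure branch collapses very different situations:
    # a forged card, a genuine beneficiary with a worn fingerprint, or a
    # server/connectivity outage all end in the same outcome – denial.
    return "DENY"
```

Whatever the actual implementation looks like, the design fights inclusion errors by construction, but offers no branch through which an entitled yet unrecognised person can still be served.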

The system’s architecture answers the question about the artefacts’ politics. Designed to bar access to the non-recognised while making no provision for the erroneously excluded, the artefact prioritises the fight against inclusion errors, widely recognised in national policy as responsible for leakage. The history of the PDS gives a strong rationale for such a policy: with the transition to a targeted system in 1997, and the spike in diversion that ensued from it, securely identifying genuine beneficiaries became a priority. While this argument has been questioned on the grounds that exclusion errors were at least equally pressing, the purpose of tailoring the system to fighting wrongful inclusion has shaped the technology into what it is today: a technology subordinating a universal right to the successful identification and authentication of users.

Two notes of caution must be added to this argument. First, biometric recognition (and the database of biometric details underpinning it) is not a fundamental premise of the fight against inclusion errors. Ration cards, which remained starkly non-digital throughout their lives as artefacts, play the same role, subordinating people’s access to the system to recognition of the person as a valid beneficiary. The fact that in the non-digital PDS recognition was based on the person’s name and photo, rather than their biometric details, does not alter the nature of the technology, which is still aimed at delivering rations only to the person who qualifies as an entitled beneficiary. The Aadhaar-based PDS merely inscribes the same politics of the physical ration card into a biometric artefact, intended to be more precise and accurate.

Second, and crucially, the politics of anti-poverty artefacts in the PDS does not need Aadhaar. There are diverse technologies through which biometric authentication can be performed; an early theorisation of these can be found here. In a study conducted in Karnataka in 2014-2015, Amit Prakash and I researched a pre-Aadhaar system that collected biometric details of PDS recipients in Karnataka, matching them with the details held in the state’s ration card database. Selected ration shops had been provided with a PoS machine augmented with a weighing scale, on which ration dealers weighed the goods allotted to each ration cardholder. While antecedent to ABBA, the system played exactly the same function and was described to us by its creator, Karnataka’s Food, Civil Supplies and Consumer Affairs Secretary Harsh Gowda, as a major driver of change in the state’s fight against corruption.

While exclusions are surely unintended in the eyes of policymakers, they are directly produced by how the technology is designed. A technology that writes an anti-inclusion-error policy into the artefact – rather than a policy that averts exclusion errors – produces the outcomes that people featured in this blog suffer, outcomes that only augmentation of the original artefacts with extra tools can address. Legal injustice, which makes a universal right conditional, is the epistemological underpinning of such artefacts.

Not a Dark Side

We read about the “dark side” of digital identity platforms, as if harms were mere “side” effects produced by their design and implementation. But the “dark side” narrative appears flawed in the light of design injustices that are deeply inscribed into digital ID architectures. It seems more appropriate, then, to talk about a “dark matter” of digital ID, rather than just a “side” with incidental effects.

The terminology of “unintended consequences” is widely used with reference to the overarching research stream on digital platforms for development. Acronymised as DP4D, and widely popularised across Information Systems and cognate literatures, the DP4D orthodoxy rests on a much wider logic than that of specific, database-centred digital identity platforms. Illustrated in a recent Special Issue of the Information Systems Journal, the platforms-for-development orthodoxy is articulated across transaction platforms, used to connect demand and supply for a given product or service, and innovation platforms, used to build a set of complements on a common core. Qualitatively different in nature, transaction and innovation platforms share the generative features that, the orthodoxy centred on them argues, allow content to be customised to meet the needs of recipients across contexts, including sites of marginalisation where basic needs are unmet.

While openly spelling out that orthodoxy, the Special Issue in question also problematises it. One does not need a focus on digital ID – and on the systemic injustices narrated through this blog’s pages – to note the shortcomings of a platforms-for-development view, especially where existing oppressive logics are inscribed in the design of the platforms. As a core example, digital labour platforms hold the promise of generating new jobs in historically jobless contexts, creating income opportunities and windows of hope for deprived individuals. But at the same time, research on such platforms points to the subalternity inscribed in their technology, which subjects workers to inhuman rating systems and pushes logics of rights deprivation that the platform, by design, enforces. The vast body of work on digital labour platforms casts a long shadow on platforms-for-development, a shadow reflected in workers’ accounts of dehumanising experiences lived on a daily basis.

Research on digital identity reflects the same orthodoxy. After all, digital identity platforms present all the features of innovation platforms: they have a core constituted by a database storing demographic and, increasingly, biometric data; they rely on boundary resources which enable the construction of complements on that core, and by virtue of that, they are seen as generative artefacts that can fit the needs of particular peoples and countries. But it is the same orthodoxy that crumbles through these pages: we have already seen how legal injustice is inflicted on digital ID users, subordinating rights as essential as food, shelter and protection to a well-functioning nexus of authentication and authorisation. We have also seen, however, how legal injustice – and the exclusions stemming from it – is not an unintended consequence of the system, but an integral part of its design. As noted in our work on the politics of anti-poverty artefacts, biometrics are incorporated into them with the precise logic of combating forgery at the expense of the exclusion errors generated by the authentication-authorisation nexus.

Cash-based transactions within biometrisation. Bengaluru, April 2019

It is crucial to see legal injustice, and the way it turns into design injustice through the making of artefacts that incorporate it, as embedded in the features of digital ID systems that subordinate authorisation to successful authentication. In turn, successful authentication of users is predicated on registration, the first step, through which user credentials are entered into a central identity database. The point is this: for as long as centralised ID systems make authorisation to access food, shelter and the vital rights associated with citizenship conditional on authentication processes whose completion is uncertain (and in some cases intrinsically fragile), the logic that treats exclusion errors as the “lesser evil” compared to wrongful inclusions will be designed into the artefact itself. And that is not a “side”, whatever the platforms-for-development literature may have to say. It is the matter of the system, which resides in technology design itself.

My doubts on the “dark side” have several ramifications. What I especially hope my field, Information Systems, takes from this narrative is the problematisation of a term that has become all too common, all too used, all too central to research that wants to illuminate the “unintended consequences” of technologies designed for good. And what if none of this was unintended – if, as is the case for digital identity platforms, the “side” was effectively designed into the inner “matter” of the technology? It is dangerous, and beyond naïve, to imagine that the “side” may be a technical problem to be solved by fixing a few glitches. Research on hunger deaths associated with biometric identification illustrates the dramatic consequences of such a narrative, and the irresponsibility of promoting naïve argumentations around it.

It is a dark matter that we must investigate, not an incidental side. Research on platforms-for-development, whatever angle it takes, cannot and should not bypass this point.

Must Identity Become Digital?

For those of us who have moved to another country – and certainly for me, who has lived in different countries throughout my life – identity is a contested topic. My life in Oslo is populated by people from many cultures, with different stories that brought them here. Our identity is shaped over the course of our lives, and keeps evolving as we move to different places, acquiring experiences in each of them.

I have a personal interest in studying identity. My research is on digital identity systems, a type of system that, by matching people to their entitlements (for example subsidies or services), aims to guarantee a good life for all. Digital identity is designed to benefit people, and yet it is guilty of injustices inflicted in many parts of the world. In Norway, blockages in the main digital identity infrastructure (BankID) can cut off access to many basic services, making important transactions very difficult for users. Even though digital identity is designed to help people, it often results in the denial of important rights, as my research on India’s largest food security system has shown.

(Image: Giulio Coppi, December 8, 2021)

The injustice produced by digital identity is all the more severe for people based in the so-called Global South. In developing countries, digital identity systems that transform the person into data are spreading rapidly. This is done in the name of so-called “development”, to help developing countries build systems that can identify people securely, so that they can receive the services and humanitarian assistance they need. But behind this promise lies a much more complex situation. Many people in need have been excluded from essential services by the very digital identity systems that promised them better access to those services. This has resulted in a worsening of hunger, and has also been associated with hunger deaths in the Indian state of Jharkhand.

Other problems arise with the harmful forms of surveillance that digital identity enables. In July 2015, the European register of asylum seekers (Eurodac) was made interoperable with Europol, the system that connects police authorities across Europe. As a result, asylum seekers whose identity is digitised through biometrics are automatically profiled by the police, making it difficult for them to build a new life in the host country. In addition, the exclusions enabled by national biometric systems also exist within humanitarian assistance, with the result that refugees risk not receiving the goods they need.

But must our identity become digital?

In many ways, the lived experience of identity is not digital at all. On the contrary, it is deeply material: identity develops through our life experiences, in ways that digital systems simply cannot capture. Digital systems can convert us into data and can also produce injustice, but it is very hard for these systems to capture the materiality and the feelings that our identity contains. The feeling of home I have had since my first day in Oslo – a city that feels like the home where I grew up – cannot, I believe, be captured by any digital system.

Systems can convert us into data. But those data will never paint a full picture of the feelings that permeate our lived lives.

Not a Better World

We are nowhere near “making a better world” with digital identity. But research on resistance to it, and especially to the centralised model of digital identification and authentication, offers routes to imagine and build up such a world.

My research field, Information and Communication Technology for Development (ICT4D), has many seminal works to rely on. Dating back to the late 1980s, but with works on computing in low-income countries published from the 1960s onwards, the field was born with the underlying assumption that ICTs – a novel object at the time of the field’s foundation – afforded the potential to generate “progress” and prosperity in less wealthy regions of the world. The field’s name effectively includes a finalistic term – it is about ICTs for development, not merely ICTs in a context where “development” would arguably be beneficial, at least within the technology-for-development logic of the early days. The enthusiastic undertones with which the field was born have led ICT4D research to ask: are we making a better world with ICTs?

In the early days of the field, views on the making of a “better world” were supported by optimistic, but at the same time well-contextualised, stances on what technology could do “for” development. Trying to elicit the unspoken but core assumptions of early ICT4D yields at least three statements. First, the idea of “development” – now widely contested by virtue of the colonial undertones its genealogy carries – was assumed to be intrinsically good, rather than generating asymmetric benefits and harms for its intended subjects. Second, the idea that ICTs were (or at least could be) carriers of such good “development” was dominant, and informed the actions of researchers and practitioners as the new tech-for-development philosophy picked up. Third, the patronising term “developing countries” was taken as acceptable and readily usable, with little or no problematisation of its meaning. All of this sat within a technology-transfer logic whereby “developing” countries could be, for the good of all, “modernised” through ICTs.

The ICT4D of today, however, looks very different. Thirty-plus years of evolution of the discipline, with many stories of technology transfer and, increasingly, of technology being embedded in country politics and citizens’ work, have illuminated the extent to which ICTs could “make a better world” for their intended beneficiaries. Stories of digitally induced harm – described by Heeks (2022) as adverse digital incorporation, where technology hurts rather than benefits its users – contextualised the shift from a logic centred on providing ICTs to the non-connected to one aimed at protecting the connected from the harm that digital connectivity causes them. From a field centred on “bridging the digital divide”, we have moved ever closer to becoming a field centred on combating the injustices produced, and perpetuated, against already vulnerable people through digitalisation.

Critical research on digital identity belongs to this new, critically informed ICT4D. Or rather, it participates in it, illuminating the injustices produced by digital identity systems and the routes of resistance developed in response to them.

Digital identity research is permeated with stories of harm. To the point that frameworks on the theoretical link between digital identity and human development are used to illustrate the ruptures that such a link meets in practice. To the point that digital identity schemes have been shown, in a recent report by the Centre for Human Rights and Global Justice at New York University, to be linked to human rights violations, defying and effectively countering the identity-for-development logic that informs national and supranational digital identity schemes. Sitting at the interface of multiple fields, and drawing on different theories to make sense of people’s experience of identification, digital identity research is a critical memorandum of how we are not, in many ways, making a better world with ICTs.

But it is also a memorandum of the opposite. That is, of the shape of the world that can come.

Community health clinic, Malawi, January 2023

Digital identity research is not only about oppression. A large, continuously expanding part of it is about what can be built against systems that reduce people to machine-readable data, systems which subordinate people’s universal rights to enrolment in a biometric or demographic database. I have just returned from Malawi, where scannable barcodes are being proposed to help Health Surveillance Assistants (HSAs) independently retrieve patient histories. Back in India, I have listened to the story of Tamil Nadu, a state where smart cards, usable by multiple household members, have been seen to bypass Aadhaar’s biometric recognition. Resistance does not always take the form of protest, as it recently did against biometric SIM linking in the Philippines. Resistance starts from small acts of solidarity, and from technologies that, more or less directly, challenge the centralised model of digital identification and authentication.

So it is not, in essence, a “better world” that we are making with digital identity. Exclusions and undue surveillance, culminating in human rights violations that can be dangerous and even deadly, are very much produced by it. But ICT4D research is not only about injustice. It is also about resistance, at least in equal measure. And it is through that resistance that the much longed-for “better world” can be made.

Misplaced Research

Today I tell a tale of misplaced research. “Misplaced” in the etymological sense of “conducted in the wrong place”, as a Reviewer once wrote. As it happens, from that perspective my whole research is “misplaced”, as it takes place at the very interface where people meet the technologies governing essential livelihoods. And as it happens, my “misplacement” will keep going.

“Your research is misplaced”, wrote the Reviewer.

(One for fellow academics: believe it or not, this was not Reviewer 2, no. It was the unsuspected, cosy, otherwise kind and constructive Reviewer 1).

They had a point, and it was an etymological one. In their view, my research was not “misplaced” metaphorically, but in a very physical way. My paper was on the encounter between people and technology in India’s Public Distribution System, specifically in the ration shops where people meet the state through the technology that provides (or denies) their food rations. Here in Malawi, the project I am a part of studies patient records inside the community clinics, where the patient physically encounters the community health worker who deals with their case.

Road to Mangamba, Malawi, January 2023

I cannot hide that all my research is deeply inspired by Corbridge et al.’s (2005) book Seeing the State. As the book argues, “the state” is not experienced at an abstract level. It is lived directly, concretely, in its direct manifestations: it is the policeman patrolling the streets, violently evicting homeless people in the absence of a proof of address. It is the government officer checking documents for access to a given benefit, the ration dealer enabling, or denying, people’s access to food rations. Nothing is abstract in “the State”. Encounters with it, through which images of it are formed, are very real and material, conditioning important aspects of people’s lives.

But the Reviewer expressed a clear judgment. My research is misplaced.

It is so, they wrote, because it does not take place at the point where policy decisions are made. And if you study biometrics in a large food security system, you kind of need to sit at, or at least research, the decision-making point where food policies are shaped. Research that takes place at the last mile, where the user encounters the system in the form of food rations, ration dealers, or community health workers, can only give you a partial, even distorted view of reality. No social protection research, they argued, should take a single actor into account alone. It would have been good to go to Delhi, Reviewer 1 concluded, to see what the upper echelons of the system actually say about it.

And now eight years later, I politely say it. Reviewer 1 was wrong.

As social protection researchers, we investigate technologies that are deeply embedded in human lives and livelihoods. The “last mile”, as they called it (and as important development studies research calls it too), is anything but “last” when it comes to people’s experience. “Last” as it may be, it is that “mile” that informs people’s contact with service providers: it is here that vital entitlements of food or cash are given, or denied due to authentication failure. It is here that health services are accessed, that medical personnel handle people’s conditions or those of their loved ones. In the last mile, lived experience happens: that of a food programme that delivers rations, a health service that cures residents, a humanitarian programme that assists displaced people. Here at the last mile, the individual interfaces in person with “the technology” that the higher echelons have the power to decide on.

True, we may miss something by sitting at the interface. We may lose some of the action at the upper decision-making layers, while spending long days in the field figuring out how people really encounter state-mediating technologies, how they relate to them and experience their ability to guarantee, or deny, access to crucial livelihood-generating services. We could move our research to the upper echelons instead, and take an owner’s perspective on the platformised systems through which social protection programmes are increasingly provided.

We could, I could. But I don’t, because my research is the interface. In Malawi as in India, as in all places I go, my work is informed by how people live the technology, and by how “the State” becomes real out of its lived human-technology manifestations. Only such encounters, structured by the technologies of rule that govern people’s interaction with providers, give the state a tangible, researchable physical manifestation.

Be it right or wrong, I can’t do it in any other way.

Registration Matters

Purchasing and activating my SIM card in Malawi was a highly structured, computerised process. My conversation with the registering agent reminded me of the importance of understanding registration, and not just authentication, when studying digital identity policies.

Two things struck me about the process of registering a SIM card in Zomba, Malawi. Malawi is one of the 155 countries that, as of January 2020, had mandatory SIM registration laws: providing personal information and a valid identity document is required to purchase and activate a SIM card for mobile services. Aware of this, and of the problematic consequences of mandatory SIM registration for the treatment of user data and the risk of exclusion from services, I found two elements of the process experienced here notable.

First, even for a foreigner whose data are not checked for correspondence to a national ID database, the process is extremely structured. Ahmed, who registered my SIM card in his streetside kiosk in Zomba, shows me every step of it: he first needs my personal details, including a document identification number (the number of my Italian driving licence is recognised as an “invalid number”, whereas that of my Italian passport allows us to move to the next step). Only at that point comes the request for a picture of the document, a common requirement in SIM registration. But the fields to be completed are rigorous and do not allow free manual entry, where a driving licence number, valid or not, might have fitted. A check of the document’s conformity to a valid passport is designed into the standards themselves, very different from the paper forms that previous SIM registrations had accustomed me to.
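As a rough illustration of this kind of structured validation – a sketch under assumptions, since the patterns and field names below are invented rather than taken from Airtel’s or Mpamba’s actual forms – the registration software can be imagined as accepting only document numbers that match the expected format for the declared document type:

```python
import re

# Hypothetical document-number validation, sketching the structured checks
# described above. Patterns and document types are illustrative only.
DOCUMENT_PATTERNS = {
    "passport": re.compile(r"^[A-Z]{2}\d{7}$"),      # invented passport format
    "national_id": re.compile(r"^[A-Z0-9]{8}$"),     # invented national ID format
}

def validate_document_number(doc_type: str, number: str) -> bool:
    """Return True only if the number fits the format expected for doc_type."""
    pattern = DOCUMENT_PATTERNS.get(doc_type)
    if pattern is None:
        # A document type not listed (e.g. a foreign driving licence)
        # never reaches the next step of the registration workflow.
        return False
    return bool(pattern.match(number.strip().upper()))

print(validate_document_number("passport", "YA1234567"))        # True in this sketch
print(validate_document_number("driving_licence", "MI1234567")) # False: type not accepted
```

Whether or not the actual kiosk software works this way, the point stands: the check is built into the form itself, not left to the registering agent’s discretion.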

Airtime, data and SIM registration kiosks, Zomba, Malawi, January 2023

Second, the importance of SIM registration is remarkable. Kiosks of Airtel and Mpamba are all over the streets, and Ahmed tells me about the frustration he encounters when, having followed every step of the procedure for activating a client’s SIM, it ultimately does not work (at the first attempt, even my phone shows an “unknown error” that seemingly makes activation impossible, though a second attempt corrects it). Unknown errors, says Ahmed, are unpredictable and can pop up during registration, effectively making SIM activation impossible. The uncertainty connected to activation – reminiscent of that described, among others, by Chaudhuri (2021) on biometric delivery of food rations in India’s Jharkhand – joins a discourse interrogating the consequences of insecurity in practices that, like the ability to access mobile services here, are crucial to the basic operations of daily life.

I have just arrived here. A whole new discovery to begin.

But my encounter with Ahmed, and with my new SIM used as a hotspot as I write this post, made me think of something that I sometimes forget when studying digital ID. A lot of our research – certainly the focal point of my research since 2014 – is on authentication practices, defined as the process of asserting an identity previously established during identification. Authentication at the point of access – to government services, social protection, humanitarian provisions – is crucial because it determines whether or not a person is authorised to receive a given service, which can be as vital as food rations, social cash or emergency assistance to groups in need. In previous work I have argued that the authentication-authorisation nexus is the crucial point where legal injustice occurs, and the point whose breakage deprives individuals of essential rights.

And here comes the self-critical point. Such an argument risks excluding registration. Or at least, neglecting the effects generated not by failed authentication with a system, but by the bare inability to register for it. Directly connected to that argument are the consequences of making registration, and not authentication, conditional on identification against a national ID database. Consequences that have shown the risks connected to the SIM-ID link, and the consequent fear among affected users.

With SIM registration protests having recently erupted in the Philippines, and growing concerns raised about the use of biometrics for SIM card registries, the primary step of digital identity – the very registration of an individual’s details in a database subject to varying protection policies – reminds me that all our research, especially research focused on services being accessed or denied at the point of authentication, is fundamentally predicated on registration processes that we cannot neglect. I make a note to self to dedicate more explicit attention to them in my next projects.

Encountering the State

The ration shop is where the state is physically encountered by users of India’s Public Distribution System, the country’s largest food security programme. In this article I reflect on being in the ration shop, encountering the state in the form of the provision, or denial, of goods that takes place within this crucial interface.

“Why do you spend so much time in ration shops?”

This is a question I got many times during fieldwork. Since my early, pre-Aadhaar studies of the Public Distribution System, India’s largest food security scheme, my questions have concerned what happens when an essential anti-poverty scheme is computerised. That is, when a food security programme that has long been paper-based, centred on transactions occurring through physical documents – called ration cards and enabling users to collect highly subsidised food rations – becomes digital, in both its front-end and back-end components. One fact is that much was happening at the back end well before biometric identification was introduced, across multiple states, in the ration shops where food rations are collected.

In Kerala, where my 2011-2012 fieldwork took place, a software package called TETRAPDS (Targeted, Efficient, Transparent Rationing and Allocation Public Distribution System) was conceived in the first decade of the 2000s. Far from being centred on the front end, where the PDS user encounters the ration dealer, TETRAPDS consisted of three back-end modules and only one front-end module. The three back-end modules covered the phases of ration card processing, allocation of goods, and monitoring of ration shop inspections. All of these were crucial for the management of the country’s largest food security scheme. More specifically:

– A Ration Card Management System (RCMS) was a workflow-based application through which users could apply for a ration card (or for a change in their existing, household-based card, for example when adding a new family member or forming a new household). Upon reception of the online application, RCMS would have it processed by the Collector of Rationing and delivered through the local Taluk Supply Office, the state bureau in charge of ration card delivery. While it had accumulated a large backlog by 2011, RCMS was designed to automate one of the most important processes in the PDS: the one through which users received the physical document that enabled them to access food rations.

– An Allocation of Commodities module allowed the Collector of Rationing to ensure the correct allocation of goods to ration shops across the state, based on their theoretical requirement. This module was based on Allocation 2.0, an application for the allocation of PDS goods to ration shops across the 14 districts of Kerala. With a cardholder database revealing the number of households registered with each ration dealer, the allocation module was designed to solve a dilemma that deeply affected the PDS: namely, how to distribute commodities in such a way that all users of the PDS in the state would be served (a minimal sketch of this allocation logic follows this list). Years later, the Aadhaar-based PDS would transform this function through calculations enabled by biometric point-of-sale machines.

– An Inspection Monitoring System registered the outcomes of controls made by rationing inspectors, officials in charge of checking the regularity of sales conducted in the ration shops. With issues of leakage to non-poor households largely affecting the scheme, keeping a record of the activity of rationing inspectors was important to the state’s programme management. While not continuously implemented across the state, the Inspection Monitoring module revealed the importance of building a control component into the system, a control that subsequent, Aadhaar-based versions of the PDS linked to the amount of goods sold monthly by each ration shop to Aadhaar-registered users.
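The allocation dilemma mentioned in the Allocation of Commodities item above can be rendered as a minimal sketch – with hypothetical quantities and entitlement figures, not Kerala’s actual norms – of how a district’s stock might be split across ration shops in proportion to their registered cardholders:

```python
# Hypothetical allocation sketch: split a district's monthly stock across
# ration shops in proportion to registered households, capped at each shop's
# theoretical requirement. All figures are illustrative, not Kerala's norms.

MONTHLY_ENTITLEMENT_KG = 25  # invented per-household entitlement

def allocate_to_shops(district_stock_kg: float, households_per_shop: dict) -> dict:
    total_households = sum(households_per_shop.values()) or 1
    allocation = {}
    for shop, households in households_per_shop.items():
        requirement = households * MONTHLY_ENTITLEMENT_KG       # theoretical requirement
        share = district_stock_kg * households / total_households
        allocation[shop] = min(requirement, share)
    return allocation

print(allocate_to_shops(80_000, {"FPS-101": 1200, "FPS-102": 800, "FPS-103": 2000}))
# e.g. {'FPS-101': 24000.0, 'FPS-102': 16000.0, 'FPS-103': 40000.0}
```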

There was indeed a front-end module, called WebPDS. This was a website for the Food and Civil Supply Department to communicate with PDS beneficiaries. While providing information on the scheme, PDS food prices and other points relevant to users, this module was not “front-end” in the sense of operating in the ration shops where rations are collected. It was so in the sense of providing relevant information to users, information that, at a time preceding widespread mobile governance, was largely accessed through the public-private telecentres spread across the state.

Food supplies, Taliparamba municipality, Kerala, November 2011

But then, with such a large back-end machinery, why spend so much time in ration shops?

The answer lies in how image formation – that is, how users form their own image of the state behind anti-poverty schemes – takes place. Inspiring this view is Corbridge et al.’s (2005) philosophy of “seeing the state” through encounters with it, encounters that, far from being abstract, are materialised in the people, bureaus and institutions that represent it. In the Kerala PDS, “the state” is not the abstract entity behind the high subsidisation of goods to below-poverty-line users. It is the ration dealer, the material encounter with them, the physicality of the shop where people stand in line to collect their monthly rations. When the biometric PDS came, and with it the exclusions denounced by large parts of the literature, the state remained entrenched in the materiality of encounters enabling or disabling people’s access to vital commodities.

And this is why I sit so long in ration shops. Because it is here, in the materiality of encounters with ration dealers and the machinery that comes with PDS transactions, that the state is encountered. Whatever discussion is made of the abstraction behind it, of the subsidy-giving or subsidy-denying entity that it represents, it is in the materiality of ration shop transactions that the state is met. And for us digital identity researchers, preoccupied with the large infrastructures that precede the state-citizen encounter, it is all too easy to neglect this human dimension: a dimension that is, however, the heart of the state-citizen encounter. And the one in which to sit, for as long as needed, to understand how this encounter is made and shaped by technologies and the politics behind them.

Behind Biometrics

What is the politics of anti-poverty artefacts? In this article I examine it with examples from the early history of the biometric Public Distribution System (PDS) in India, tracing links between today’s Aadhaar-based system and the policy of conditional inclusion that the structural adjustment of the 1990s imposed.

I have recently participated in an extremely enriching event, Digital Urban Infrastructures, organised by colleagues at the University of Twente. A colleague on my panel asked an important question: what is behind biometrics in anti-poverty programmes?

Based on my research on the datafication of India’s Public Distribution System (PDS), I had just argued that biometric recognition of users inscribes a very clear logic in anti-poverty schemes. It is a logic centred on combating inclusion errors, meaning the erroneous inclusion of non-entitled users in anti-poverty schemes, rather than exclusion errors, meaning the erroneous exclusion of entitled users. Such a logic inspires the biometric transformation of India’s PDS, where Aadhaar-based recognition denies access to non-entitled users, but takes no action to give access to those who, while entitled, are erroneously excluded from the system. While such a move responds to the fiscal-burden concerns that have long surrounded the PDS, its consequences are reflected in the perpetuation of stories of user exclusion, told in this blog and in econometric studies of the Aadhaar-based PDS.

But then, what is behind biometrics in anti-poverty schemes, and behind their consequences?

As information systems research has long argued, policy choices are deeply inscribed in technology. In an older piece of research with Amit Prakash, we made this point: anti-poverty artefacts are shaped by the politics that lies behind them, and enact policy decisions that deeply and directly affect their users. In our work, the “politics of anti-poverty artefacts” was illustrated through a pre-Aadhaar case of point-of-sale machines in the state of Karnataka, where a weighing scale connected to speakers announced exactly how much was being sold at each transaction. The fact that we found the point-of-sale machine speakers muted in most of the ration shops we visited spoke to ration dealers’ reaction to the policy inscribed in the machines; at the same time, the presence of a paper register, to be used by ration dealers when the point-of-sale machine did not work, revealed the intention to still give rations to those users who were entitled but not recognised.

PDS transaction through the weighing point of sale machine, Bengaluru, August 2014

The question from the colleague at Twente led me to look back into the anti-poverty policy of the PDS, and note how a policy that prioritises the fight against inclusion errors – as opposed to exclusion ones – finds its origins well before Aadhaar. As early as the 1990s, India suffered a fiscal crisis that turned the PDS, back then a universal food security programme, into a narrowly targeted one, a shift induced by the stringent recommendations of World Bank advisors. Doing away with a universal policy, with the sole exception of the state of Tamil Nadu, resulted in a distinction of subsidy between Below-Poverty-Line (BPL) residents and Above-Poverty-Line (APL) ones, for whom only a meagre subsidy, approaching the market price, remained in place. The institution in 2000 of Antyodaya Anna Yojana (AAY), involving larger subsidies for the poorest of the poor, complemented a policy that effectively made remaining in the PDS conditional on proven poverty status.

This policy, at the same time, caused severe harm to users – with the exclusion of needy households classified as non-poor – and to ration dealers, who suddenly had to run their ration shops with a massively reduced customer base. It is the same policy that, in the state of Kerala alone, reduced PDS offtake – meaning the combined quantity of goods collected from ration shops in the state – from 4.64 tonnes to 1.71 between 1997 and 2001. And it is the same policy that originated the shrinking of PDS ration dealers’ customer base, leading to the wave of ration dealer suicides whose memory was still very vivid during the fieldwork I started in Kerala in 2010. What the Aadhaar system does today, subordinating access to the PDS to the successful recognition of users, is effectively crystallising the same prioritisation of the fight against inclusion errors that structural adjustment generated in the 1990s, reproducing, in changed times, the same dynamics of user exclusion and of blaming ration dealers for diversion that structural adjustment first set in motion.

As Langdon Winner argued as early as 1986, artefacts have politics. And anti-poverty artefacts have a politics that, if not carefully tailored and if oblivious to user needs, may severely harm users and participants in the making of anti-poverty schemes. In looking at what is behind biometrics in anti-poverty programmes, we need to look at the policy choices inscribed in them, and at the sheer effects that these have on people.

ID Justice as Fairness?

Inspired by Chapter 8 of the new book “Data Justice”, I read Rawls’ (1971) work on “Justice as Fairness” in the light of digital ID, and draw on it to imagine what a “fair ID” may look like.

Our research on the incorporation of Aadhaar, the world’s largest digital identity platform, into India’s Public Distribution System has revealed three forms of data injustice against users. A legal injustice occurs when fundamental rights, such as the right to food, become conditional on successful authentication with digital identity systems, authentication whose denial results in exclusion from essential services. An informational injustice is produced when information on how digital identity data are used is hidden from users, or worse, when users are placed in a condition where they are unable to enquire about such information. Finally, design-related injustice is perpetrated directly through technology design, and harms users through the technical features of digital identity systems.

But if these injustices play out in the digital identity world, what does it take to achieve forms of “fair ID” that overcome unjust practices against users?

Food supplies, Taliparamba municipality, Kerala, November 2011

Reading Chapter 8 of the new book “Data Justice” by Lina Dencik, Arne Hintz, Joanna Redden and Emiliano Treré inspired me to go back to John Rawls’ (1971) essay on “Justice as Fairness”, where the relation between the two concepts is problematised. While notions of “justice” and “fairness” are often used as synonyms, such an interchangeability of the two terms is mistaken, Rawls argues. He notes that “the fundamental idea in the concept of justice is fairness”, which makes fairness a fundamental idea on which the notion of justice is built. In Rawls’ view, the one concept constitutes a fundamental condition for the other to unfold.

Rawls’ essay goes into greater detail on the notion of justice. Having stated the “elimination of arbitrary distinctions” as fundamental to justice, he develops a conception of justice articulated around two principles: “first, each person participating in a practice, or affected by it, has an equal right to the most extensive liberty compatible with a like liberty for all; and second, inequalities are arbitrary unless it is reasonable to expect that they will work out for everyone’s advantage, and provided the positions and offices to which they attach, or from which they may be gained, are open to all.”

In my reading of Rawls’ work, both principles act as a guiding light in the making of fair ID. The first, centred on “an equal right to the most extensive liberty”, is most clearly seen in the light of its negation, and of the consequences that negation has on users. Told earlier on this blog, the story of Aisha – whose long wait for a ration card denies her access to food rations – illuminates the consequences of negating the fundamental liberty to exercise the right to food. There are two dimensions to the injustice that Aisha suffers: a relative one, viewing her case in comparison with that of users who, equally entitled, can access food rations, and an absolute one, resolved in the bare denial of the right to food to a below-poverty-line woman and her household. Both dimensions impinge on the systemic denial of a condition of impartiality, which is a fundamental trait of fairness.

The second principle, for which “inequalities are arbitrary unless it is reasonable to expect that they will work out for everyone’s advantage”, is again reflected in the inequalities reinforced by digital identity systems within existing programmes. In the case of Aadhaar-mediated access to India’s food security system, systematic inequalities are produced between Ankita, whose right to food is subordinated to the capture and usage of her Aadhaar credentials, and the households queueing together at ration shops in the hope that at least one member will be able to authenticate, and hence collect the ration. Hope that, if unfulfilled, will leave the household without the rations through which their right to food is substantiated. And hope that is denied in principle to persons whose bodies are unreadable by biometric technologies, and for whom the very idea of an authorisation-authentication nexus is broken at its basis by the impossibility of authentication.

Following Rawls, fairness is not justice. But it is a fundamental condition for it, the conditio sine qua non for the equality of rights and for the abolition of the arbitrary inequality on which injustice is predicated. How can these principles be infused into digital ID architectures?

As highlighted here, the importance of ID fairness becomes most notable when facing the consequences of its denial. The stories of Ankita, Adeela, Aisha, and the other users of India’s food security system narrated in this blog are painful illustrations of such consequences. And at the same time, stories of fair ID emerge: reading about Kenya’s civil rights organisation Haki na Sheria, which is conducting mobile birth registration to ensure citizenship rights across the country’s population, helps translate the equality envisaged by Rawls into practical acts of producing digital ID. It is on such practices of fairness, and on the possibility of embodying them in digital identity architectures, that more light needs to be cast by research.

Rethinking Design-Related Injustice

A “dark side” logic dominates the discourse on design-related injustice in digital identity systems. Here I dispute this view, noting how it is the very substance of biometric social protection, rather than a peripheral side, that is harmful to its users.

When Soumyo and I first introduced the concept of design-related injustice, in relation to the use of Aadhaar within the Public Distribution System (PDS) in Karnataka, we had in mind a more circumscribed notion than what came next. We had theorised design-related injustice as the injustice resulting from misalignment of technology with user needs. Our theorisation came from witnessing entire families queuing up in the ration shops, in the hope that at least one household member would be able to authenticate through the biometric recognition of the Aadhaar system. While combating erroneous inclusion, i.e. the provision of rations to non-entitled users, the system did nothing against erroneous exclusion, i.e. the exclusion of genuinely entitled beneficiaries, a problem that the introduction of Aadhaar in the PDS actually magnified.

The families queuing together at the ration shops, as well as the frustration of those users for whom authentication did not work, resulted in our first idea of design-related injustice. Its basis was the concept of design-reality gaps, defined with Richard Heeks as gaps between users’ reality and the world of technology designers, whose assumptions may be very different from the reality lived by users. A common design-reality gap emerges when the two worlds are markedly distant from each other: that is the case for the Aadhaar-based PDS, whose designers make the technology capable, at least on paper, of fighting erroneous inclusions. But they do not fulfil the main need of users, that of combating the exclusion errors that leave people hungry, hence generating the misalignment that our original notion of design-related injustice spoke about.

Queuing outside the ration shop. Bengaluru, Karnataka, April 2018

The concept, however, evolved significantly over time. Sasha Costanza-Chock’s amazing book “Design Justice: Community-Led Practices to Build the Worlds We Need” has marked a cornerstone in that evolution, theorising how injustice can be directly embedded in technology design. Opening with the example of airport scanners and the injustice performed through them on transgender bodies, Costanza-Chock’s book has inspired our rethinking of the injustice performed through biometric technologies on people accessing food rations. Such a rethinking moves towards a more thorough notion of design injustice: one in which misalignment with user needs, initially central for us, is but one component of a wider ensemble. An ensemble in which injustice is the substance, rather than just a “dark side”, of systems that regulate access to social protection for millions of people globally.

At least two considerations inspire this thought. First, technologies like the Aadhaar-based PDS embody a nexus – referred to as the authentication-authorisation nexus – which subordinates authorisation to essential services to the correct authentication of users. This is done to combat the leakage rates that diminish the effectiveness of large anti-poverty programmes, including India’s PDS. Biometric authentication of users is meant to ensure they are genuinely entitled: but at the same time, it does nothing to combat the harm suffered by those for whom authentication does not work. My older Karnataka fieldwork revealed that authentication failure is not only due to the misreading of bodies, but also to hidden issues – such as failed connectivity of point-of-sale machines to the central ID database – which result in the outright injustice of denying food rations to users.

Secondly, design-related injustice is reproduced across systems. Before switching to Aadhaar-based authentication, the Karnataka PDS adopted an independent system – based on weighing scales, connected in turn to biometric point-of-sale machines – that presented issues similar to those of the Aadhaar-based PDS. In the older system, machines would announce the food quantity weighed through a speaker: that speaker, however, was often muted, as we found in our work across ration shops in 2014-2015. Most importantly, that system also excluded non-recognised users from food provision: but a backup system made it possible to sell rations outside it, based on manual verification by the ration dealer. In the Aadhaar-based system, instead, while the injustice of exclusion is reproduced, no backup system is available, which may contextualise the hunger deaths among excluded users reported by The Hindu in the state of Jharkhand.
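The difference between the two designs can be rendered in a minimal sketch – with hypothetical function names, under assumptions, not code from either system – contrasting the older flow, which kept a manual fallback, with the Aadhaar-based one, which does not:

```python
# Hypothetical contrast between the two designs discussed above.

def sell_ration_pre_aadhaar(recognised: bool, dealer_verifies_manually: bool) -> str:
    """Pre-Aadhaar Karnataka flow: a paper-register fallback exists."""
    if recognised:
        return "SELL_VIA_POS"
    # Backup path: an entitled but unrecognised user can still receive
    # rations, on the ration dealer's manual verification.
    return "SELL_VIA_PAPER_REGISTER" if dealer_verifies_manually else "DENY"

def sell_ration_aadhaar_abba(authenticated: bool) -> str:
    """Aadhaar-based flow: no backup branch, failed authentication is final."""
    return "SELL_VIA_POS" if authenticated else "DENY"
```

The injustice of exclusion is present in both, but only the second design makes it irreversible at the point of sale.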

The embeddedness of injustice in the authorisation-authentication nexus, and its reproduction across different versions of biometric authentication systems, leads to a fundamental rethinking of the notion of design-related injustice that we had originally proposed. We had theorised a bare misalignment with user needs, thinking the problem lay in the gap between the designers’ world and the need of users to access the rations that sustain their livelihoods. But field stories show that injustice operates at a much deeper level, and can be seen as a form of injustice that is directly perpetrated through technology design. The evolution of design-related injustice can inspire, we hope, data justice research on digital identity beyond the case of India’s food rationing system.