How Should UN Agencies Respond to AI and Big Data?

Jolene Yiqiao Kong*, former intern at UNAIDS and MA student at the Graduate Institute of International and Development Studies
Richard Burzynski*, Senior Advisor, UNAIDS
Cynthia Weber, Professor of International Relations, University of Sussex

Synopsis: The interaction of three forces – organizational missions, new technologies, and political narratives – will shape the UN system’s approach to AI.

Keywords: artificial intelligence (AI), big data, human rights, United Nations (UN)

Three forces are shaping United Nations (UN) approaches to Artificial Intelligence (AI) and big data:  the broad mission of the UN and the specific mission of each UN agency; the rapid emergence of new technologies; and the political narratives that frame AI and big data.  By analyzing how these three forces combine, align, contradict and potentially undermine themselves and one another, UN agencies can develop guidelines and strategies to determine which (if any) AI and big data technologies serve their specific missions.

What does this look like in practice?  We illustrate by considering how these three forces might shape the UNAIDS response to HIV/AIDS.

The Three Forces and the UNAIDS HIV Response

Three forces are intertwined in the UNAIDS HIV response: the UNAIDS mission to respond to HIV effectively, technological advances and their availability, and political narratives that frame AI and its uptake.

The first force is the UNAIDS target of zero new HIV infections, zero discrimination and zero AIDS-related deaths. In line with the overall UN mission, UNAIDS also aims to speak out with, and for, the people most affected by HIV in defense of human dignity, human rights, and gender equality.

The second force is the rapid emergence of new AI and big data technologies to assist HIV diagnosis, treatment and prevention.  For example, a prototype home testing device that attaches to a smartphone can detect HIV status in 15 minutes.  While this device promises at least as accurate an HIV diagnosis as currently available HIV home tests, its connection to a smartphone raises issues around informed consent, privacy, and data storage.  Does this hi-tech test actually benefit people living with HIV more than existing low-tech tests, or does it put them at greater risk than offline testing because of how their data, including their HIV status, might be used?  This is a question UNAIDS must consider as it assesses the potential use of AI and big data in its HIV response.

The third force concerns the political narratives that frame AI and big data, and their impact on the HIV response.  Three AI narratives dominate contemporary discussions: a dystopian account of AI driven by fear; an ethical account of AI driven by hope; and an entrepreneurial account of AI driven by the desire for freedom, both from state regulation and from individuals’ full and sustained ownership and control of their personal data. These three accounts compete and combine at different levels of strategic planning and policy-making in the UN, affecting how UNAIDS positions itself and how it pitches the use of AI to stakeholders and governments in its effort to end the AIDS epidemic as a public health threat.

For example, a 2018 “Artificial Intelligence for Health” workshop organized by the International Telecommunication Union (ITU) and the World Health Organization (WHO) was framed around an ethical narrative driven by the hope that AI would be made safe for the greater human good and would help international organizations, governments and civil society to achieve the Sustainable Development Goals (SDGs) and a better life for all. Yet some participants were more aligned with an entrepreneurial freedom narrative, which often privileges profiting from users’ data over user needs and protections, exposing users to greater risk.  Because UNAIDS wishes to protect people seeking HIV prevention, care and treatment services, it needs to be aware of how some AI narratives might compromise this objective.  This may require UNAIDS to expand its understanding of what it means to protect humans to include protecting data about humans.

How these three forces combine around specific UNAIDS policies, new technologies, and political narratives is different in every case.  By keeping the three forces in mind, UNAIDS staff will be better equipped to assess the benefits and risks of emerging technologies, and better empowered to uphold the UNAIDS mission in ways that preserve the dignity, security, and human rights of people living with HIV.

How Should Other UN Agencies Respond to AI and Big Data?

Analysis of how the three forces combine around specific missions, technologies, and political narratives is vital for any UN agency.  In this context, we offer three additional recommendations:

  1. The UN commitment to a human-centered and rights-based approach should guide UN policy into the 21st century. To do so, UN agencies must be aware of how AI and big data can undermine privacy and informed consent, as well as cause unfair, biased and discriminatory outcomes through opaque processes of AI-driven identification, profiling and automated decision-making.
  2. All UN agencies should debate and discuss these issues, both internally and externally, to push for new policies and regulatory measures that are guided by the overall UN mission and by the agency’s specific mission. UN agencies need to establish their own policies that ensure all decision-making within each agency remains centered on human rights and civil liberties in this new era.
  3. In a UN context of hope that often emphasizes the benefits of “AI for Good” for achieving the SDGs, UN agencies should acknowledge and address the risks that AI and big data pose to their missions. These risks follow from the often-overlooked or de-emphasized fear and freedom narratives, and they may endanger the human rights and civil liberties of the key populations each UN agency serves. They cannot be addressed by technological standardization alone.

Where does this leave UN Agencies?

AI and big data promise to revolutionize healthcare around the globe.  That revolution might mean harnessing ‘AI for Good’ and helping the UN to achieve its SDGs.  But each specific application of AI and big data also carries its own specific risks.  UN agencies need to consider the tradeoffs among: the promised benefits and potential risks of each specific new technology they seek to use or recommend; that technology’s role in the policy objectives the agency hopes to achieve; and what the agency can and should do to limit potential violations of human rights and civil liberties if it were to employ or recommend a specific AI and big data technology.  Crucially, UN agencies need to realize that risks posed by AI and big data to the key populations each agency serves could become risks to their missions and to the UN mission more broadly. Attention to how the three forces combine, align, contradict and potentially undermine themselves and one another will help UN agencies achieve these aims.

*This piece is a personal opinion of the authors and does not represent the official views or position of UNAIDS.

This post has been published in collaboration with the United Nations University’s Centre for Policy Research.

Replies to “How Should UN Agencies Respond to AI and Big Data?”

  1. The Annual Meeting of Special Procedures mandate holders of the UN Human Rights Council this year discussed, inter alia, issues related to artificial intelligence. On occasion of their exchange with civil society, Autistic Minority International expressed our concerns about a recent initiative by the World Health Organization and the International Telecommunication Union, namely a joint Focus Group on Artificial Intelligence for Health, whose autism-related activities are rooted firmly in the outdated medical model of disability and the pathologization of autism. Despite repeated requests, we have not been added to their mailing list, and there is no involvement of or oversight by civil society, even though the researchers and tech companies involved are using large amounts of patient data, from a wide range of physical and mental health conditions, including highly sensitive brain scans of autistic persons, for their work developing global benchmarks and standards for AI in health.

    We notice persistent bias and prejudice against autistic people in artificial intelligence more broadly, where more and more new technologies are aimed not at removing barriers that prevent our full and equal participation in society, but instead seek to change us and modify the behaviours of autistic children, disrespecting our autistic identity. Even something as potentially beneficial as glasses that would help autistic people recognize others’ facial expressions and emotions is now being (ab)used to train children to hold eye contact, which many of us experience as painful or distressing. AI voice-controlled assistants discriminate against autistic people who are non-verbal, use augmentative and alternative communication devices, or have unusual speech patterns. AI security cameras may interpret the way autistic people walk as “suspicious”, and autistic job applicants may be rejected by AI recruitment tools because the underlying algorithms are trained on how neurotypical people behave. Robots as “companions” for autistic children suggest to them that they are so “other” that they cannot interact with humans and, again, they are mostly used as tools for modifying autistic children’s behaviour in line with the ableist preferences of non-autistic parents and “experts”.

    As actually autistic persons, we view autism not as a disorder or disease to be prevented, cured, or eradicated, but as a lifelong neurological difference, both genetic and hereditary, that is equally valid. As autistic self-advocates, we seek to promote autism acceptance and oppose the false narratives and negative stereotypes perpetuated by organizations of misguided parents of autistic children that may perceive us as burdens, governments and charities run by non-autistic persons that frame autism as a global epidemic, and so-called autism “experts” that recommend the subjection of autistic children to behaviour modification and “normalization” or would institutionalize us altogether. The autistic minority comprises an estimated seventy million people on the autism spectrum, one percent of the world’s population.

    Actually autistic adults must be involved and consulted in research (including genetic research into the causes of autism that increasingly utilizes AI) and the design and development of AI applications. It is not enough to have input and consent from non-autistic parents, if at all. We called on the Special Procedures to monitor, assess, and mitigate the risks of human rights violations in AI for persons with disabilities and others with physical and mental health conditions, including by taking an interest in related activities undertaken at the level of the UN, and safeguard and ensure the rights of autistic persons in particular. We called for recognition of the worldwide autistic community as a minority in need of protection and the realization that all forms of compliance-based behaviour modification therapies for autism, such as Applied Behaviour Analysis (ABA), which many autistic people consider akin to torture, are equivalent to gay conversion therapy and equally unjustified and abusive. We all must overcome bias and learn to value difference. The UN, and in particular the WHO, must listen to what autistic people themselves tell them.

    Erich Kofmel, President
    Autistic Minority International
