
Being Human in the Age of AI

In partnership with the AI & Humanity Lab, the Institute of Philosophy presents a thought-provoking event where academia and industry converge to explore the intricate relationship between humanity and artificial intelligence. From insightful talks by leading scholars to engaging discussions by industry experts, this event delves into the complexities of our evolving digital landscape and its impact on what it means to be human. Discover how AI is reshaping society, ethics, and the very essence of human identity.

Woburn Suite

Senate House

WC1E 7HU

Speakers
  • Lasana Harris (Professor of Social Neuroscience, UCL)

  • Geoff Keeling (Google Research)

  • Joanna Bryson (Professor of Ethics and Technology, Hertie School)

  • Keynote, Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, Edinburgh Futures Institute)

  • Rafael Calvo (Chair in Engineering Design, Imperial College)

  • Eve Hayes de Kalaf (Research Fellow, Institute of Historical Research, School of Advanced Study)

  • Will Fleisher (Georgetown University - Center for Digital Ethics)

  • Keynote, Matt Clifford (Co-founder of Entrepreneur First and Chair of ARIA)

May 23

Welcome
09:00 - 09:30 AM

Coffee & tea.

AI Presence and Anthropomorphism: The Perception of AI as Human
09:30 - 11:00 AM

People rely on basic perceptual processes and mental state inferences when perceiving other people. But what psychological processes govern the perception of AI? This presentation describes the literature establishing brain correlates of AI presence: knowing that one is interacting with an AI. It then describes dimensions of AI perception, comparing them to dimensions of person perception and mind perception. Finally, it describes behavioural results demonstrating when people utilise AI across a variety of objective and subjective contexts. As such, it attempts to illustrate a broad model of AI presence and anthropomorphism based on humans’ current perceptions of AI.

The Ethics of Advanced AI Assistants
11:10 - 12:40 PM

The development of general-purpose foundation models such as Gemini and GPT-4 has paved the way for increasingly advanced AI assistants.

While early assistant technologies, such as Amazon’s Alexa or Apple’s Siri, used narrow AI to identify and respond to speaker commands, more advanced AI assistants demonstrate greater generality, autonomy and scope of application. They also possess novel capabilities such as summarisation, idea generation, planning, memory and tool-use – skills that will likely develop further as the underlying technology continues to improve.

Advanced AI assistants could be used for a range of productive purposes, including as creative partners, research assistants, educational tutors, digital counsellors or life planners. However, they could also have a profound effect on society, fundamentally reshaping the way people relate to AI. The development and deployment of advanced assistants therefore requires careful evaluation and foresight. In particular, we may want to ask:

  • What might a world populated by advanced AI assistants look like? 

  • How will people relate to new, more capable forms of AI that have human-like traits and with which they’re able to converse fluently?

  • How might these dynamics play out at a societal level – in a world with millions of AI assistants interacting with one another on their users’ behalf?

This talk will explore a range of ethical and societal questions that arise in the context of assistants, including value alignment and safety, anthropomorphism and human relationships with AI, and questions about collective action, equity and overall societal impact.

Lunch
12:40 - 02:00 PM

Provided for speakers at the Institute of Philosophy.

Human Centring Is Not a Choice
02:00 - 03:30 PM

Both Justice and Ethics more widely require a system (a society) of agencies that are sufficiently peer to each other that they can collectively enforce responsibilities on any member of that society. Humans have no choice but to centre our justice and morality on the actions of human agencies. What this implies (among other things) is that large, complex entities must ensure that they provide a meaningful interface between the processes that construct their products or services and the agencies that ensure their good behaviour and self-improvement. Human comprehension can never be merely a facade. Rather, honest communication sufficient to ensure accountability and improvement is an essential survival strategy. In this sense, AI regulation is now going through a process that in some sense recapitulates the evolution of language in human societies.

Humanity in the AI Mirror (Keynote)
03:45 - 05:45 PM

Artificial intelligence (AI) technologies are increasingly used to tell us who we are, what we want, what we value, what we can do, and what we will create. Yet as useful as these tools are, their predictions tell a very incomplete story of what it is to be human. These reflections of our historical data offer us infinite versions of the choices we have already made, not the new paths that we have yet to walk. Today the human family faces a stark choice between allowing our algorithmic mirrors to trap us in the unsustainable patterns of the past and using them to open still-untravelled roads to humane futures.

Dinner
06:30 PM

Provided for speakers at the Institute of Philosophy.

May 24

Welcome
09:00 - 09:30 AM

Coffee & tea.

Human Autonomy in an AI World
09:30 - 11:00 AM

Human autonomy is a pillar of contemporary ethics and politics, particularly in liberal democracies like the UK, as well as in biomedical ethics. Psychological research robustly shows that a personal sense of autonomy is essential to wellbeing and sustained motivation. Such felt autonomy also underpins user adoption, engagement, and satisfaction with digital technologies. But today, autonomy is coming under new threat from AI-driven technologies. Meanwhile, in current AI research and policy, questions of safety, fairness, or explainability have received far more attention than how AI may impact autonomy – let alone how to design AI in an autonomy-supporting fashion. In this seminar I will describe a vision of a socio-technical future where evidence-based and legitimate design and regulatory guidelines ensure that algorithmic environments safeguard and support human autonomy.

Digital Identity: Global Trends, Emerging Threats and Human Rights Agenda
11:10 - 12:40 PM

Digital identity – i.e. the ways in which the identification, authentication and authorisation of individuals are performed digitally – has become a central governance tool to facilitate access to government services, welfare, development aid and finance. The stakes are of considerable global import: by 2030, governments aim to ensure that over one billion people around the world are provided with their own unique identity. Although typically touted as the solution to overcoming social and financial exclusion, the integral role digital identity now plays in advancing the global development agenda is creating significant human rights risks. The aim of this talk is therefore to examine some of these dangers, to deepen reflection on them, and to discuss the opportunities currently emerging from the widespread implementation of identification technologies.

Lunch
12:40 - 02:00 PM

Provided for speakers at the Institute of Philosophy.

Why Care About Understanding AI?
02:00 - 03:30 PM

The most sophisticated AI tools use models that are deeply opaque. They are so large, so complex, and are trained using so much data, that not even their developers fully understand why they function the way they do. AI models are even more opaque to the general public, as much of the knowledge of how AI is developed is kept secret by its developers. And even when the models are open source, this does not aid the comprehension of those without advanced training. This opacity has raised concerns about the use of complex AI tools in a democratic society. Transparency is a requirement for democratic legitimacy. Moreover, some have argued that people have a right to an explanation for how they are treated. I think there is something fundamentally right about these concerns: we often do need to understand a tool before it is permissible to use it. However, explaining why AI opacity is a problem for protecting legitimacy and the right to explanation is more complicated than it seems. There is a great deal of opacity present in our understanding of existing, non-AI technologies. For instance, drugs are commonly prescribed even without understanding their mechanism of action. Moreover, our governments currently operate with a great deal of opacity, and in some cases this opacity does not seem problematic. If we are to ground the importance of understanding AI, we need a better explanation of what that understanding does for us, and why we should care.

Break
03:30 - 03:45 PM

Coffee & tea.

Beyond AI Safety: Maximising Human Agency in the Age of AI (Keynote)
03:45 - 05:45 PM

The world made progress in 2023 towards mitigating risks that AI might create or amplify in traditional national security threat models. I argue that this is important, but insufficient. AI safety as a field must move beyond addressing downside risks and lay out a positive, popular vision of an AI future. I suggest such a vision should have the maximisation of human agency as its centrepiece. To this end, I explore some technical, policy and governance approaches to AI that have the potential to centre human values, preferences and decision-making.

About the Speakers

Will Fleisher

Georgetown University

I am an Assistant Professor of Philosophy at Georgetown University. I am also a Research Professor in Georgetown's Center for Digital Ethics.

My areas of specialization are in the ethics of AI and in epistemology.

My research concerns the ethical, political, and epistemic implications of contemporary and near-term AI systems, particularly those developed using machine learning techniques. More specifically, I have written about algorithmic fairness and explainable AI. I also maintain a research program in the epistemology of inquiry. My work has been published in AAAI/ACM conference proceedings and in leading philosophy journals, including Noûs, Philosophical Studies, and Philosophy of Science.

Eve Hayes De Kalaf

Research Fellow, Institute of Historical Research, School of Advanced Study

Dr Eve Hayes de Kalaf was, until recently, a Research Fellow on the AHRC-funded project The Windrush Scandal in a Transnational and Commonwealth Context, based in History and Policy at the Institute of Historical Research. In 2018, Eve completed her PhD at the University of Aberdeen while teaching on courses in Sociology, Social Policy, Politics and International Relations at the School of Social and Political Science, University of Edinburgh. Recent public engagement events include leading a roundtable on digital exclusion at the World Conference on Statelessness in Kuala Lumpur (February 2024), an invitation to speak to British government officials at the Westminster Forum Projects policy conference ‘Next steps for Digital Identities in the UK’ (March 2024), and a presentation at the British International Studies Association on the panel Citizenship: A Barrier to Rights and Inclusion (June 2023). Dr Hayes de Kalaf’s contributions were also included in the Department for Science, Innovation and Technology’s findings report ‘Public dialogue on trust in digital identity services’ (February 2024). Eve is particularly interested in how states document and identify populations, and in the impact of social policy practices on questions of race, citizenship and belonging.

Shannon Vallor

Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, Edinburgh Futures Institute

Professor Shannon Vallor holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the University of Edinburgh’s Department of Philosophy. She is Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute, and co-Director of the UKRI BRAID (Bridging Responsible AI Divides) programme. Professor Vallor's research explores the ethical challenges and opportunities posed by new uses of data and AI, and how these technologies reshape human moral and intellectual character. She is a former AI Ethicist at Google, and advises numerous academic, government and industry bodies on the ethical design and use of AI. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror (Oxford University Press, 2024).

Geoff Keeling

Google Research

Geoff Keeling is a Senior Research Scientist at Google Research and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. His research covers a range of disputes in the ethics of artificial intelligence including algorithmic bias, digital manipulation, and the ethics of autonomous vehicle decision-making. Prior to Google, Geoff was a Research Fellow at Stanford University, based between the McCoy Family Center for Ethics in Society and the Institute for Human-Centered AI.

Matt Clifford

Co-founder of Entrepreneur First and Chair of ARIA

I’m co-founder and CEO of Entrepreneur First (EF), the world’s leading talent investor, and Chair of ARIA, the UK’s Advanced Research and Invention Agency. In 2023, I led the preparations and negotiations for the AI Safety Summit at Bletchley Park as the UK Prime Minister’s Representative (on sabbatical from EF). Previously, I led the design work for the UK Frontier AI Taskforce (now the UK AI Safety Institute, where I am Vice Chair of the Advisory Board). I serve on the boards of Code First Girls (which I co-founded with my EF co-founder, Alice Bentinck), the Kennedy Memorial Trust and Innovate UK. To my great surprise, I was awarded an MBE for services to business in the Queen’s Birthday Honours in 2016.

Rafael Calvo

Chair in Engineering Design, Imperial College

Rafael A. Calvo, PhD, is Professor at the Dyson School of Design Engineering, Imperial College London. He is also co-lead at the Leverhulme Centre for the Future of Intelligence (Imperial spoke). He focuses on the design of systems that support wellbeing in areas of mental health, medicine and education, and on the ethical challenges raised by new technologies.

Joanna Bryson

Professor of Ethics and Technology, Hertie School

Joanna Bryson is Professor of Ethics and Technology at the Hertie School. Her research focuses on the impact of technology on human cooperation, and on AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK’s Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, Prof. Bryson has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence.

Lasana Harris

Professor of Social Neuroscience, UCL

Prof. Harris is appointed in the Experimental Psychology Department in Psychology and Language Sciences at University College London (UCL), where he is the Vice Dean for Global Engagement in the Faculty of Brain Sciences (FBS). He is also a council member of UK Research and Innovation’s (UKRI) Economic and Social Research Council (ESRC). Prof. Harris is a social neuroscientist who takes an interdisciplinary approach to understanding human behaviour. His research explores the brain and physiological correlates of person perception, social learning, emotions, social inferences, prejudice, dehumanisation, anthropomorphism, punishment, and decision-making. His research addresses questions such as: How do we see people as less than human, and non-human objects as human beings? How do we modulate affective responses to people? How do we make social, legal, ethical, and economic decisions?
