Two New Principles for Humanising AI

AI has the power to free up human time and improve lives. Yet it can only do so if it is designed ethically and inclusively. There is a significant risk that AI will primarily extract from, rather than serve, ordinary people. Just as with the industrial and digital revolutions before it, people may become further alienated from the world around them. We want to forestall this without slowing technology's full and remarkable potential for social impact.
We propose two new principles to drive AI development toward promoting dignity and human connection, ensuring that users (students, patients, citizens) are treated with respect for their humanity. In this fast-moving field, we hope these principles will guide IDinsight's own work, and we welcome feedback on how to improve them further.
Inclusive Data, Inclusive Design
For AI to serve people effectively, it must be built on appropriate datasets (DataDelta's strength) and shaped to prioritise human dignity (the expertise of our Dignity Initiative). Many frameworks set minimum standards, which we already aim to meet. Our focus here is on how we can do more, without requiring teams to find the budget and time for an entire parallel evaluation project. We will focus on practical advice drawn from real case studies, moving beyond responsible AI toward an enhanced user experience.
To address these challenges, IDinsight has already committed to two core principles in our AI-focused work:
- Data Inclusion: Pioneered by DataDelta, this ensures AI is built on appropriate datasets that represent all users, based on fair data usage agreements.
- Design Inclusion: Incorporating user voices and design thinking approaches early in product development, led by client-facing teams.
Two New Principles
To these, the Dignity Initiative proposes to add two more:
Principle 1: Non-Displacement of Human Connection
When a citizen needs help with a service, it is valuable for them to be met by a well-trained, sympathetic human. Yet such human support is expensive and hard to scale. Where human interaction already exists in a process, we will not remove it unless users clearly express a preference for its removal, or there is a clear welfare gain to users in doing so.
We will generally support augmenting and complementing that interaction, by equipping staff members with additional AI tools, as we are doing with Community Health Workers in Ethiopia together with Last Mile Health. We can also provide an additional, non-displacing channel of information, which many users found valuable in our research with Indian farmers together with Digital Green.
Where human interaction does not exist, because of constrained funding and the inherent challenges of scale, but preferential care standards suggest that such human connection ought to exist, we will create pathways for such interactions to be added later, for example through the automatic generation of case histories that professionals can review. Only where existing human interactions are harmful (such as with abusive security forces) will we support their replacement with more dignified alternatives, including purely technological ones.
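As a minimal sketch of such a pathway (the fields, function names, and file layout here are hypothetical, not an existing IDinsight system), an automated channel could log a structured case summary for every interaction, building a review queue that human professionals can later work through:

```python
# Minimal sketch: an AI service logs a structured case history for every
# automated interaction, so a human professional can review it later.
# All names here are illustrative assumptions, not a real system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CaseHistory:
    case_id: str
    summary: str        # AI-generated summary of the interaction
    flags: list[str]    # e.g. ["needs_follow_up", "user_requested_human"]
    created_at: str

def log_for_human_review(case_id: str, summary: str, flags: list[str],
                         queue_path: str = "review_queue.jsonl") -> None:
    """Append the case to a queue that professionals can work through later."""
    record = CaseHistory(
        case_id=case_id,
        summary=summary,
        flags=flags,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(queue_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# The automated channel answers now, but leaves a pathway for a human
# caseworker to be added to the loop later.
log_for_human_review("case-0001",
                     "User asked about benefit eligibility; answered from FAQ.",
                     ["needs_follow_up"])
```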
Why Focus on Human Connection?
When people have interactions that respect their dignity, they value them, and positive individual, programmatic and social consequences follow. By directing the benefits of AI toward enhancing rather than replacing valuable human interaction, we can make meaningful connection a plentiful activity, like craftsmanship in a post-industrial world.
For example, AI could streamline administrative tasks for social workers, freeing up their time to focus on building trust and understanding with clients — to the benefit of those clients (and perhaps to the social workers too).
Rather than merely avoiding harm, we aim to promote beneficence: to reduce existing inhumane interactions (such as discriminatory behaviour by service-providing staff towards marginalised groups) while enabling meaningful human connection where it is valuable. This creates a gradual slope toward more human interaction. This matters most in public-sector and social impact contexts, where we are contributing to core social infrastructure.
Principle 2: Dignity Affirmation
For every service our AI products support, we will analyse the experience of all users, asking whether the service, and the addition of AI to it, promotes recognition, agency and equality.
Where a different group (e.g. staff) directly uses the AI product, we will also examine whether their experience of using it is dignity-affirming. We will consistently advocate for including content focused on enhancing people's experience of respect for their dignity, through evidence-based messaging and exercises of the sort being developed and tested by our Dignity Initiative.
Why Recognition, Agency and Equality?
- Recognition is about ensuring people feel seen by, and see themselves represented in, the AI products they engage with.
- Agency is about people having choices and a meaningful chance to consent.
- Equality is about people being treated as equals, and about minimising power differentials between users and providers.
We select recognition, agency and equality because they recur worldwide in people's expressions of how they want to be respected, and because experimental evidence from the US shows that affirming them increases people's experience of respect for their dignity.
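One way to make this kind of audit concrete, sketched under assumptions (the probe questions and the 1-5 scale below are illustrative, not a validated instrument), is a simple rubric that scores each service touchpoint on the three dimensions:

```python
# Illustrative dignity-affirmation rubric. The probe questions and the 1-5
# scale are assumptions for the sake of the sketch, not a validated instrument.
RUBRIC = {
    "recognition": [
        "Do users feel seen as individuals rather than as case numbers?",
        "Do users see people like themselves represented in the product?",
    ],
    "agency": [
        "Can users meaningfully consent to, or opt out of, the AI channel?",
        "Are users offered real choices at key decision points?",
    ],
    "equality": [
        "Are all user groups treated consistently by the service?",
        "Does the product narrow, rather than widen, power differentials?",
    ],
}

def score_touchpoint(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 ratings given for each dimension's questions."""
    scores = {}
    for dimension, questions in RUBRIC.items():
        values = ratings[dimension]
        assert len(values) == len(questions), f"rate every {dimension} question"
        scores[dimension] = sum(values) / len(values)
    return scores

# Example audit of one touchpoint: a low agency score would prompt a redesign.
print(score_touchpoint({
    "recognition": [4, 3],
    "agency": [2, 3],
    "equality": [4, 4],
}))  # -> {'recognition': 3.5, 'agency': 2.5, 'equality': 4.0}
```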
We acknowledge potential risks: the temptation to pair AI with less-skilled human operators, and the possibility of creating friction for users who simply prefer efficient transactions. Our goal is to balance these concerns while maintaining relationships between users and providers across distance and time. By grounding our work in these principles and in real-world applications, we aim to ensure that AI serves as a tool for fostering dignity, connection, and social impact.
Illustration: AI-Powered Tutoring
In recent decades, school enrollment has increased hugely in the Global South, but many students are not learning much: classrooms are overcrowded, teachers are under-supported, and rigid curricula are often pitched at the wrong level. Interruptions to schooling due to pandemics set learning back still further.
In-person teaching in appropriately resourced classrooms is the preferential option we should be moving towards — but given existing resources, we might consider a ladder of options.
Our second choice would be to provide AI tools that enhance the capabilities of teachers. If a gap in access to quality education still remains, we would then consider adding AI-powered remote tutoring for students.
Getting each of these interventions right requires thoughtful preparation of datasets and human-centred design, matched to the technological capabilities, communication needs, and learning needs of children in specific developing countries, at the right level of education. These are our existing principles of inclusive data and inclusive design.
Our two additional principles would suggest some extra measures:
Non-displacement would recommend providing a pipeline of support to existing teachers, such as making individual exercises adaptable for group use in the classroom, providing reports on pupils' progress, and closely monitoring teachers' sense of creativity and ownership.
Dignity affirmation might push us to design a tutoring product whose student profiles give students room for self-expression through photos and bios. It might push us to build in choices and pathways through the curriculum. And we might include effective feedback channels, with clear options to request the help of a human teacher when that help is needed.
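A hedged sketch of what such a design could look like in code (the schema, field names, and escalation method are illustrative assumptions, not a real product):

```python
# Sketch of a dignity-affirming tutoring product. Field names are
# illustrative assumptions, not a real product schema.
from dataclasses import dataclass

@dataclass
class StudentProfile:
    name: str
    photo_url: str = ""              # room for self-expression (recognition)
    bio: str = ""                    # students describe themselves in their own words
    chosen_pathway: str = "default"  # learner-selected route through the curriculum (agency)

@dataclass
class TutoringSession:
    profile: StudentProfile
    human_help_requested: bool = False

    def request_human_teacher(self, reason: str) -> str:
        """A clear, always-available escalation from the AI tutor to a person."""
        self.human_help_requested = True
        return f"Routing {self.profile.name} to a teacher: {reason}"

session = TutoringSession(StudentProfile(name="Amina",
                                         bio="I like maths puzzles",
                                         chosen_pathway="fractions-review"))
print(session.request_human_teacher("stuck on long division"))
```

The design choice worth noting is that the escalation path is a first-class part of the product rather than an afterthought: the AI tutor augments, but never walls off, access to a human teacher.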