AI for All: Reducing Bias, Building Trust
The “reflecting canopy” in Marseille Vieux Port – L’Ombrière – mirrors the diversity of people who share this unique public space in the heart of the city.
Artificial intelligence is reshaping nearly every part of our lives — from how companies hire and doctors detect health risks to how cities manage energy and students learn, explore, and collaborate.
Yet concerns about racial, disability, and gender bias in AI and machine learning systems continue to grow, along with their wider social impacts. In the race to harness AI’s potential, many organisations face a common challenge: data quality. When AI systems are trained on biased or incomplete data, they risk reinforcing — and even amplifying — existing inequalities related not only to gender and race, but also to class, age, and disability.
In an era of rapid technological change, building trustworthy, transparent and fair AI has never been more critical. Black-box systems that make decisions without explanation risk not only eroding public trust but also deepening social divides.
To create technology that truly serves humanity, we must design systems that are not only intelligent, but also ethical and capable of evolving alongside our societies and natural environments. This reflects the principle of co-evolving mutualism — a living partnership between technology and the people it serves, continuously adapting through shared learning and feedback.
For instance, the AI Now Institute's Artificial Power: 2025 Landscape Report outlines actionable strategies for the public to reclaim agency over the future of AI. The report highlights that AI isn't just being used by us but also on us, and offers concrete steps for communities, policymakers, and the public to actively influence and redirect AI development.
This is a topic I will be exploring with a panel of experts on November 12 during the AI track at IBM Z Day 2025.
Disaggregated data in urban planning matters because the act of counting people shows that they count.
The Importance of Disaggregated Data
A study from the Berkeley Haas Center for Equity, Gender and Leadership, which examined 133 AI systems across multiple industries, found that 44% exhibited gender bias and 25% reflected both gender and racial bias. These findings highlight the urgent need for ethical AI development practices that ensure technology benefits everyone equally.
Take urban planning, for example. When data isn’t disaggregated by sex, gender, disability, or other identity factors, it offers only a partial view of reality – overlooking key differences in how people experience cities.
Disaggregated data matters because the act of counting people shows that they count. Without it, planners and policymakers operate in ambiguity — designing policies and allocating resources for safety, transportation and public spaces without fully understanding the diverse needs, perceptions and experiences of different demographic groups, particularly those of women and girls.
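To make this concrete, here is a minimal sketch (the survey fields and figures are hypothetical) of how disaggregating the same dataset can surface differences that an aggregate average hides:

```python
import pandas as pd

# Hypothetical transit-safety survey: one row per respondent.
survey = pd.DataFrame({
    "gender":              ["woman", "man", "woman", "man", "woman", "man"],
    "has_disability":      [True, False, False, False, True, False],
    "feels_safe_at_night": [0, 1, 0, 1, 0, 1],  # 1 = yes, 0 = no
})

# Aggregate view: 50% feel safe, which can look acceptable on paper.
print(survey["feels_safe_at_night"].mean())

# Disaggregated by gender: the average was masking a stark gap.
print(survey.groupby("gender")["feels_safe_at_night"].mean())

# Disaggregated further: disability and gender can compound each other.
print(survey.groupby(["gender", "has_disability"])["feels_safe_at_night"].mean())
```

The numbers are invented, but the pattern is real: each level of disaggregation reveals needs that the headline figure conceals.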
By treating data as a living ecosystem — diverse, evolving, and context-aware — we can begin to build AI that reflects the full spectrum of human experience.
UNESCO developed the Readiness Assessment Methodology (RAM) — a tool that helps governments evaluate how prepared they are to govern and implement AI in an ethical and responsible way.
The Path to Ethical and Inclusive AI
In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence – the first global standard-setting framework addressing the ethical use of AI. Endorsed unanimously by all 193 UNESCO Member States, the Recommendation warns of the risks of AI embedding or amplifying bias, discrimination and inequality. It calls for principles such as transparency, explainability, human rights protections, gender equality and environmental sustainability to guide responsible AI development.
At the same time, technology leaders are translating these ethical principles into action. With scalable and transparent tools – such as those available on IBM Z and LinuxONE – organisations can gain deeper insights into how and why AI models make certain predictions. This interpretability fosters accountability, fairness, and trust, laying the foundation for responsible innovation.
Ultimately, building ethical and inclusive AI requires collaboration between global policymakers, technologists and communities. It’s about developing technology that mirrors the diversity of real life – moving beyond technical efficiency to embrace social responsibility and human-centred design.
How We Can Reduce Bias in AI
Creating fair and inclusive AI starts with intentional design – from the data we collect to the people who build the models.
1. Diversify Data and Development Teams
Bias often begins at the data level. Ensuring datasets include a wide range of demographic and cultural perspectives — and involving diverse teams in model design — can drastically reduce blind spots. Inclusive data leads to inclusive decisions.
Take IBM’s AI Fairness 360 toolkit, for example. Developed with diverse datasets and multidisciplinary teams of engineers, ethicists, and social scientists, the toolkit helps detect and mitigate bias in AI models. It shows how inclusive data, combined with diverse human perspectives, can create AI systems that are fairer and more equitable across demographic groups.
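As a minimal sketch of that workflow using the open-source aif360 package (the tiny hiring dataset below is invented for illustration), one can measure disparate impact across a protected attribute and rebalance the training data with the toolkit's Reweighing algorithm:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical hiring data; 'sex' is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
data = BinaryLabelDataset(df=df, label_names=["hired"],
                          protected_attribute_names=["sex"])

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Disparate impact: ratio of favourable-outcome rates between groups (1.0 = parity).
before = BinaryLabelDatasetMetric(data, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Disparate impact before:", before.disparate_impact())

# Reweighing assigns instance weights that balance outcomes across groups
# before any model is trained on the data.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(data)
after = BinaryLabelDatasetMetric(reweighed, privileged_groups=privileged,
                                 unprivileged_groups=unprivileged)
print("Disparate impact after: ", after.disparate_impact())
```

Reweighing is only one of many algorithms in the toolkit; which mitigation is appropriate depends on the domain and on which definition of fairness matters for the decision at hand.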
2. Implement Continuous Bias Audits
AI systems should never be “set and forget.” Ongoing monitoring, testing, and auditing are vital to identify and address emerging biases over time. Regular audits help models evolve alongside society, remaining fair and relevant as cultural contexts, languages, and social norms shift.
Ethical AI is not a one-time achievement – it’s a continuous process of learning and refinement that keeps systems fair, reliable, and accountable.
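What such a recurring audit might look like in practice is sketched below; the parity threshold, field names, and alerting behaviour are illustrative assumptions rather than a standard:

```python
import pandas as pd

# Illustrative tolerance: flag the model when favourable-outcome rates
# differ between groups by more than 10 percentage points.
PARITY_THRESHOLD = 0.10

def audit_batch(batch: pd.DataFrame) -> None:
    """Check one batch of logged decisions for demographic parity."""
    rates = batch.groupby("group")["approved"].mean()
    gap = rates.max() - rates.min()
    if gap > PARITY_THRESHOLD:
        # In production this would raise an alert or open a review ticket.
        print(f"ALERT: parity gap {gap:.2f} exceeds {PARITY_THRESHOLD:.2f}")
    else:
        print(f"OK: parity gap {gap:.2f} within tolerance")

# Hypothetical batch of recent model decisions.
audit_batch(pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
}))
```

Run on every new batch of decisions, a check like this turns fairness from a one-off launch requirement into an operational metric, watched the same way as accuracy or latency.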
Trust in AI grows through transparency, accountability, and community involvement in its development and deployment.
How We Can Build Trust in AI
Reducing bias is one part of the equation. The other is building trust – ensuring that AI systems are transparent, accountable, and shaped by the communities they affect.
1. Transparency and Accountability
Organisations must clearly communicate how AI systems are trained, what data they use, and how decisions are made. Explainability tools enable these systems to provide clear, human-understandable explanations for their decisions and actions. Open reporting builds public confidence and encourages responsible oversight.
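One widely used open-source option is the shap library; the sketch below (with an invented credit-risk model and feature names) shows how per-decision attributions can reveal which inputs drove a prediction:

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical credit-risk model trained on three features.
X = pd.DataFrame({
    "income":        [30, 60, 45, 80, 25, 70],
    "years_at_job":  [1, 5, 3, 10, 0, 8],
    "existing_debt": [20, 5, 10, 2, 25, 4],
})
y = [0.8, 0.2, 0.4, 0.1, 0.9, 0.2]  # invented risk scores
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the individual features,
# turning the model's output into a human-readable explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contributions to the first applicant's risk score.
print(dict(zip(X.columns, shap_values[0])))
```

Surfacing attributions like these to affected users, alongside plain-language documentation of the training data, is what turns explainability from an internal debugging aid into genuine public accountability.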
2. External Ethics and Oversight Committees
Independent ethics boards or review committees help ensure AI systems meet fairness and compliance standards. External oversight provides an essential safeguard against internal blind spots or commercial pressures that might compromise ethical integrity.
3. Community Engagement and Feedback Loops
Trust grows when the communities most affected by AI are included in its design and evaluation. Creating spaces for dialogue – where users can question, challenge, and influence AI systems – ensures that technology serves the many, not the few.
Like all technologies before it, AI reflects the values of its creators.
Beyond Inclusion: Toward Co-Evolving Mutualism
Traditional inclusion methods often focus on adding underrepresented voices to existing systems. While necessary, this approach can still preserve existing power dynamics instead of transforming them.
Co-evolving mutualism, by contrast, offers a more transformative approach. It treats AI development as an evolving partnership between technology and the communities it serves. Rather than designing AI for people, we design it with people – continuously adapting algorithms, data, and processes to reflect shifting needs, contexts and lived realities.
This approach moves ethical AI beyond static inclusion toward dynamic collaboration, where both technology and society learn and evolve together. It ensures AI systems are not only fair in design but resilient, responsive, and capable of addressing the intersecting realities of gender, race, class, and accessibility.
AI holds immense potential to reflect and influence a more equitable world – if we build it responsibly. By embedding fairness, explainability, and accountability into every stage of development, we can ensure AI truly serves everyone.
Reducing bias isn’t just about better algorithms; it’s about shared responsibility. And as AI continues to evolve, our collective commitment to ethics will determine whether this technology deepens divisions or brings us closer together.
When we design AI systems that learn from the world and give back to it – balancing innovation with empathy, intelligence with co-evolution – we create technology that not only serves humanity but helps it thrive.
Join the Conversation!
How have you seen bias show up in AI — in hiring, education, healthcare, or public services? What changes would make these systems fairer and more transparent?
To hear more about reducing bias and building trust in AI, join the AI Track at IBM Z Day 2025.
My guests:
Zinnya del Villar, Director of Data, Technology, and Innovation, Data-Pop Alliance
Jayesh Nair, AI Product Manager, IBM