
Introduction to AI inclusivity
Despite rapidly becoming pervasive across every aspect of modern society, Artificial Intelligence (AI) development is, surprisingly, still in its infancy. The architects of our digital world at Microsoft metaphorically refer to AI as a ‘child that society collectively raises’.
As a ‘child’, AI is then an inevitable product of its upbringing, shaped by the hands of those who create it and nurture its development. It’s no secret that those who raise us will consciously (and unconsciously) feed their perspectives, biases and values into us.
The same is true for the development of AI. The people who create it tend to unintentionally plant their perspectives into what they build. Machine Learning (ML), Deep Learning (DL), and Deep Neural Networks (DNN) are designed to mimic human decision-making abilities and basic cognitive processes. So it is no surprise that what we are creating mirrors both the good and bad elements of the human psyche.
The stakes have never been higher. The broader generative AI market reached $66.89 billion in 2025 and is expected to grow to $442.07 billion by 2031, representing one of the fastest-growing technology sectors globally. This massive expansion means that the biases and perspectives embedded in today’s AI systems will influence billions of decisions across healthcare, education, employment, criminal justice, and virtually every aspect of human experience.
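As a quick sanity check, those two figures imply a compound annual growth rate of roughly 37% per year. A few lines of Python (plain arithmetic on the figures above, no external data) confirm the maths:

```python
# Implied compound annual growth rate from the market estimates above.
start, end = 66.89, 442.07        # market size in $ billions (2025 and 2031)
years = 2031 - 2025

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # roughly 36.99% per year
```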
Thus, we must nurture and guide AI development towards a technology that embodies the best possible attributes of humanity: one that is kind, considerate, and above all, inclusive.
How can we achieve this? It’s complex, but not impossible. Before we dive into the how, let’s explore why we need inclusive AI development to shape an equitable future.
Identifying bias
Human beings are flawed and predisposed to a range of biases, both conscious and unconscious. One example often seen in a business context is similarity bias: the tendency to favour people who display the same or similar qualities as we do.
Because current AI is logic-based, it was initially assumed that AI solutions would be devoid of biases. But our technology is a reflection of those who create it, and therein lies the issue of unwittingly embedding bias into a logic-based system.
For example, engineers at Amazon trained an experimental recruitment tool on resumes submitted mostly by male programmers, so when it came time to screen candidates, the system penalised any resume containing signals that the applicant was a woman, such as the word ‘women’s’.
There have been other examples where human bias has infiltrated AI programs, producing sexist, ageist or racist outputs from a system’s recognition or predictive capabilities. Companies developing this technology therefore have a distinct responsibility to ensure that the training data being fed to the AI doesn’t contain hidden bias, and to continually audit and correct the training materials when complications arise.
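To make that audit concrete, here is a minimal sketch of one common screening check: compute the selection rate per demographic group and the disparate impact ratio between them, which the well-known ‘four-fifths rule’ heuristic flags when it drops below 0.8. The outcome data is entirely hypothetical and only illustrates the shape of the check:

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, was_shortlisted)
outcomes = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

shortlisted = defaultdict(int)
total = defaultdict(int)
for group, passed in outcomes:
    total[group] += 1
    shortlisted[group] += passed

rates = {g: shortlisted[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

print(rates)                               # selection rate per group
print(f"Disparate impact ratio: {ratio:.2f}")
# The 'four-fifths rule' flags ratios below 0.8 as potential adverse impact.
if ratio < 0.8:
    print("Warning: potential adverse impact; review training data and model.")
```

In practice a check like this would run over real screening logs and feed into the continual auditing described above.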
Current research indicates that 54% of people can distinguish between AI-generated and human-made content, suggesting that whilst AI sophistication increases, detectable differences remain.
Gender representation in tech
Despite the industry’s rapid progress, there is an element that seems to be lagging behind: gender diversity. Female representation in the global tech industry is alarmingly low: as of 2020, only about 25% of Big Tech’s (Google, Apple, Facebook, Amazon, Microsoft) employees were female. For an industry praised for its innovative progress, there is still work to be done to reach gender parity.
The implications extend far beyond representation statistics. When AI systems are developed primarily by homogeneous teams, they inevitably reflect limited perspectives and experiences. This homogeneity can result in AI systems that work exceptionally well for some demographic groups whilst performing poorly or even harmfully for others.
Thankfully, there are ways to combat this. We can encourage young girls at a grassroots level to take up studies in engineering and coding. We can do this in a few ways: we can dispel the stigma that STEM fields are primarily for males, we can eradicate the tech industry’s “bro culture”, and we can inspire young girls to enter the industry by promoting successful female role models in tech on the same level as we do for the likes of Steve Jobs, Elon Musk and Jeff Bezos.
We need a gender-diverse tech industry that celebrates its prominent and successful women, inspiring and motivating future generations of young women to pursue a career in tech.
The ethical dilemma behind AI development
We believe it is important to keep everyone informed on the latest AI regulations on a global scale, so we created a site that follows the most recent news and information regarding AI regulation in the EU, USA, and China. Visit the site here.
As we are still discovering the true capabilities and real-world consequences of AI’s widespread adoption, the need to develop a set of guiding principles and governance for AI development is crucial.
Generally, private companies are leading the development of AI technology, and arguably prioritise profits over the implications this technology may have on people’s rights. As such, concerns have arisen not only over how this technology will be used, but also over who makes the decision to use it.
Concerns over how big data is collected and used, and issues surrounding transparency within AI development and rollout, have gained the attention of governments. The EU, USA, and China have already begun to implement their own versions of AI regulation with the aim of ensuring that AI and similar technologies benefit society rather than infringe on human rights. Basic AI regulations are divided into three topics: governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.
Globally, there is a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations. Insufficient attention has been given to the tensions between ideals and values. Additionally, there is not enough understanding of both the technological capabilities and their potential impacts, as well as the perspective of the general public on the development of these solutions.
To create truly inclusive AI, the core ethical elements need to be addressed and a consensus needs to be reached between all governance structures.
Algorithm inequality
Despite the vast benefits technological disruptions provide, they also carry a worrying cost for tomorrow’s global workforce: algorithm-centric business approaches risk exacerbating economic inequality.
AI-powered organisations can develop a ‘code ceiling’ that prevents upward mobility and career advancement, because much of the incoming workforce — especially those in positions requiring manual labour — rarely interacts with human coworkers and is instead managed by algorithms.
For example, Amazon has used AI to track hundreds of fulfilment centre employees and fire them for failing to meet productivity quotas. According to the report, “Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors.”
Daily tasks being managed by an algorithmic feedback loop creates systemic inequality through the hierarchy of information flow. In these cases, company leaders share only the information that helps employees perform their duties, with no requirement to inform workers of holistic business approaches and strategies. This limits workers’ ability to gain upward mobility within an organisation, as they are only fed information pertaining to their specific tasks.
A future gig economy run on algorithm-driven job leads managed by smart devices could create a global workforce tied into performing specific tasks at a specific wage. The algorithms would be controlled by a small group of employers, leaving little to no room for upward mobility or economic equality.
But machines will likely take over repetitive jobs, giving us an exciting opportunity to leverage our creativity, critical thinking, and problem-solving skills to create a future society that rewards human-centricity.
How to ensure an equitable future for AI
As more businesses and governments rely on AI/ML algorithms, decisions influenced by AI ‘thinking’ and worldviews will increasingly affect people’s daily lives. This opens the door for Machine Learning-based AI systems trained on incomplete or distorted data to develop a warped ‘worldview’.
This can magnify prejudice and inequality, spread rumours and fake news, and even cause physical harm. As society adopts more AI/ML services, the need for a centralised approach to ethical AI development has never been more urgent.
To address these concerns, many scientific and higher learning institutions, such as Stanford University, UC Berkeley and MIT, have established Human-centred Artificial Intelligence (HAI) research institutes. The aim is to raise awareness around the development and deployment of AI, bring a common ethical agenda into AI development, and use it to guide the evolution of intelligent machines.
As a leading pioneer of the HAI methodology, Professor Fei-Fei Li from Stanford University has outlined the guiding principles of HAI:
1. AI development must be guided by a concern for its human impact
To guide AI’s impact on human society, collaborative efforts from human-centred disciplines must address critical issues like transforming machine bias in areas such as race, gender, age, and socioeconomic status into ‘machine fairness’. This can be achieved by improving fairness in datasets, algorithms, computational theories, decision-making processes, and ethical frameworks that govern AI development and deployment.
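One established dataset-level technique is ‘reweighing’ (Kamiran & Calders), which assigns each training example a weight so that group membership and outcome label look statistically independent to the learner. The sketch below uses hypothetical counts purely to show the mechanics:

```python
from collections import Counter

# Hypothetical training records: (group, label); the imbalance is illustrative.
records = [("a", 1)] * 70 + [("a", 0)] * 30 + [("b", 1)] * 20 + [("b", 0)] * 80

n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
pair_counts = Counter(records)

# Reweighing: weight = P(group) * P(label) / P(group, label), so that
# over-represented (group, label) pairs are down-weighted and
# under-represented ones up-weighted during training.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
for (g, y), w in sorted(weights.items()):
    print(f"group={g} label={y} weight={w:.2f}")
```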
2. AI should strive to augment and enhance humans, not replace us
AI is designed to augment and enhance human capabilities to propel society forward efficiently and equitably, not to replace human judgment, creativity, and empathy. The most successful AI implementations combine artificial intelligence capabilities with human insight, creating systems that are more powerful and more ethical than either could achieve independently.
3. AI must be more inspired by human intelligence
Human-centred Artificial Intelligence focuses on developing algorithms that work harmoniously with and for humans across diverse communities and cultures. To achieve this, AI needs to deeply understand human intelligence, emotion, and social dynamics to ‘think’ in ways that complement rather than compete with human capabilities.
We need AI that recognises and respects the nuanced ways different communities perceive the world, make decisions, and organise their societies. The goal is to create ways for machines to learn from humans by asking thoughtful questions and eliciting responses that reveal diverse perspectives and experiences.
To accomplish this, we must develop AI/ML systems that incorporate human intelligence, decision-making processes, and the full spectrum of emotional intelligence, driving humanity’s growth with the aid of ethical machines that serve everyone’s interests equitably.
Building inclusive AI development practices
Diverse development teams
Creating truly inclusive AI requires development teams that reflect the diversity of communities these systems will serve. This means actively recruiting and supporting developers, data scientists, and AI researchers from different backgrounds, cultures, genders, ages, and lived experiences.
Community-centered design
Inclusive AI development involves communities from the earliest stages of system design through deployment and ongoing monitoring. This participatory approach ensures AI systems are designed to serve real community needs rather than reflecting assumptions about what communities need.
Continuous monitoring and improvement
Inclusive AI isn’t a one-time achievement but an ongoing commitment to monitoring system performance across different demographic groups and continuously improving fairness, accuracy, and accessibility.
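As a minimal sketch of what that monitoring could look like, the snippet below computes accuracy per demographic slice from a batch of logged predictions and flags the gap between the best- and worst-served groups. The log, group names, and 10% threshold are all hypothetical:

```python
from collections import defaultdict

# Hypothetical evaluation log: (group, prediction, true_label)
eval_log = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

correct = defaultdict(int)
seen = defaultdict(int)
for group, pred, label in eval_log:
    seen[group] += 1
    correct[group] += (pred == label)

accuracy = {g: correct[g] / seen[g] for g in seen}
gap = max(accuracy.values()) - min(accuracy.values())

print(accuracy)
# An illustrative threshold; real systems would tune this per application.
if gap > 0.10:
    print(f"Accuracy gap of {gap:.0%} across groups; investigate before it widens.")
```

Run regularly over fresh evaluation data, a check like this turns fairness from a launch-day claim into an ongoing measurement.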
Summary
To steer society toward an equitable future powered by AI technologies, it is imperative that we focus today on inclusive development practices. We can raise AI to reflect the best attributes of human nature by being conscious of all forms of bias embedded in existing training systems and by ensuring that future systems are grounded in inclusivity.
With the generative AI market expected to show an annual growth rate of 36.99%, reaching $442.07 billion by 2031, the decisions we make about AI development today will shape the experiences of billions of people for decades to come.
It is up to those who create AI to shape it into the best possible version of humanity for the benefit of our future society. The first step is acknowledging the flaws we unconsciously feed into AI systems; only then can we work towards a consistent bedrock of ethical and inclusive values to shape future AI development.