Introduction to AI inclusivity
Despite rapidly becoming pervasive across every aspect of modern society, Artificial Intelligence (AI) development is surprisingly still in its infancy. The architects of our digital world at Microsoft metaphorically refer to AI as a ‘child that society collectively raises’.
As a ‘child’, AI is then an inevitable product of its upbringing, shaped by the hands of those who create it and nurture its development. It’s no secret that those who raise us will consciously (and unconsciously) feed their perspectives, biases and values into us.
The same is true for the development of AI. The people who create it tend to unintentionally plant their perspectives into what they build. Machine Learning (ML), Deep Learning (DL), and Deep Neural Networks (DNN) are designed to mimic human decision-making abilities and basic cognitive processes. So, it is no surprise that what we are creating shadows both the good and bad elements of the human psyche.
Thus, we must nurture and guide AI development to become a technology that resembles a society based on the best possible attributes of humanity: one that is kind, considerate, and above all, inclusive.
How can we achieve this? Well, it’s a bit complex but not impossible. But before we dive into the how, let’s explore why we need inclusive AI development to shape an equitable future.
Human beings are flawed, and we are predisposed to a range of biases, both conscious and unconscious. Because current AI is logic-based, it was initially assumed that AI solutions would be devoid of biases. But our technology is a reflection of those who create it, and therein lies the issue of unwittingly embedding bias into a logic-based system.
For example, engineers at Amazon trained their experimental recruitment software largely on resumes from male programmers, so when it came time to screen candidates, the system penalised resumes that contained female-associated terms.
There have been other examples where human bias has infiltrated AI programs and resulted in sexist, ageist or racist outcomes in a system’s recognition or predictive capabilities. Companies developing this technology therefore have a distinct responsibility to ensure that the training data being fed to the AI doesn’t contain hidden bias, and to consistently audit and amend that training material to repair problems as they arise.
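Auditing training data before a model ever sees it is one concrete way to catch hidden bias. The sketch below is a minimal, hypothetical example (the field names and data are invented for illustration): it computes the positive-label rate per demographic group in a labelled dataset, since a large gap between groups is a warning sign that a model trained on the data will reproduce that skew.

```python
from collections import Counter

def audit_label_balance(records, group_key, label_key):
    """Return the positive-label rate per demographic group."""
    totals, positives = Counter(), Counter()
    for rec in records:
        group = rec[group_key]
        totals[group] += 1
        if rec[label_key]:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical resume-screening training data (invented for illustration)
data = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

rates = audit_label_balance(data, "gender", "hired")
# A large gap between groups is a red flag before any model is trained.
```

A real audit would look at many more attributes and at proxy variables (postcodes, hobbies, word choice) that can encode a protected attribute indirectly, but the principle is the same: measure the skew first, then decide how to correct it.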
Gender representation in tech
Despite the industry’s rapid progress, there is an element that seems to be lagging behind: gender diversity. Female representation in the global tech industry is alarmingly low: as of 2020, only about 25% of Big Tech’s (Google, Apple, Facebook, Amazon, Microsoft) employees were female. For an industry praised for its innovative progress, there is still work to be done to reach gender parity.
Thankfully, there are ways to combat this. We can encourage young girls at a grassroots level to take up studies in engineering and coding. We can do this in a few ways: we can dispel the stigma that STEM fields are primarily for males, we can eradicate the tech industry’s “bro culture”, and we can inspire young girls to enter the industry by promoting successful female role models in tech on the same level as we do for the likes of Steve Jobs, Elon Musk and Jeff Bezos.
We need to create a gender-diverse tech industry that promotes prominent and successful women in the industry to inspire and motivate future generations of young women to pursue a career in tech.
The ethical dilemma behind AI development
We believe it is important to keep everyone informed about the latest AI regulations on a global scale. So we created a site that follows the most recent news and information regarding AI regulation in the EU, USA, and China. Visit the site here.
As we are still discovering the true capabilities and real-world consequences of AI’s widespread adoption, the need to develop a set of guiding principles and governance for AI development is crucial.
Private companies are generally leading the development of AI technology, and arguably prioritise profits over the implications this technology may have on people’s rights. As such, concerns have arisen not only over how this technology will be used, but also over who makes the decision to use it.
Concerns over how big data is collected and used, and issues surrounding transparency within AI development and rollout, have gained the attention of governments. The EU, USA, and China have already begun to implement their own versions of AI regulation with the aim of ensuring that AI and similar technologies benefit society rather than impede human rights. Basic AI regulations are divided into three topics: governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.
Globally, there is a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations. Insufficient attention has been given to the tensions between ideals and values. Additionally, there is not enough understanding of both the technological capabilities and their potential impacts, as well as the perspective of the general public on the development of these solutions.
To create truly inclusive AI, the core ethical elements need to be addressed and a consensus needs to be reached between all governance structures.
Despite the vast benefits technological disruptions provide, they also come at a worrying cost to the global workforce of tomorrow: algorithm-centric business approaches risk exacerbating economic inequality.
AI-powered organisations risk creating a ‘code ceiling’ that prevents upward mobility and career advancement: much of the incoming workforce, especially those in positions requiring manual labour, rarely interacts with human coworkers and is instead managed by algorithms.
For example, Amazon has used AI to track hundreds of fulfilment centre employees and fire them for failing to meet productivity quotas. “Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors” according to the report.
Daily tasks being managed by an algorithmic feedback loop creates systemic inequality due to the hierarchy of information flow. In these cases, company leaders only share information with employees that helps them perform their duties, and there is no requirement to inform workers of holistic business approaches and strategies. This reduces workers’ ability to gain upward mobility within an organisation, as they are only fed information pertaining to their specific tasks.
A future gig economy that is run on algorithm-driven job leads managed by smart devices could create a global workforce of employees tied into performing specific tasks at a specific wage. The algorithms will be controlled by a small group of employers which leaves little to no room for upward mobility and economic equality.
But machines will likely take over repetitive jobs, giving us an exciting opportunity to leverage our creativity, critical thinking, and problem-solving skills to create a future society that rewards human-centricity.
How to ensure an equitable future for AI
As more businesses and governments rely on AI/ML algorithms, decisions influenced by these systems will increasingly affect people’s daily lives. This opens the door to Machine Learning-based AI systems, trained on incomplete or distorted data, developing a warped ‘worldview’.
This can magnify prejudice and inequality, spread rumours and fake news, and even cause physical harm. As society continues to adopt more AI/ML services, the urgent need for a centralised approach to ethical AI development has never been more imperative.
To address these concerns, many scientific and higher learning institutions such as Stanford University, UC Berkeley and MIT have established Human-centred Artificial Intelligence (HAI) research institutes. The aim is to bring awareness to the development and deployment of AI, bring a common ethical agenda into AI development and use it to guide the evolution of intelligent machines.
As a leading pioneer of the HAI methodology, Professor Li from Stanford University has outlined the guiding principles of HAI:
1. AI development must be guided by a concern for its human impact
To guide AI’s impact on human society, a collaborative effort of human-centred disciplines must address issues like turning machine bias in areas such as race or gender into ‘machine fairness’. This can be achieved by improving fairness in data sets, algorithms, computer theories and decision-making, and ethics.
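One common way researchers make ‘machine fairness’ measurable is the demographic parity gap: the difference in a model’s positive-prediction rates between groups. The sketch below is a minimal illustration under invented data, not a full fairness audit; the function name and inputs are hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups. A gap near 0 means the model selects members
    of each group at a similar rate."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += p
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values())

# Invented model outputs: both groups receive positive predictions
# at the same rate (2 out of 3), so the gap is zero.
preds  = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # → 0.0
```

Demographic parity is only one of several competing fairness definitions (others compare error rates rather than selection rates), which is part of why the consensus on ethical concepts discussed above matters: the metric a team optimises encodes a value judgment.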
2. AI should strive to augment and enhance humans, not replace us
AI is designed to augment and enhance human capabilities to propel society forward quickly and efficiently – not replace us.
3. AI must be more inspired by human intelligence
Human-centred Artificial Intelligence is about developing algorithms that work harmoniously with and for humans. To achieve this, AI needs to deeply understand human intelligence in order to ‘think’ similarly to us. We need to create AI whose way of perceiving the world is recognisable to our own.
The goal is to create ways in which machines can learn from humans by asking questions and eliciting answers which reveal the nuanced way in which we perceive the visual world.
To do this, we need to develop AI/ML systems that mimic human intelligence, decision-making and emotional intelligence, in order to drive humanity’s growth aided by ethical machines.
In order to steer society toward an equitable future powered by AI technologies, it is imperative we focus today on inclusive development practices. We can raise AI to reflect the best attributes of human nature by being conscious of all forms of bias embedded into existing training systems, and ensuring that future systems are grounded in inclusivity.
It is up to those who create AI to shape it into the best possible version of humanity to benefit our future society. The first step is acknowledging our flaws that are unconsciously fed into AI systems, then we can work towards ensuring a consistent bedrock of ethical and inclusive values that will shape future AI development.