Human-centered AI: Finding EQ in algorithms

Date
April 21, 2025
Hot topics 🔥
AI & Tech
Contributor
Mario Grunitz

Looking for a conclusive guide to artificial intelligence, automation and their applications? WeAreBrain’s new book, Working Machines – An Executive’s Guide to AI and Intelligent Automation, takes a comprehensive look at this incredible technology in an easy-to-understand non-techie way. Purchase it from Amazon, Google Books, Apple Books or Audible.

As perhaps the world’s fastest-growing scientific and technological discipline, artificial intelligence (AI) has already changed our daily lives in fundamental ways. Rules-based, repetitive tasks traditionally performed by humans are now executed by machines and algorithms more cheaply, more reliably and more accurately thanks to AI. So, what’s new?

The importance of context

As AI becomes the norm in society, important decisions affecting much of the world’s economy and way of life are increasingly being made by machine algorithms. The technology built in recent years now has far broader ramifications for society: election results, the spread of information, autonomous vehicles, and the diagnosis and treatment of disease. By 2024, the FDA had approved 950 AI-enabled medical devices, a sharp rise from just six in 2015 and 221 in 2023. These developments now require us to think far more carefully not only about our design principles for responsible AI, but also about its role in society and the moral and legal implications of its use.

The case for Human-centered Artificial Intelligence

According to Fei-Fei Li, Ph.D., professor of computer science and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, three technological advancements have driven the global AI explosion in relatively recent years (considering AI has been around for over five decades): hardware and computing power, algorithms, and big data. Together, they have become a tangible force for change in society. We’ve seen this change manifest in a variety of positive ways through the use of ethical AI, such as self-driving cars, medical advancements and global business efficiency. Waymo, one of the largest U.S. operators, now provides over 150,000 autonomous rides each week, demonstrating how AI is rapidly moving from the lab to daily life.

However, at this stage, society cannot fully guarantee that poorly thought-out technology will not produce negative effects, such as job displacement, bias and privacy infringement. As more businesses and governments rely on AI/ML algorithms, decisions influenced by AI will increasingly affect people’s daily lives. ML-based AI systems trained on incomplete or distorted data can develop a warped “worldview”: one that magnifies prejudice and inequality, spreads rumours and fake news, and can even cause physical harm. According to the AI Incidents Database, the number of AI-related incidents rose to 233 in 2024, a record high and a 56.4% increase over 2023.

As society continues to adopt more AI/ML services, the need for a centralised approach to ethical AI development has never been more urgent.

What is Human-centered Artificial Intelligence (HAI)?

To address these concerns, many scientific and higher learning institutions such as Stanford University, UC Berkeley and MIT have established human-centred artificial intelligence research institutes. The aim is to raise awareness of how AI is developed and deployed, establish a common ethical agenda for AI development, and use that agenda to guide the creation of intelligent machines. As a leading pioneer of the HAI methodology, Professor Li has outlined its guiding principles:

  • Development of AI must be guided by a concern for its human impact
  • AI should strive to augment and enhance humans, not replace us
  • AI must be more inspired by human intelligence

Let’s break down each of the guiding principles of HAI:

1. Development of AI must be guided by a concern for its human impact

This involves a collective interdisciplinary approach to AI development in order to understand, guide and anticipate its impact on human society. It means integrating the collaborative efforts of a wide range of human-centred disciplines into AI development: engineers, social scientists, philosophers and ethicists, historians, political scientists, cognitive neuroscientists and machine learning specialists.

According to Li, these groups must address important issues such as turning machine bias in areas such as race or gender into ‘machine fairness’ by working to improve the fairness of data sets, algorithms, computational theory, decision-making and ethics.
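To make the idea of measurable fairness concrete, here is a minimal sketch of one common audit metric, the demographic parity difference: the gap in positive-decision rates between two groups. The data and function names are hypothetical illustrations, not a standard from the article; real fairness audits use many complementary metrics.

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (1 = approved) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.
    A value of 0.0 means both groups are approved at the same rate."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical loan decisions (1 = approved) for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved

gap = demographic_parity_diff(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A gap this large would flag the system for closer inspection of its training data and decision rules, which is exactly the kind of interdisciplinary review Li calls for.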

2. AI should strive to augment and enhance humans, not replace us

The second principle of HAI is self-explanatory and should quell any lingering societal notions that AI exists to replace humans in the workplace. The defining principle of artificial intelligence is, and has always been, to augment and enhance human capabilities and propel society as a whole forward, quickly and efficiently. In 2024, 78% of organisations reported using AI, up from 55% the year before, demonstrating how businesses are embracing AI as a collaborative tool rather than a replacement.

However, this point needs to be insisted upon and actively promoted. AI is here to serve us, and it currently does so by automating repetitive tasks, freeing humans to focus on the complex and creative work that machines cannot yet match. AI is assisting humanity in far-reaching and profound ways and will never threaten to replace humans entirely.
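One common pattern for this kind of augmentation is human-in-the-loop routing: the system handles confident, routine cases automatically and escalates uncertain ones to a person. The sketch below is a hypothetical illustration of the pattern; the classifier, labels and threshold are placeholder assumptions, not a real model.

```python
CONFIDENCE_THRESHOLD = 0.90  # below this, a human makes the call

def classify(document):
    """Stand-in for a real model: returns (label, confidence).
    Hypothetical rule: long documents are 'complex' and uncertain."""
    if len(document) > 100:
        return "complex", 0.55
    return "routine", 0.97

def route(document):
    """Automate confident predictions; escalate the rest to a human."""
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)
    return ("human_review", label)

print(route("short invoice"))  # ('auto', 'routine')
print(route("x" * 200))        # ('human_review', 'complex')
```

The design choice is the point: the machine absorbs the repetitive volume, while ambiguous or high-stakes cases stay with a human, augmenting rather than replacing their judgement.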

3. AI must be more inspired by human intelligence

The primary aim of HAI is to develop intelligent machine-learning algorithms that work closely and harmoniously with and for humans. To do this successfully, AI needs a deep understanding of human intelligence so it can ‘think’ more like us. This is understandably tricky, as human intelligence is a dynamic, complex, multi-sensory highway lit with our vastly differing (and competing) contextual emotional quotients.

Today’s AI is far removed from our human complexity and only operates within prescribed rules-based parameters, resulting in static and disembodied ‘intelligence’. Li hopes HAI efforts galvanise the move to bridge the gap by urging developers to create machines inspired by complex human intelligence.

The future of Human-centered Artificial Intelligence

Proponents of HAI, including Li, are focused on raising the quality of AI/ML development by creating ways for machines to learn from humans: asking questions and eliciting answers that reveal the nuanced way we perceive the visual world. The aim of HAI is clear: to develop AI/ML systems that mimic human intelligence, decision-making and emotional intelligence, so that ethically guided machines can benefit humanity’s growth.

As we look to a more technologically-advanced world with algorithms becoming key drivers in socio-economic growth, it is indeed comforting to know researchers such as Li are calling for collaboration and ethical review in AI development. “AI is a civilisation-changing technology — not confined to any one sector, but transforming every industry it touches,” as Russell Wald, Executive Director at Stanford HAI, reminds us. The future of AI depends on our commitment to keeping it human-centred.


Mario Grunitz

Mario is a Strategy Lead and Co-founder of WeAreBrain, bringing over 20 years of rich and diverse experience in the technology sector. His passion for creating meaningful change through technology has positioned him as a thought leader and trusted advisor in the tech community, pushing the boundaries of digital innovation and shaping the future of AI.
