Looking for a conclusive guide to artificial intelligence, automation and their applications? WeAreBrain’s new book, Working Machines – An Executive’s Guide to AI and Intelligent Automation, takes a comprehensive look at this incredible technology in an easy-to-understand non-techie way. Purchase it from Amazon, Google Books, Apple Books or Audible.
As perhaps the world’s fastest-growing scientific and technological discipline, artificial intelligence (AI) has already changed our daily lives in fundamental ways. Rules-based, repetitive tasks traditionally performed by humans are now executed by machines and algorithms more cheaply, reliably and accurately thanks to AI. So, what’s new?
As AI becomes the norm in society, important decisions impacting much of the world’s economy and way of life are increasingly being made by machine algorithms. We are seeing that technology built only a few years ago now has far broader ramifications for our society: election results, the spread of information, autonomous vehicles and weapons, and the diagnosis and treatment of disease. These developments require us to think far more carefully not only about our design principles for responsible AI, but also about its role in society and the moral and legal implications of its use.
According to Fei-Fei Li, Ph.D., professor of computer science and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, three technological advancements have driven the global AI explosion of recent years (remarkable, considering AI has been around for over five decades): hardware and computing, algorithms, and big data. Together, they have become a tangible force for change in society. We’ve seen this change manifest in a variety of positive ways through the use of ethical AI, such as self-driving cars, medical advancements and global business efficiency. However, at this stage, society cannot fully guard against the negative effects of poorly thought-out technology, which could include job displacement, bias, privacy infringement and more. Why?
As more businesses and governments rely on AI/ML algorithms, decisions shaped by machine “thinking” and worldviews will increasingly affect people’s daily lives. This opens the door to machine learning (ML)-based AI systems that, when trained on incomplete or distorted data, develop a warped “worldview”. Such systems can magnify prejudice and inequality, spread rumours and fake news, and even cause physical harm. As society adopts more AI/ML services, the need for a centralised approach to ethical AI development has never been more urgent.
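To make the mechanism concrete, here is a minimal, purely illustrative sketch (all data is synthetic and the scenario is hypothetical, not drawn from any real system): a naive model trained on historically skewed hiring records simply reproduces the skew it was shown.

```python
# Illustrative sketch only: a model trained on distorted historical data
# inherits the distortion. The groups, records and "model" are all
# hypothetical and synthetic.
from collections import defaultdict

# Synthetic "historical hiring" records: (group, qualified, hired).
# Group B candidates were hired less often even when qualified --
# a bias baked into the data, not a fact about ability.
training_data = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# A naive "model": predict the majority hiring outcome seen per group.
outcomes = defaultdict(list)
for group, qualified, hired in training_data:
    if qualified:
        outcomes[group].append(hired)

def predict_hire(group):
    history = outcomes[group]
    return sum(history) > len(history) / 2

# Two equally qualified candidates get different predictions, purely
# because of the skew in the training data.
print(predict_hire("A"))  # True
print(predict_hire("B"))  # False
```

Nothing in the code is malicious; the warped “worldview” comes entirely from the data, which is exactly why curating data sets is central to ethical AI development.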
To address these concerns, many scientific and higher-learning institutions, such as Stanford University, UC Berkeley and MIT, have established human-centred artificial intelligence (HAI) research institutes. The aim is to bring awareness to the development and deployment of AI, bring a common ethical agenda into AI development and use it to guide the building of intelligent machines. As a leading pioneer of the HAI methodology, Professor Li has outlined its guiding principles: AI development should be guided by its impact on human society; AI should augment and enhance humans, not replace them; and AI should be inspired by human intelligence.

Let’s break down each of these guiding principles:
The first principle involves a collective, interdisciplinary approach to AI development in order to understand, guide and anticipate its impact on human society. This means integrating a wide range of human-centred disciplines into AI development, drawing on engineers, social scientists, philosophers and ethicists, historians, political scientists, cognitive neuroscientists and machine learning specialists.
According to Li, these groups must address important issues such as turning machine bias in areas such as race or gender into ‘machine fairness’ by working to improve fairness in data sets, algorithms, computational theory, decision-making and ethics.
The second principle of HAI is self-explanatory and should quell any lingering societal notion that AI exists to replace humans in the workplace – simply not true! The defining principle of artificial intelligence is, and always has been, to augment and enhance human capabilities and propel society forward, quickly and efficiently. However, it is a point that needs to be insisted upon and actively promoted. AI is here to serve us, and it currently does so by automating repetitive tasks, freeing humans to focus on the complex and creative work that machines cannot yet match. AI is assisting humanity in far-reaching and profound ways, not threatening to replace humans entirely.
The third principle – and the primary aim of HAI – is to develop intelligent machine-learning algorithms that work closely and harmoniously with and for humans. To do this successfully and accurately, AI needs a deep understanding of human intelligence so that it can ‘think’ similarly to us. This is understandably tricky, as human intelligence is a dynamic and complex multi-sensory highway, lit by our vastly differing (and competing) contextual emotional quotients.
Today’s AI is far removed from our human complexity and operates only within prescribed, rules-based parameters, resulting in static and disembodied ‘intelligence’. Li hopes HAI efforts will galvanise the move to bridge this gap by urging developers to create machines inspired by the complexity of human intelligence.
In her op-ed piece in the New York Times, Li explains that humans are born with an innate curiosity which leads us, as babies, to play and tinker with our surroundings. This helps us make sense of each object’s meaning and function, and its place within our wider environment. To mimic this learning style, Li and others hope to develop AI systems that are intrinsically motivated by their environment. The hope is that this approach will yield behavioural patterns that mirror aspects of human learning, such as learning in stages and learning to focus on objects without being directed.
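The idea of intrinsic motivation can be sketched in a few lines. The following toy example (an assumption for illustration, not Li’s actual research method) gives an agent a simple count-based “curiosity” reward: unfamiliar states are worth more than familiar ones, so the agent explores its whole world without any external goal.

```python
# Illustrative sketch of intrinsic motivation (not an actual HAI
# system): an agent rewarded only for novelty ends up exploring
# its entire environment on its own.
from collections import Counter

visit_counts = Counter()

def intrinsic_reward(state):
    # Count-based novelty bonus: the fewer times a state has been
    # seen, the larger the reward for visiting it.
    return 1.0 / (1 + visit_counts[state])

# A toy world of 5 states in a row; at each step the agent moves to
# whichever neighbour currently looks most novel.
state = 0
for _ in range(50):
    visit_counts[state] += 1
    neighbours = [s for s in (state - 1, state + 1) if 0 <= s <= 4]
    state = max(neighbours, key=intrinsic_reward)

# Curiosity alone drives the agent to cover the whole space.
print(sorted(visit_counts))  # [0, 1, 2, 3, 4]
```

The agent is never told where to go; like a baby tinkering with its surroundings, novelty itself is the motivation, which is the behavioural pattern researchers hope to scale up in human-inspired AI systems.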
Proponents of HAI, including Li, are focussed on raising the quality of AI/ML development by creating ways in which machines can learn from humans: asking questions and eliciting answers that reveal the nuanced way we perceive the visual world. Li and her team at the Stanford Institute for Human-Centered Artificial Intelligence are developing AI systems that have learned to ask more detailed and engaging questions, and to respond with specific information based on what they have learned. For example, the team’s AI system can identify a rose or a sunflower specifically, rather than labelling both simply as flowers. While this may seem simple, the research is still in its infancy, and these first results look very promising indeed.
The aim of HAI is clear: to develop AI/ML systems that mimic human intelligence, decision-making and emotional range in order to benefit humanity’s growth, aided by ethical machines. As we look to a more technologically advanced world, with algorithms becoming key drivers of socio-economic growth, it is comforting to know that researchers such as Li are calling for collaboration and ethical review in AI development.