Find out everything you need to know about Artificial Intelligence and Automation, and their applications in the working world with our new book, Working Machines – An Executive’s Guide to AI and Intelligent Automation. It is available on Amazon, Google Books, Apple Books and Audible.
In the last five years alone, several organisations, committees and governmental bodies have been created to investigate the ethics of developing Artificial Intelligence technology. To a large degree, they have tried to establish guiding principles for deciding whether a specific form of AI technology is more likely to do good than harm. However, unintended consequences still slip through the cracks. Generally speaking, private companies are leading the development of AI technology, and they arguably put profits before any consideration of the implications this technology may have on people's rights. As such, concerns have arisen globally not only about how this technology will be used, but also about who gets to make that decision.
Technology companies have been collecting data from their consumers for years. In previous articles we've spoken about recommendation systems and tagging people on social media, and we've hinted at how Alexa learns your preferences every time you interact with her. But big data also powers other types of AI systems and solutions.
Big data and AI usually come together to let an AI system make predictions based on the reams of data it is given to analyse. While there are examples of this being implemented to great success, including helping the medical industry diagnose cancer more easily, there are also cases where it has gone very wrong. One of the most famous examples was when Reuters discovered that Amazon's recruiting engine simply didn't like women. Amazon's engineers had trained the machine on resumes submitted mostly by male programmers, so when it came time to screen candidates, the system automatically penalised any resume that mentioned a women's college or other female-focused organisations. This leads us to the next core issue that has come to the forefront: bias.
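To make the mechanism concrete, here is a minimal, hypothetical sketch of how this happens. It is not Amazon's actual system; the features, data and model are all invented for illustration. A classifier trained on skewed historical hiring decisions learns to penalise a feature that acts as a proxy for gender, because that is exactly the pattern the historical data contains:

```python
# Hypothetical sketch: a model trained on skewed historical hiring data
# learns to penalise a gendered proxy feature. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, and a proxy flag such as
# "attended a women's college" (1 = yes).
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)

# Biased historical labels: past recruiters hired largely on experience,
# but systematically passed over candidates with the proxy flag set.
hired = ((experience + rng.normal(0, 1, n) > 5) & (womens_college == 0)).astype(int)

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the proxy feature comes out strongly negative:
# the model has absorbed the historical bias, not discovered merit.
print("experience coefficient:      %+.2f" % model.coef_[0][0])
print("women's-college coefficient: %+.2f" % model.coef_[0][1])
```

The point of the sketch is that nothing in the code is malicious; the bias arrives entirely through the labels the model is asked to imitate.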
Human beings are flawed and predisposed to several biases, conscious or unconscious. For example, we tend to fall into similarity bias when choosing whom to hire or do business with. Because AI and technology are logic-based, it was initially assumed that AI solutions would be free of such biases. That assumption held until high-profile examples of bias in AI, like what happened at Amazon, began to surface.
There have been several examples where human bias has infiltrated AI programmes and produced sexist, ageist or racist outputs from a system's recognition or predictive capabilities. Companies developing this technology therefore have a clear obligation to ensure that the training data being fed to the AI contains no hidden bias, and to keep auditing and amending that training material as problems arise.
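What might such an audit look like in practice? One simple, widely used check, sketched below under the assumption that you can label your examples by group, is to compare selection rates across groups and flag large disparities (a demographic-parity check, often tested against the "four-fifths" rule; the 0.8 threshold here is an illustrative convention, not a legal standard):

```python
# Minimal fairness-audit sketch: compare selection rates across groups
# in training data or model output. The group labels, sample data and
# 0.8 threshold (the "four-fifths rule") are illustrative assumptions.
from collections import defaultdict

def selection_rates(examples):
    """examples: iterable of (group, selected) pairs, selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, sel in examples:
        totals[group] += 1
        selected[group] += sel
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

data = [("men", 1), ("men", 1), ("men", 0),
        ("women", 0), ("women", 0), ("women", 1)]
rates = selection_rates(data)
print(rates)                  # {'men': 0.67, 'women': 0.33} (approx.)
print(flag_disparity(rates))  # ['women']
```

A check like this is a starting point rather than a guarantee: it can reveal that something is skewed, but deciding why, and what to amend, still requires human judgement.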
Transparency has become one of the core issues in the conversation around the ethics of developing certain AI technologies. If we're honest, the technology and its algorithms are not easy to understand, so to a large degree the layperson will never truly know how AI 'works'. But that doesn't mean there shouldn't be governance around its development, with rules in place that help people understand how decisions are reached, especially when predictive algorithms are helping you make decisions that are 'right' for you.
In fact, there are AI projects, either in progress or already out in the world, that hold pretty high stakes: identifying children in need of additional care, scoring schools, calculating fire risks and selecting food outlets for health and safety inspections. This means we need a better understanding of how these decisions are being made, so that, as the general public, we are able to form informed opinions about their accuracy.
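There is no single standard for what "transparent" means here, but one hedged illustration of what it can look like in practice: with an interpretable model, each individual decision can be decomposed into per-feature contributions that a non-specialist can inspect. The fire-risk features and data below are hypothetical, chosen only to echo the examples above:

```python
# Illustrative sketch of one form of transparency: decomposing a linear
# model's decision into per-feature contributions. The fire-risk features
# and training data are synthetic stand-ins, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["building_age", "past_incidents", "inspection_gap_years"]

# Synthetic training data standing in for a real fire-risk dataset.
X = rng.normal(0, 1, (500, 3))
y = ((X @ np.array([0.8, 1.5, 0.6]) + rng.normal(0, 1, 500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    """Show how much each feature pushed this prediction up or down."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>22}: {c:+.2f}")
    print(f"{'intercept':>22}: {model.intercept_[0]:+.2f}")

explain(X[0])  # a plain-language breakdown of one 'risk score'
```

More complex models need heavier machinery to produce this kind of breakdown, which is precisely why transparency has to be designed in rather than bolted on afterwards.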
In the last few years, several organisations have been established to ensure that AI technology is more closely governed. A report released by the Nuffield Foundation, titled Ethical and Societal Implications of Algorithms, Data and Artificial Intelligence, found three core issues that could stall the global governance of AI, all of them particularly human in nature:
Let's assume that governments, and not companies, have taken charge of the governance of AI in its entirety, but that each government runs its own AI programme. Every country and its government is run differently. Just as values and the understanding of what is fair and right vary from person to person, they vary from country to country.
For example, China and England view privacy very differently. In fact, cultural differences around privacy are stark: Western culture champions privacy, whereas in some Asian countries privacy is akin to secrecy, which is frowned upon.
If the key issues at play include those mentioned above, namely transparency, bias, fairness and privacy, it seems pertinent to consider the kinds of questions that may help shape public opinion on how these elements will be governed. You may want to ask:
If, as an organisation, you approach your AI projects with these questions in mind, it stands to reason that you will develop responsible solutions that contribute to society rather than hinder freedoms and create pseudo-police states. It is therefore important to be cognisant of the mistakes already made, to learn from them, and simply to do better.
We’re working on an exciting new project, so if you’re interested in reading more about AI, its uses and how to implement your own AI strategy, submit your details below and we’ll keep you in the loop on our latest developments.
Working Machines – An Executive's Guide to AI and Intelligent Automation takes a look at how the renewed vigour behind the development of Artificial Intelligence and Intelligent Automation technology has begun to change how businesses operate.