Artificial Intelligence is a powerful technology that is shaping our world. Some AI
systems deliver harmful, discriminatory outcomes that exacerbate societal inequities.
For example, some AI-enabled systems decide who is approved for a loan, who is
selected for a job interview, or whether police are dispatched to certain
neighbourhoods. Institutions rely on these systems, but the technology is far
from neutral or objective. Systemic inequities are encoded into technology, further
amplifying unjust outcomes. The problem worsens as our dependence on technology
grows.
But how, exactly, do ethical issues in AI arise? One way of thinking about this is to
break things down into three areas: data, models and people.
The Devil is in the Data
Data is a key ingredient for AI systems. Some call data “the new oil” because without a lot
of data, AI systems that use machine learning would not be very functional. Decisions
about gathering data have power structures baked into them. For example, if you
design a survey, you control how many questions are asked, which questions are asked,
and whether answers come from drop-down lists or free-text boxes. You also decide how
to reach people to complete your survey, and you may choose to target a certain group
of people. In essence, you determine the purpose and method of data collection, and
your choices become encoded in the process.
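To make that concrete, here is a minimal sketch in Python (the survey fields and answer categories below are hypothetical, invented only for illustration) of how a designer’s drop-down choices permanently limit what a dataset can ever record:

```python
# A hypothetical survey definition. Every design choice here is a
# decision by the survey's creator, encoded into the data forever.
survey = {
    "age_range": {
        "type": "dropdown",
        # Everyone over 44 is collapsed into one bucket: the data can
        # never distinguish a 50-year-old from an 80-year-old.
        "options": ["18-24", "25-34", "35-44", "45+"],
    },
    "occupation": {
        "type": "free_text",  # richer, but harder to analyze consistently
    },
}

def validate(field, answer):
    """Reject any answer the survey designer didn't anticipate."""
    spec = survey[field]
    if spec["type"] == "dropdown" and answer not in spec["options"]:
        raise ValueError(f"{answer!r} is not a permitted answer for {field}")
    return answer

validate("age_range", "25-34")    # accepted
# validate("age_range", "65-74")  # would raise: that category doesn't exist
```

The point is not the code itself but that the designer’s categories, not the respondent’s reality, define what the data can say.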
Historically, we may have more data about some people or things than others because
of these decision-making processes, which leads to gaps in the available data. For
example, medical research has historically collected more data about men than about
women, which means we often lack adequate medical datasets for women. If that
historical data is then used to power an AI system, it may generate biased outcomes.
This is known as algorithmic bias.
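Here is a minimal sketch of how that plays out, using synthetic data (the two groups, the single feature, and the reversed label relationship are all invented for illustration): a model trained mostly on records from one group can look accurate overall while failing the under-represented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One informative feature; the label relationship is reversed
    # between groups, so a model fitted mostly to one group
    # systematically mis-serves the other.
    X = rng.normal(size=(n, 1))
    y = (X[:, 0] > 0).astype(int)
    if flip:
        y = 1 - y
    return X, y

# "Historical" data: 1000 records for group A, only 50 for group B.
Xa, ya = make_group(1000, flip=False)
Xb, yb = make_group(50, flip=True)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

print("accuracy, group A:", model.score(Xa, ya))  # high
print("accuracy, group B:", model.score(Xb, yb))  # low
```

Nothing in the algorithm is malicious; the skewed data alone produces the skewed outcome.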
In our highly digital world, data is being collected all the time. With every online search,
every click, and every keystroke, we contribute to a vast web of data collection. There
is also metadata, or data about data, such as the geolocation data from your phone. This
constant, large-volume data collection raises many concerns about privacy and
consent, which also contribute to ethical issues in AI.
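A hypothetical click-event record (all field names and values below are invented) shows how much metadata can ride along with a single action:

```python
import datetime

# A hypothetical record of one click. The "data" is just the action
# taken; everything else is metadata collected alongside it.
click_event = {
    "action": "button_click",                 # the data itself
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "device": "mobile",
    "approx_location": (53.63, -113.63),      # geolocation metadata
    "session_id": "a1b2c3",                   # links this click to others
}
print(click_event)
```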
Mathematical models are not neutral.
Machine learning models can contain bias. Cathy O’Neil, a well-known data scientist and
author of the book “Weapons of Math Destruction,” says that “models are opinions
embedded in mathematics.” AI developers make many decisions during the design
process, such as choosing which techniques to apply, engineering the features a model
will contain, and setting the hyperparameters, the overarching settings that govern how
a model is trained. Each decision represents a value judgment on the part of the person
making it. For example, using deep neural networks to improve technical accuracy may
reduce how explainable the model is to people. That trade-off is a value judgment that
prioritizes accuracy over explainability. Many decisions like this are made as models are
constructed.
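As an illustration of that trade-off (the synthetic dataset, layer sizes and iteration cap below are arbitrary choices for the sketch, not recommendations), compare an explainable model with a more opaque one on the same task:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
# A made-up label that depends nonlinearly on the features.
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)

# An explainable choice: one readable coefficient per feature.
simple = LogisticRegression().fit(X, y)
print("logistic accuracy:", simple.score(X, y))
print("coefficients (readable):", simple.coef_)

# A typically more accurate but opaque choice: a small neural network.
# The hyperparameters here (layer sizes, iteration cap) are themselves
# design decisions made by the developer.
deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=1).fit(X, y)
print("neural net accuracy:", deep.score(X, y))
# There is no comparably simple summary of what the network learned.
```

Choosing the second model over the first is exactly the kind of value judgment the paragraph above describes: accuracy is prioritized, explainability is given up.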
People who make a technology shape a technology.
There is a body of work demonstrating that the people who make a technology
shape that technology. We see this in game design: female characters in video games
are often scantily clad and sexualized, reflecting the fantasies of their mostly young,
male creators. Most AI developers are white or Asian men from a certain socio-economic
background, and their worldview and values inform the technology. In addition,
larger systems, such as how funding is allocated, can also set the agenda for the types
of technologies that are developed. This was evident in the early days of AI, when much
of the funding came from the military during the 1950s and 1960s. Today, large players
like Google, Amazon, Facebook and Apple are driving the agenda.
This is not an exhaustive list, just a quick overview of how data, models and people
contribute to the ethical issues we see in AI systems.
To learn more, check out the PowerEd AI Ethics Micro-Credential at Athabasca University.
By Katrina Ingram, CEO, Ethically Aligned AI
Sign up for our newsletter to have new blog posts and other updates delivered to you each month!
Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com © 2022 Ethically Aligned AI Inc. All rights reserved.