AI technology is transforming the world we live in, but it’s not without its ethical challenges. In this article, we’ll explore the ethical implications surrounding artificial intelligence development and usage, such as privacy, bias, and accountability.
We’ll look at real-world examples of artificial intelligence gone wrong and discuss how developers, businesses, and policymakers can work together to create a more responsible and equitable AI future.
What Is Ethics in AI?
Ethics refers to a shared understanding of moral principles: what is right and wrong, and what is fair and unfair. These concepts can apply to individual rights and obligations as well as to communities and even the environment.
But if we already have these largely defined, why do we need specific AI ethics definitions? It’s because artificial intelligence is more than just software that runs on a machine. When decisions are made by AI-powered tools, there are real-world consequences for people, businesses, communities, and the environment.
Naturally, the idea behind using AI tools and machine learning is that they should serve the good of society. But who defines what is good and bad when it comes to AI applications?
AI technologies are advancing at an exponential rate. As a consequence, AI engineers ask, “Can we do it?” long before anyone thinks about “Should we do it?”
And while building functionality into artificial intelligence is often done with the best intentions, not everyone has good intentions. Malicious actors are often the first adopters when it comes to new technology.
But does AI really present a potential real risk to society? What are the risks?
AI Benefits vs. AI Risks
The AI alarmists foresee plenty of ethical issues and the inevitable demise of humans if AI tech continues to grow unabated. AI apologists see nothing but the upside as they excitedly find new applications to throw AI at.
A more balanced approach is to acknowledge the benefits and risks, both present and future.
Besides benefits to business, AI is used on a daily basis to improve the lives of people and communities. Things like better management of ecological resources, city planning, improved hospitals, accurate disaster management, and weather prediction are examples of AI being used for the “greater good.”
The potential ethical risks that artificial intelligence presents are just as real and present as the benefits.
Bias and Discrimination – AI algorithms make decisions based on the data they’re trained on. If there is prejudice or bias present in the data, then the algorithm will perpetuate those biases and even amplify them. AI is currently used to determine things like who gets hired, who gets admitted to a college, or how the law is applied in court.
Lack of Transparency and Explainability – If you don’t know what data the AI was trained on or how it arrives at the decisions it does, then can you trust it? Could a person in authority say, “The AI made me do it,” and shift accountability to an opaque AI algorithm?
Economic Disruption – Does the fact that artificial intelligence can do certain jobs faster and cheaper mean we should replace employees with machines? Some industries will be disproportionately affected by AI tools replacing jobs.
Privacy and Surveillance – With AI tools being connected to the internet, how much data should they be allowed to access? Facial recognition, security cameras, and online tracking of payment methods all raise a slew of ethical privacy concerns. These applications of AI are great for catching criminals, but is it OK to collect the same data on law-abiding citizens?
Dependence on AI – Using AI tools is changing how we think. When we let machines think for us, there’s a risk of losing our own critical thinking skills. Even the impact of generative AI on creating art and literature is fraught with ethical questions.
Ethics and AI – When it All Goes Wrong
Challenging as it may be to prevent them, AI incidents involving ethical or moral errors highlight the need for engineers and data scientists to find better ways to address these issues.
Here are a few intentional and unintentional examples of when AI systems made things a little awkward:
- Facebook had to apologize when its vision model labeled images of black men as “primate.”
- A Chinese application for booking flights and hotels called CTrip recommended the same products at different prices depending on the user’s profile.
- An automated passport-checking robot in New Zealand rejected an Asian man’s passport because it said his eyes were closed.
- In 2020, the YouTube algorithm was found to have disproportionately recommended election fraud content to users who were more skeptical of the election’s legitimacy to begin with.
- A popular chess YouTuber had his channel blocked when one of his videos got flagged for hate speech. His discussion of “black vs. white” is suspected of triggering an overeager AI filter.
- Amazon used an AI hiring tool that was trained on a data set that included mainly male resumes. As a result, it was much less likely to select female candidates when evaluating new applications.
It’s clear that an ethical AI framework is required to stop these kinds of things from happening and to preserve civil society and human dignity.
The Ethical Framework for AI
At UNESCO’s General Conference in November 2021, all 193 member states adopted the Recommendation on the Ethics of Artificial Intelligence. This was the first document to define a global standard for ethics in AI research and development.
It is an extremely detailed document, but in essence, it tries to translate agreed international human rights into the AI space. The intention is that values like inclusivity, diversity, non-discrimination, safety and security, and privacy not be compromised by AI.
Some of the agreed fundamental principles that should govern the ethical framework for AI and guide decisions around ethical dilemmas are:
Human-centered – AI should be applied for the good of humans. It should have a beneficial, positive purpose. As with the doctor’s oath of “Do no harm,” AI should place human life and rights above other considerations.
Fair and Unbiased – AI should promote social well-being by being fair and unbiased.
Privacy and Security – Private information and privacy should not be compromised by AI. This is one of the harder areas to manage, considering the volume of data used to train these models. Even Samsung has struggled to keep its trade secrets from being leaked.
Reliable and Safe – AI tools should provide predictably good outcomes and avoid safety risks. They should also be hardened against harmful use by people with malicious intent. When AI presents something as fact, it should be true; the system should be able to distinguish between what is true and what is merely popular opinion. Poorly trained AI is extremely efficient at generating and spreading fake news.
Transparent and Explainable AI – It should be clear what data is used for training the AI, as well as the decision-making process it follows. If you don’t have a reasonable idea of how an AI comes to a conclusion, then you can’t really trust it or have a basis to question its decision.
Accountability / Governable – An AI system needs to do what we intend it to do. It should always have human oversight with the legal and ethical accountability attributed to the user, not the AI. Accountability should be ensured by due diligence mechanisms like auditability and traceability.
The Challenges of Making Ethics Work with AI
When you look at the framework, you’d probably agree that those are all great guidelines to have AI follow. The problem is that we’re expecting a machine to make moral decisions.
As an example, imagine an autonomous car driving along when a little old lady and her grandchild suddenly step out in front of it. If the car knows it doesn’t have time to brake and must swerve to miss either the old lady or the child, which would it choose? Is there a right answer?
When an AI-powered search engine learns the user intent behind a search like “school girl,” should it continue to perpetuate the stereotype by serving sexualized images just because that’s probably what the person is looking for?
When we make decisions, we use principles to guide us. But how do you instill principles into an AI that is driven by rules and data?
We want AI to be unbiased, but there is always bias in the data, even when it’s unintentional.
If an AI algorithm is trained on data showing bank loans given more often to men than to women, it will optimize to reinforce that bias. Even if the gender of new applicants is removed, the AI will still find some other correlation, and the bias will probably still be reinforced.
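This proxy effect is easy to demonstrate. The sketch below uses entirely made-up synthetic loan data (the feature names, rates, and correlations are illustrative assumptions, not real statistics): a “gender-blind” model is fitted only on a proxy feature that happens to correlate with gender, yet it still reproduces the historical gender gap in approvals.

```python
import random

random.seed(0)

# Hypothetical synthetic loan history (illustrative numbers only).
# "proxy" stands in for some innocuous-looking feature (e.g. an
# occupation code) that happens to correlate strongly with gender,
# and historical approvals were biased in favor of men.
data = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    # Men get proxy=1 with probability 0.9, women with probability 0.1.
    proxy = 1 if (gender == "M") == (random.random() < 0.9) else 0
    # Biased historical decisions: approval depended on gender.
    approved = random.random() < (0.8 if gender == "M" else 0.4)
    data.append((gender, proxy, approved))

# "Blind" model: never sees gender, only learns the historical
# approval rate for each value of the proxy feature.
rate = {}
for p in (0, 1):
    outcomes = [a for g, q, a in data if q == p]
    rate[p] = sum(outcomes) / len(outcomes)

def predict(proxy):
    # Approve whenever the historical approval rate for this proxy
    # value exceeds 50% -- the model has optimized to fit its data.
    return rate[proxy] > 0.5

# Despite gender being removed, approval rates still split by gender,
# because the proxy carries the gender signal.
for g in ("M", "F"):
    preds = [predict(q) for gg, q, _ in data if gg == g]
    print(g, round(sum(preds) / len(preds), 2))
```

Because the proxy is informative about the biased historical outcome, any model that optimizes for accuracy on that history will rediscover the bias; simply deleting the protected attribute does not remove it.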
Business interests need to temper the desire for profit with the need for ethics. And somehow, AI tools need to build that into their optimizations.
The Weird Future of AI Ethics
As much as human ethics is appropriately at the forefront of discussions around AI, there are some weird ethical conversations to be had about AI rights too.
A former Google engineer, Blake Lemoine, claimed that Google’s Large Language Model, LaMDA, is sentient. While that may be a stretch, how far off are we from that reality? Should AIs be afforded more rights the closer they get to sentience?
Is it “right” to switch off or reformat the hard drive of an AI machine that demonstrates something approaching a personality?
Who owns the copyright of a poem, a book, or a movie script written by a generative AI?
For AI to truly benefit humanity, ethical considerations need to be addressed early in the design phase of new AI tools, not retrospectively as a response to embarrassing incidents.
And if we want to make sure that AI plays nice and doesn’t rise up against humans one day, then we’re probably going to need to address the potential rights of AI machines pretty soon too.