AI and Ethics

A human in a maze. The image is NOT generated by AI. Licensed from Adobe Stock Images.

Defining Ethics

Ethics refers to a human agent’s willingness to listen to and be subject to the demands of others. Ethical acts are grounded in accounting for a polyphony of voices, including voices once obscured by history.

In the West, ethical considerations are sometimes bound up with utilitarianism, a moral reasoning approach that focuses on the consequences of actions (cost-benefit analysis). In the context of technology, utilitarianism holds that something is ethical if its benefits outweigh its disadvantages.

Tools and machines are inanimate objects and do not possess ethical agency. However, the design, deployment, and use of tools are human actions and should be scrutinized for their ethical implications.

We should use tools ethically. For example, this website makes a point of not using AI-generated images except in very rare instances that demonstrate the harm caused by AI images. This is part of Professor Joubin’s ethical principles. Since AI models cannot acknowledge and give credit to their sources, AI-generated images are unethical. The stock photography company Getty Images sued Stability AI, the maker of Stable Diffusion, for copyright infringement. Artists have developed a new tool that adds invisible watermarks to protect their work from being used to train AI models.

AI and automation may separate us from meaningful work when human agents no longer understand or engage with the underlying processes. Users of technologies tend to become disconnected from society and from their ethical responsibilities when they focus only on deliverables and outputs. A human-centered approach is the first step toward ensuring fairness, equity, and diversity when addressing ethical concerns.

Generative AI tools complicate the algorithm- and inquiry-driven culture we live in. Algorithm-governed inquiries and responses frame our contemporary life, from navigation to scholarly research. One of the most notable features of this type of technology is its natural language interface, which has led to hyperbolic reactions that anthropomorphize the technology, describing ChatGPT and the AI-powered Microsoft Bing as “hallucinating,” “learning,” or “declaring love” (Roose), while neglecting the fact that queries and prompts are themselves new data points to be analyzed. It is more scientifically prudent and meaningful to treat generative AI as what it is: a machine designed to accomplish limited and specific tasks. A more accurate and nuanced description of ChatGPT is that it is an “aesthetic instrument” rather than an instrument of reason or an “epistemological” tool (Bogost). It is a simulacrum machine, a mechanism for synthesizing and simulating social discourses.

Case Study

Here is a mini-documentary by two students who interviewed Prof. Joubin and other students on the topic of AI and Ethics. The interviews took place in 2024. As you watch it, please draft your own answers to the questions discussed in the video, particularly the questions about ethics. 

AI guardrails are safety mechanisms that offer guidelines and boundaries to ensure that AI applications are developed and aligned with ethical standards and societal expectations. Questions of ethics regarding AI resonate with similar questions in other areas, such as equity and fairness.

There are, however, concerns that are unique to machine learning and generative artificial intelligence. The United Nations Educational, Scientific and Cultural Organization (UNESCO), for instance, emphasizes the following ethical principles in the design and deployment of AI models:

  • transparency
  • explainability
  • responsibility (human agents should be responsible)
  • accountability
  • multi-stakeholder collaboration
  • literacy

Read more by visiting UNESCO’s published recommendations page here.

Ethics and AI is a key theme explored by many screen works, such as Westworld, an HBO series co-created by Lisa Joy and Jonathan Nolan (2016-2022). Set in the 2050s, the show imagines the future while reimagining and appropriating the past. The series revolves around the interaction between human tourists and AI-driven biomechanical robots, indistinguishable from humans, in an American Old West-themed park called Westworld. These AI androids are programmed to fulfill the guests’ every desire. Dr. Robert Ford (played by Anthony Hopkins) is the God-like inventor figure who builds and controls the androids. AI produces an environment of simulation in which human fantasies can be played out seemingly without moral consequences.

The series addresses the ethical problem of abusing sentient AI and the question of consciousness. The following 4-minute featurette by Warner Bros., entitled Westworld, Behind the Scenes: The Reality of A.I., takes viewers behind the scenes. Creators Joy and Nolan point out that the “savages” in the theme park are actually the human tourists. The AI simulation brings out the worst in humans.

Your Turn

Let us put what we have learned into practice by drafting our own policy on AI ethics and by critiquing a real-world policy. Identify a policy document that interests you in the University of North Texas’ Artificial Intelligence (AI) Policy Collection.

After reading your chosen document, what would you add to it? Are there areas that could be further fleshed out? How would you write the policy differently? Which domain are you most interested in (education, medicine, law, art, journalism, etc.)?

Your Turn Again: Privacy and Ethics

Use critical AI theory to analyze the following scene on ethics in Ghost in the Shell.

Privacy concerns are at the core of ethical questions about AI data practices. High-quality data is the gold mine of the AI era, because machine learning depends heavily on vetted datasets.

Privacy is defined as an individual’s right to control access to their personal information and body, including how others see, touch, or obtain information about them. In data science, privacy may involve anonymizing particular information, turning it into non-personally identifiable data that cannot be used on its own to identify a person.

Confidentiality refers to an agreement between two agents (individuals, organizations, or an individual and an organization) about the handling of private information. It involves the agreed-upon ways in which a piece of private information is managed or shared.

Privacy and confidentiality become complex terrain when humans form symbiotic relationships with non-human agents such as implants, brain-computer interfaces, and prostheses (as is already the case today). These are the themes explored by Ghost in the Shell (dir. Rupert Sanders, 2017), which follows Mira Killian’s transformation from a human whose body could not be saved after a crash into a “ghost” (soul) inside a cyber-enhanced body. The film is based on the serialized Japanese manga 攻殻機動隊 (1989-1990) and Mamoru Oshii’s 1995 Japanese anime neo-noir cyberpunk film.

Privacy and ethics are the focus of the following scene. At 01:08, Dr. Ouelet repairs Mira’s broken robotic arm and tells her, “I can see everything: all of your thoughts, your decisions.” Shocked, Mira asks rhetorically, “I guess privacy is just for humans?”

Which props and cinematographic techniques are used to heighten the tension between the human doctor and the “ghost” (human mind) in a cyborg body? How does the scene visualize the invasion of privacy?

Pay attention to blocking, the relative position of each character in relation to the others and to the props. Regarding the props, think about objects that seem to denote power (the power to heal, the power to control). Does the scene’s set-up remind you of being in a doctor’s office?

Further Reading

Artificial Intelligence (AI) Policy Collection, University of North Texas

Coeckelbergh, Mark. AI Ethics. MIT Press, 2020.

Floridi, Luciano. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press, 2023.