AI and Ethics


Defining Ethics

Ethics refers to a human agent’s willingness to listen to and be subject to the demands of others. Ethical acts are based on accounting for a polyphony of voices, including voices once obscured by history.

In the West, ethical considerations are sometimes bound up with utilitarianism, a moral reasoning approach that focuses on the consequences of actions (a form of cost-benefit analysis). In the context of technology, utilitarianism holds that something is ethical if its benefits outweigh its disadvantages.

AI and automation may separate us from meaningful work. When human agents no longer understand or engage with processes, they tend to become disconnected, focusing only on deliverables and outputs. A human-centered approach is the first step toward ensuring fairness, equity, and diversity when addressing ethical concerns.

Generative AI tools complicate the algorithm- and inquiry-driven culture we live in. Algorithm-governed inquiries and responses frame contemporary life, from navigation to scholarly research. One of the most notable features of this type of technology is the natural language interface. This has led to hyperbolic reactions that anthropomorphize the technology, describing the AI as “hallucinating,” “learning,” or “declaring love” in reference to ChatGPT and the AI-powered Microsoft Bing (Roose), while neglecting the fact that queries and prompts themselves become new data points to be analyzed.

It is more scientifically prudent and meaningful to treat generative AI as what it is: a machine designed to accomplish limited and specific tasks. A more accurate and nuanced description of ChatGPT is that it is an “aesthetic instrument” rather than an instrument of reason or an “epistemological” tool (Bogost). It is a simulacrum machine, a mechanism for synthesizing and simulating social discourses.

Case Study

Here is a mini-documentary by two students who interviewed Prof. Joubin and other students on the topic of AI and Ethics. As you watch it, please draft your own answers to the questions discussed in the video.

AI guardrails are safety mechanisms that set guidelines and boundaries to ensure that AI applications are developed and aligned with ethical standards and societal expectations. Questions of ethics regarding AI resonate with similar questions in other areas, such as equity and fairness.

There are, however, concerns that are unique to machine learning and generative artificial intelligence. The United Nations Educational, Scientific and Cultural Organization (UNESCO), for instance, emphasizes these ethical principles regarding the design and deployment of AI models:

  • transparency
  • explainability
  • responsibility (human agents should be responsible)
  • accountability
  • multi-stakeholder collaboration
  • literacy

Read more by visiting UNESCO’s published recommendations page here.


Your Turn

Let us put what we have learned into practice by drafting our own policy on AI ethics and by critiquing a real-world policy. Identify one of the policy documents that interest you in the University of North Texas’ Artificial Intelligence (AI) Policy Collection, such as:

After reading your chosen document, what would you add to it? Are there areas that could be further fleshed out? How might you write a policy differently? Which domain are you most interested in (education, medicine, law, art, journalism, etc.)?


Further Reading