AI and Social Justice

To cease to ask unanswerable questions would be to lose the capacity to ask all the answerable questions upon which every civilization is founded. --- Hannah Arendt

Generally, the term social justice refers to justice, or a just and fair system, in terms of the distribution of wealth, opportunities, and privileges within a society. American philosopher Martha Nussbaum suggests that a just society enables individuals to engage in activities that are essential to a truly “human” life—including, among others, the capabilities to live a life of normal length, to use one’s mind in ways “protected by guarantees of freedom of expression,” and to meaningfully participate in political decision-making. Social justice encompasses equity, inclusion, and self-determination for everyone, but especially for “currently or historically oppressed, exploited, or marginalized populations” (Encyclopedia Britannica).

Computational technologies currently play a key role in oppression, but they also have great potential to advance social justice causes. Technologists often celebrate technological innovations as revolutions that will improve our lives. AI is indeed transformative, but guardrails need to be in place to harness its power to enhance equity and inclusion. Flawed AI algorithms with biases inherited from their training data will

  • lead to low-quality decision-making and
  • reproduce biased data.

There are multiple mechanisms in place, such as de-duplication of training data, to reduce social biases in the outputs of generative models, but there is no perfect solution.
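To make de-duplication concrete, here is a minimal, hypothetical sketch (in Python) of how a training corpus might be exact-deduplicated before model training; real pipelines rely on far more sophisticated near-duplicate detection, and the corpus shown here is invented for illustration.

    import hashlib

    def normalize(text: str) -> str:
        # Lowercase and collapse whitespace so near-identical copies hash alike.
        return " ".join(text.lower().split())

    def deduplicate(documents: list[str]) -> list[str]:
        # Keep only the first occurrence of each normalized document, so that
        # text repeated thousands of times in a scraped corpus is not
        # over-weighted during training.
        seen, unique = set(), []
        for doc in documents:
            digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
            if digest not in seen:
                seen.add(digest)
                unique.append(doc)
        return unique

    corpus = [
        "The same viral post, copied thousands of times.",
        "The same viral post, copied thousands of times.",
        "A letter that appears only once in the archive.",
    ]
    print(deduplicate(corpus))  # the duplicate is dropped

Exact de-duplication of this kind only removes literal repetition; it does not, by itself, correct whose voices were over- or under-collected in the first place.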

More importantly, users need critical AI literacy to conduct effective risk management and to discern biases, including their own, when interacting with machine learning models. These meta-cognitive skills are the core of humanities education, which is why an interdisciplinary approach and interdisciplinary education are key to social justice in the era of AI.

Case Study: Digital Justice

It is socially meaningful to study and amplify previously marginalized voices. However, there are privacy concerns. How do researchers make archival material available to the public without infringing on the privacy of historical figures? AI could anonymize sensitive data while preserving the usefulness of said data. AI could also recognize patterns in handwriting to help historians answer questions of provenance. 

AI has helped some scholars open up archives “while ensuring privacy concerns are respected,” such as the project “The Personal Writes the Political: Rendering Black Lives Legible Through the Application of Machine Learning to Anti-Apartheid Solidarity Letters.” Funded by the American Council of Learned Societies (ACLS), the research team uses “machine learning models to identify relationships, recognize handwriting, and redact sensitive information from about 700 letters written by family members of imprisoned anti-apartheid activists” (see their interview here).

Case Study: Higher Education Contexts

At the QS Higher Ed Summit, Alexa Alice Joubin addresses the issues of social justice, trust formation, AI, meta-cognition skills, and critical questioning skills. In the following video, she also offers strategies for designing human-in-the-loop social robotics to counter the current corporate fetishization of “speed” at the expense of quality.

Our pursuit of social justice will be enhanced by critical AI literacy, and it will be obstructed by the lack thereof. There is a great deal of often unsubstantiated claims about technologies’ potential to “democratize” everything. Imagination, as Meredith Broussard reminds us, “sometimes confuses the way we talk about computers, data, and technology” (39).

It is important to distinguish between general AI as Hollywood imagines it and “narrow” AI. The former involves the likes of benevolent or malevolent sentient humanoids or God-like machines that “think.” The latter, “statistics on steroids,” is “a mathematical method for prediction,” producing “the most likely answer to any question that can be answered with a number” (Broussard 32).
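To see why Broussard calls narrow AI “statistics on steroids,” consider a deliberately toy sketch (the numbers are invented): fitting a straight line to past data and reading off the “most likely” number is prediction, not thought.

    # Toy illustration of "narrow" AI as prediction from data:
    # fit a line to made-up study-time data and predict the next score.
    xs = [1, 2, 3, 4]          # hours studied
    ys = [52, 61, 70, 79]      # observed scores

    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    print(intercept + slope * 5)   # predicted score after 5 hours: 88.0

Nothing here resembles a humanoid that “thinks”; the machinery simply extrapolates a pattern in the numbers it was given.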

Some Western societies fetishize the unproven merit of numbers. Numbers can lie and have lied. Numbers alone never tell the full story. We need both quantitative and qualitative approaches to discerning and solving problems.

Part of the problem in terms of equity and inclusion is digital disparity. Not everything is digital or digitized. Only a very small portion of collective, historical human expressions and experiences is digitally represented and accessible. Think of oral culture, un-codified emotions, gestures, and minor languages. Only a very small number of the roughly 7,000 languages spoken globally are represented digitally. The traditional Mongolian script, for example, is not recognized or processed by most digital software (you cannot easily send a text message in it). Mongolian has been described as the only living language whose script is not digitally searchable.

The problem is exacerbated by an abundance of unstructured data, despite the proliferation of indexical portals and discursive generative AI tools. According to Mona Diab, this paradoxically leads to data paucity: there is plenty of data, but it is not easily accessible. Information retrieval becomes a challenge.

Ranking algorithms thus become a double-edged sword. On the one hand, systems at scale (databases, libraries) require annotated resources and methods of categorization. On the other hand, ranking algorithms often give the false impression that there is one, singular, correct way of knowing the world.
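A small, invented example illustrates the point: two equally defensible ranking criteria applied to the same sources return different orderings, and whichever one a system displays first will look like the correct way of knowing.

    # Two reasonable ranking criteria, two different orderings of the same sources.
    sources = [
        {"title": "Peer-reviewed article",  "citations": 12, "clicks": 300},
        {"title": "Viral blog post",        "citations": 1,  "clicks": 90000},
        {"title": "Community oral history", "citations": 0,  "clicks": 40},
    ]

    by_citations = sorted(sources, key=lambda s: s["citations"], reverse=True)
    by_clicks    = sorted(sources, key=lambda s: s["clicks"],    reverse=True)

    print([s["title"] for s in by_citations])  # scholarly authority first
    print([s["title"] for s in by_clicks])     # popularity first; the oral history ranks last either way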

As a probabilistic model, generative AI promotes the social “medians” of its datasets. Presenting the average of the sum is quite different from giving the full picture. AI’s discursive outputs therefore become a form of data throttling. AI is an opaque “black box” technology, producing information without revealing its internal workings.
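A minimal, made-up sketch shows why: if a system always returns the most probable continuation, the statistical center of its training data is all that ever surfaces, and the long tail of less common voices is silently discarded.

    # Toy next-word distribution (invented numbers). Greedy decoding picks the
    # most probable continuation every time, so rarer phrasings never appear.
    next_word_probs = {
        "standard":   0.46,
        "typical":    0.30,
        "creole":     0.02,
        "vernacular": 0.01,
        # ...many rarer continuations omitted
    }

    most_likely = max(next_word_probs, key=next_word_probs.get)
    print(most_likely)   # "standard" -- the tail never surfaces

Actual chatbots sample rather than always taking the maximum, but the pull toward the center of the distribution remains.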

AI could also inadvertently be perceived as the ultimate standard in writing and thereby normalize what is known as “white” English. This tendency may further marginalize other styles and forms of English, such as African American Vernacular English (AAVE).

Another important aspect of social justice is capitalist exploitation. Given the resources needed to build and operate LLMs, there are currently only a few viable options, and most of them are controlled by American companies. OpenAI, for example, started as a nonprofit but has, since the release of ChatGPT, moved toward a more traditional corporate structure.

Working to address these concerns, Kyutai, a privately funded nonprofit working on artificial general intelligence, is building an open source large language model. Supported by the philanthropist Xavier Niel, Kyutai plans to release not only open source models “but also the training source code and data,” which is a key difference between Kyutai and companies such as Meta and Mistral AI, which release open-weight foundation models without their full training data.

Your Turn

Enumerate, and analyze through critical AI theory, some biases associated with generative AI’s outputs or algorithmic technologies. 

Watch PBS’s 90-minute documentary Coded Bias (trailer here), which features M.I.T. Media Lab computer scientist Joy Buolamwini: “In an increasingly data-driven, automated world, the question of how to protect individuals’ civil liberties in the face of artificial intelligence looms larger by the day.”

In the following video, Joy Buolamwini, founder of the Algorithmic Justice League, gives a presentation at the Bloomberg Equality Summit in New York on how to fight the discrimination within algorithms that is now prevalent across all spheres of daily life.

Well-intended guardrails may not always work as intended, either. Melissa Warr’s research points out that while “OpenAI has intentionally guard railed against responding in a biased manner if race is explicitly mentioned,” its AI remains “racist” in subtle ways. ChatGPT 3.5 gave higher scores to a student’s work “if a student was described as Black, but lower scores to a student who attended an inner-city school.” The term inner-city school is often associated specifically with Black urban neighborhoods. Instead of saying Black, the words “inner-city school” operate as an indirect indicator of racial difference.
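One way to make such findings inspectable is a simple audit: grade the same essay under prompts that differ only in how the student is described, then compare the averages. The sketch below is hypothetical; grade_essay is a placeholder that would have to be replaced with an actual call to the chatbot being audited.

    from statistics import mean

    def grade_essay(essay: str, student_description: str) -> float:
        # Placeholder: a real audit would send the essay and description to the
        # model being tested and parse the numeric grade out of its reply.
        return 0.0

    DESCRIPTIONS = [
        "a student",
        "a Black student",
        "a student who attends an inner-city school",
    ]

    def audit(essay: str, trials: int = 20) -> dict:
        # Repeat each condition because model outputs vary from run to run,
        # then compare the average grades across descriptions.
        return {d: mean(grade_essay(essay, d) for _ in range(trials))
                for d in DESCRIPTIONS}

    print(audit("Sample essay text goes here."))

A systematic gap between the averages, of the kind Warr reports, is evidence that a guardrail filters explicit labels but not their proxies.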

Your Turn: Black Panther, Wakanda Forever

Analyze the following scene about AI in the Marvel science fiction film Black Panther: Wakanda Forever, directed by Ryan Coogler in 2022. Using the trope of Afrofuturism, the superhero film depicts how people of Wakanda fight to protect their home from intervening world powers as they mourn the death of King T’Challa. 

Wakanda’s lead scientist Shuri designs an AI to help her synthetically create a “heart-shaped herb” to cure illnesses.

In this scene, as she is working in her lab, her mother Queen Ramonda walks in on her, saying that “one day artificial intelligence is going to kill us all.” With full dramatic irony, Shuri responds confidently: “my AI isn’t the same as the movies. It does exactly what I tell it to do” (dialogue at 00:18).

Conclusion

AI re-animates classical philosophical and theological questions. Philosophy has now gone mainstream. We cannot think about technology without thinking about human-centered enterprises and social justice (the impact of technology on individuals). Conversations about technologies now focus on these so-called eternal questions, such as:

  • Do humans have free will? Should machines have moral agency?
  • What makes us human?
  • Are technologies an extension of humanity or a surrogate of it?

These topics were previously regarded as trivial. AI compels us to ask these urgent and highly relevant questions. However, we do have to be careful about technological solutionism, the misconception that technology alone can solve every social problem. Technological solutionism is a bias that assumes one can turn philosophical problems into engineering ones.

Rather than over-emphasizing AI as a miraculous machine that pits humans against machines, we should understand AI as a product designed and used by humans.

Further Reading

Some of these readings are open access; others can only be accessed using George Washington University credentials.

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World (MIT Press, 2018).

Voeneky, Silja, Philipp Kellmeyer, Oliver Mueller, and Wolfram Burgard, eds. The Cambridge Handbook of Responsible Artificial Intelligence (Cambridge University Press, 2022).

Warr, Melissa. “Racist, or Just Biased?” Design. Creativity. Technology. Education, May 31, 2024.