AI and Social Justice

Technologists often celebrate technological innovations as revolutions that will improve our lives. AI is indeed transformative, but guardrails need to be in place to harness its power to enhance equity and inclusion. Flawed AI algorithms with biases inherited from their training data will (1) lead to low-quality decision-making and (2) reproduce biased data. Multiple mechanisms, such as de-duplication of training data, exist to reduce social biases in the outputs of generative models.
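As a minimal sketch of the de-duplication mechanism mentioned above: repeated (and often biased) examples in a training corpus can be collapsed so that over-represented text does not dominate a model's training distribution. The tiny corpus and the normalization rule here are illustrative assumptions, not any particular system's pipeline.

```python
def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical lines match."""
    return " ".join(text.lower().split())

def deduplicate(corpus: list[str]) -> list[str]:
    """Keep the first occurrence of each normalized line, drop repeats."""
    seen: set[str] = set()
    unique: list[str] = []
    for line in corpus:
        key = normalize(line)
        if key not in seen:
            seen.add(key)
            unique.append(line)
    return unique

# Hypothetical corpus: the repeated sentence would otherwise be
# over-weighted during training.
corpus = [
    "Nurses are women.",
    "nurses are  women.",  # duplicate after normalization
    "Nurses are skilled professionals.",
]
print(deduplicate(corpus))
```

Production systems typically de-duplicate at far larger scale (e.g., with hashing over long text spans), but the principle is the same: removing repeats reduces the weight of any single biased source.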

More importantly, users need critical AI literacy to conduct effective risk management and to discern biases, including their own, when interacting with machine learning models. These meta-cognitive skills are at the core of humanities education, which is why interdisciplinary approaches and interdisciplinary education are the key to social justice in the era of AI.

At the QS Higher Ed Summit, Alexa Alice Joubin addresses social justice, trust formation, AI, meta-cognition, and critical questioning skills. In the following video, she also offers strategies for designing human-in-the-loop social robotics to counter the current corporate fetishization of “speed” at the expense of quality.