Trustworthy AI

Multilingual AI Teaching Assistant Trained on the Content of this Website.

This website is a beta test of trustworthy, multilingual, generative artificial intelligence (AI) in higher education. Trustworthiness is defined, following the National Institute of Standards and Technology (NIST), in terms of a model’s explainability, interpretability, accountability, and transparency.

Explainability:  The AI tutor’s operation is explainable through Retrieval-Augmented Generation (RAG): it draws answers only from the crawled data of this website, which serves as a pre-set boundary. Its behavior is further interpretable through iterations of custom prompts running in the background and a custom dataset (this website’s content).
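To make that boundary concrete, here is a minimal TypeScript sketch of a query-time RAG loop, assuming a pre-built index of website passages with pre-computed embeddings. The model names, the Passage shape, and the answerFromSite function are illustrative assumptions, not the site’s actual code.

```ts
// Sketch of the query-time RAG loop (illustrative, not the site's actual code).
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface Passage {
  text: string;        // a chunk of this website's crawled content
  embedding: number[]; // its pre-computed embedding
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Embed the question, rank passages, and answer only from the top matches,
// enforcing the website content as a pre-set boundary.
export async function answerFromSite(
  question: string,
  index: Passage[],
  k = 4,
): Promise<string> {
  const q = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed model choice
    input: question,
  });
  const qVec = q.data[0].embedding;

  const context = index
    .map((p) => ({ p, score: cosine(qVec, p.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.p.text)
    .join("\n---\n");

  const chat = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    messages: [
      {
        role: "system",
        content:
          "Answer only from the provided website excerpts. " +
          "If the answer is not in them, say so.",
      },
      { role: "user", content: `Excerpts:\n${context}\n\nQuestion: ${question}` },
    ],
  });
  return chat.choices[0].message.content ?? "";
}
```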

Interpretability:    The embedded chat bubble interface is programmed in HTML. The chatbot’s operational information and training data are managed in a database hosted on Vercel. When users interact with the AI tutor chatbot, the server hosting this site makes calls to OpenAI to generate the chatbot’s responses. The same server also hosts the Next.js application for the chatbot interface.
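The server-side step might look like the following hypothetical Next.js App Router handler; the route path, payload shape, and model name are assumptions. The embedded bubble posts the conversation, the server forwards it to OpenAI, and only the reply comes back.

```ts
// app/api/chat/route.ts: hypothetical path for a minimal Next.js handler.
import OpenAI from "openai";
import { NextResponse } from "next/server";

const client = new OpenAI();

export async function POST(req: Request) {
  // Chat history sent by the embedded chat bubble.
  const { messages } = await req.json();

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    messages,
  });

  // Only the reply is returned; nothing is written to storage,
  // consistent with the no-data-retention policy described below.
  return NextResponse.json({ reply: completion.choices[0].message });
}
```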

Transparency:   Akhilesh Rangani used JavaScript, Prisma, TypeScript, and the Next.js framework to build the trustworthy AI features of this website. The model was trained on a pre-defined dataset containing the content of this online textbook and uses OpenAI’s APIs to craft responses.
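That pre-defined dataset could be prepared in an offline step like the sketch below, which chunks the crawled textbook pages and embeds each chunk once to build the index the retrieval loop consults. The chunk sizes, the buildIndex function, and the model name are assumptions.

```ts
// Hypothetical offline indexing step: chunk and embed the crawled pages.
import OpenAI from "openai";

const client = new OpenAI();

// Split a page into overlapping chunks so passages are not cut mid-thought.
function chunk(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

export async function buildIndex(pages: string[]) {
  const texts = pages.flatMap((p) => chunk(p));
  const res = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed model choice
    input: texts, // embeddings come back in input order
  });
  // Pair each chunk with its embedding; in practice this index would be
  // persisted, e.g. in the Vercel-hosted database mentioned above.
  return texts.map((text, i) => ({ text, embedding: res.data[i].embedding }));
}
```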

The development of the multilingual AI Teaching Assistant has also benefited from the work of Ananya Lal as a beta-tester.

Ethics:    This project emphasizes AI and social justice. The use of AI on this site, from search algorithms and the AI reader to the AI tutor, models best practices in academic pursuits. Our system does not store user data: when users interact with the AI tutor on this site, the messages exchanged between the chatbot and the user are deleted as soon as the user closes the chatbot screen.
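That deletion guarantee can be pictured with a short client-side sketch in which the transcript lives only in browser memory and is discarded the moment the chat screen closes; the function names and endpoint here are hypothetical, not the site’s actual code.

```ts
// Hypothetical client-side sketch: the transcript is kept in memory only.
interface Message {
  role: "user" | "assistant";
  content: string;
}

let transcript: Message[] = []; // never written to a database or cookie

// Send a user message and append the assistant's reply to the transcript.
export async function send(content: string): Promise<Message> {
  transcript.push({ role: "user", content });
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: transcript }),
  });
  const { reply } = (await res.json()) as { reply: Message };
  transcript.push(reply);
  return reply;
}

// Closing the chat screen drops every message immediately.
export function closeChat(): void {
  transcript = [];
}
```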


Alexa Alice Joubin at the stakeholders' meeting, GW Trustworthy AI Initiative. Photo credit: Alexa Alice Joubin

Areas for Further Investigation:      How does trust, or the lack thereof, affect learning outcomes? How does trust form among human agents? How does trust form between human agents and tools? How might educators manage over-trust and under-trust?

It is important to include humanistic perspectives in current debates about generative AI because the humanities offer effective tools for examining trust and society’s relationship with technology. Maintaining scholarly integrity benefits the public and upholds the trustworthiness of research.

This website exemplifies public interest technology (PIT). Generative artificial intelligence (AI) is one of the most significant forms of public interest technology. Driven by machine learning models, these text-and-image-generating mechanisms impact all sectors of our society. Algorithm-governed inquiries and responses frame our contemporary life from navigation to higher education.

As the founding co-director of the GW Digital Humanities Institute, leader of several large-scale public interest technology (PIT) projects, PI or co-PI of several external PIT grants, and the inaugural Public Interest Technology Scholar, Alexa Alice Joubin has worked closely with colleagues and students across STEM, the humanities, and the interpretive social sciences. Together they advance humanistic AI, data justice, bias detection, the relationship of digital cultures to disability cultures, and the empowerment of minority students through public interest technology.

Finch (played by Tom Hanks), the sole post-apocalyptic survivor, builds a humanoid robot to keep him company in Miguel Sapochnik's film Finch (2021). They develop a bond.