Trustworthy AI

Multilingual AI Teaching Assistant Trained on the Content of this Website.

This website is a beta test of trustworthy, multilingual, generative artificial intelligence (AI) in higher education. The proprietary, custom-trained AI Teaching Assistant is not only multilingual but adaptive: it adapts its answers to match students’ levels, taking cues from how questions are phrased. It can also engage in role-playing to personalize learning.

Here are some tips to get the most out of the AI Teaching Assistant. Since it generates prompt-based outputs, you can use “magic words” such as “according to Professor Joubin,” “according to the chapter on __,” “according to the screenplay of ___,” or “in this course” to retrieve specific information from the dataset the AI is trained on. These magic words cue the AI to retrieve context-specific information about a critical concept.

The AI assistant’s dynamic and multilingual outputs are tailored to each student’s level, taking a cue from the ways in which students frame their questions. Each student thus receives a personalized learning experience suited to their individual abilities and learning styles. Research shows that personalized learning increases student motivation and engagement with course material.

Trustworthiness is defined, following the National Institute of Standards and Technology (NIST), in terms of a model’s explainability, interpretability, accountability, and transparency.

Explainability: The AI tutor’s operation is explainable through the Retrieval-Augmented Generation (RAG) method. It draws answers from the crawled data of this website, as well as transcripts of the site’s videos, within a pre-set boundary. Further, the AI tutor’s operation is interpretable through iterations of custom prompts in the background and custom datasets (this website’s content).
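The RAG flow described above can be sketched as two steps: retrieve the most relevant passages from the website’s content, then ground the model’s prompt in them. This is a minimal illustration only; the site’s actual chunking and ranking method is not specified here, so a simple word-overlap score stands in for real similarity search.

```typescript
// Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
// A word-overlap score stands in for the production system's similarity search.

type Chunk = { source: string; text: string };

function tokenize(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z]+/g) ?? []);
}

// Score each chunk by how many query words it shares, and return the top k.
function retrieve(query: string, corpus: Chunk[], k = 2): Chunk[] {
  const q = tokenize(query);
  return corpus
    .map((c) => {
      let score = 0;
      for (const w of tokenize(c.text)) if (q.has(w)) score++;
      return { c, score };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.c);
}

// Retrieved chunks are prepended to the model prompt as the "pre-set boundary":
// the model is instructed to answer only from this context.
function buildPrompt(query: string, chunks: Chunk[]): string {
  const context = chunks.map((c) => `[${c.source}] ${c.text}`).join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}
```

Because answers are drawn from retrieved course content rather than the model’s open-ended training data, each response can be traced back to a source passage.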

Interpretability: The embedded chat bubble interface is programmed in HTML. The chatbot’s operational information and training data are managed in a Vercel database. When users interact with the AI tutor chatbot, the server hosting this site makes calls to OpenAI to facilitate the chatbot interactions. The same server also hosts the Next.js application for the chatbot interface.
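The server-side call flow described above can be sketched as follows: the server assembles the course content and conversation into a request and forwards it to OpenAI’s chat completions endpoint. The endpoint shape follows OpenAI’s public API; the model name and system prompt here are illustrative assumptions, not the site’s actual configuration.

```typescript
// Sketch of the server-to-OpenAI call flow. The model name and system prompt
// are assumptions for illustration; the site's real configuration may differ.

type Message = { role: "system" | "user" | "assistant"; content: string };

// Pure helper: build the request body, grounding the model in site content.
function buildRequestBody(
  siteContext: string,
  history: Message[],
  userMessage: string
) {
  const messages: Message[] = [
    {
      role: "system",
      content: `You are a teaching assistant. Answer only from this course content:\n${siteContext}`,
    },
    ...history,
    { role: "user", content: userMessage },
  ];
  return { model: "gpt-4o-mini", messages };
}

// On the server (e.g., inside a Next.js route handler), the body is POSTed
// to OpenAI's chat completions endpoint.
async function askTutor(apiKey: string, body: object): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Keeping the API key and the OpenAI call on the server, rather than in the browser, is what lets the embedded chat bubble remain a thin HTML/JavaScript interface.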

Transparency: Akhilesh Rangani used JavaScript, Prisma, TypeScript, and the Next.js framework to design the trustworthy AI features for this website. The model was trained on a pre-defined dataset containing the content of this online textbook and uses OpenAI’s APIs to craft responses.

Ethics: This project emphasizes AI and social justice. The use of AI on this site, from search algorithms and the AI reader to the AI tutor, models best practices in academic pursuits. Our system does not store user data. When users interact with the AI tutor on this site, the messages exchanged between the chatbot and the user are deleted as soon as the user closes the chatbot screen. User interaction with the AI is not used for training.

 

Alexa Alice Joubin at the stakeholders' meeting, GW Trustworthy AI Initiative. Photo credit: Alexa Alice Joubin

Areas for Further Investigation: How does trust, or the lack thereof, affect learning outcomes? How does trust form among human agents? How does trust form between human agents and tools? How might educators manage over-trust and under-trust?

It is important to include humanistic perspectives in current debates about generative AI, because the humanities offer effective tools to examine trust and society’s relationship with technology. Maintaining scholarly integrity benefits the public and upholds research trustworthiness.

This website exemplifies public interest technology (PIT). Generative artificial intelligence (AI) is one of the most significant forms of public interest technology. Driven by machine learning models, these text-and-image-generating mechanisms impact all sectors of our society. Algorithm-governed inquiries and responses frame our contemporary life from navigation to higher education.

As the founding co-director of the GW Digital Humanities Institute, leader of several large-scale public interest technology (PIT) projects, the PI and co-PI of several external PIT grants, as well as the inaugural Public Interest Technology Scholar, Alexa Alice Joubin has worked closely with colleagues and students across STEM, humanities, and interpretive social sciences on advancing the fields of humanistic AI, data justice, bias detection, digital cultures’ relationships to disability cultures, and empowering minority students through public interest technology. 

Finch (played by Tom Hanks), the sole post-apocalyptic survivor, builds a humanoid robot to keep him company in Miguel Sapochnik's film Finch (2021). They develop a bond.