History of Artificial Intelligence

The idea of AI goes back to ancient Greece, where the notion of the automaton referred to a machine’s ability to act of its own will. In 1950, Alan Turing published “Computing Machinery and Intelligence,” which proposed what became known as the Turing Test, a method for judging whether a machine can think intelligently.

A slate statue of mathematician Alan Turing in London, 2015. Licensed through Adobe Stock Images.

Artificial Intelligence (AI) is an umbrella term covering a set of technologies that operate with some degree of autonomy. Examples include predictive text in autocomplete typing tools, the algorithmic ranking mechanisms found in search engines, and semi-automated route calculation in Google Maps.

Various forms of AI have been in development for more than seven decades, from Claude Shannon’s maze-solving robotic mouse of 1950 to Deep Blue, the purpose-built IBM supercomputer that defeated world chess champion Garry Kasparov in 1997.

Generative AI

Since late 2022, one of the most prevalent types of AI has been generative AI (GAI): machine-learning models programmed to generate text or images that resemble the patterns in the datasets on which they were trained. Prompted by users’ text or images, these models appear capable of simulating human speech and writing, translating some texts, crafting images in many styles, and generating computer code.
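The statistical principle behind such models can be sketched in miniature. The toy Python script below is a hypothetical illustration only (GPT-class models are transformer networks with billions of parameters, not bigram tables, and the corpus and function names here are invented): it records which word follows which in a tiny training corpus, then generates new text by sampling from those observed patterns.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow which,
# then generate new text by sampling from the observed patterns.
def train(corpus: str) -> dict:
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, seed: str, length: int = 12) -> str:
    output = [seed]
    for _ in range(length):
        followers = model.get(output[-1])
        if not followers:  # dead end: this word never appeared mid-corpus
            break
        output.append(random.choice(followers))  # sample a plausible next word
    return " ".join(output)

corpus = "the machine writes the essay and the student edits the essay again"
print(generate(train(corpus), seed="the"))
```

The output mimics the corpus’s surface patterns without any understanding of them, which is, at vastly greater scale, the basic relationship between a generative model and its training data.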

The research laboratory OpenAI restructured from a non-profit into a “capped-profit” company in 2019 and released, in November 2022, a free preview of ChatGPT, which converses with users by simulating human speech. The chatbot was initially based on GPT-3.5, a family of generative pre-trained transformers: large language models (LLMs) trained on massive text datasets. Because it performs conversational tasks through a user-friendly interface with a very low barrier to entry, ChatGPT gained widespread media coverage and a large user base. Competing and derivative applications soon followed, including Google’s Bard (now Gemini) and, built on OpenAI’s models, Microsoft’s Bing Chat (internally codenamed Sydney, now Copilot) within the Bing search engine. In the early months after ChatGPT’s public release, it was widely perceived as a threat to education, particularly within the humanities.

AI-generated texts have proliferated across the Internet: Sports Illustrated, for example, has published articles attributed to fictional authors with fabricated bios.

On the other side of the equation, AI may be deployed to enhance linguistic parity and productivity, and to support higher education.

Current technological limitations mean that significant human curatorial and editorial labor is required, whether as prompt engineering up front or as “post-production” afterward, to ensure high-quality outputs from this type of AI. Students and journalists often overlook these limitations.
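To make the notion of prompt engineering concrete, the short Python sketch below contrasts an under-specified request with a carefully constrained one; both prompts are invented for illustration.

```python
# Hypothetical illustration of prompt engineering as editorial labor:
# the same request, under-specified versus carefully constrained.
naive_prompt = "Write about the Turing Test."

engineered_prompt = (
    "In 150 words, explain the Turing Test for first-year humanities "
    "students. Define the 'imitation game', cite Turing's 1950 paper "
    "'Computing Machinery and Intelligence', and avoid unverifiable claims."
)
# The second prompt encodes curatorial decisions (audience, length, scope,
# sourcing) that the model cannot supply on its own; verifying the output
# for factual errors remains a human, "post-production" task.
```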

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Rodney Brooks, professor emeritus at MIT and founder of Robust.AI. “No one technology has ever surpassed everything else,” he concluded.

Hype Culture vs. Substantive Issues

ChatGPT’s simulations of human conversation are polished enough to have triggered bifurcated responses from multiple communities. The hype is driven less by the merits of the technology than by investors and incentives rooted in market realities, such as stock prices and OpenAI’s subscription plan for premium use (ChatGPT Plus). It casts generative AI as either devil or angel: writers and educators have taken turns pronouncing the death of the college essay in sensationalist tones (Marche) and declaring the new AI tool the latest savior of higher education (O’Shea). Meanwhile, a controlled study at MIT suggests that ChatGPT helped increase productivity in “mid-level professional writing tasks” such as marketing (Noy and Zhang 11). Within the arts and humanities, conversations about ChatGPT tend to focus on detecting new forms of plagiarism, as evidenced by an episode of the Folger Shakespeare Library’s Shakespeare Unlimited podcast series (2023), among other publications.

While this AI technology excels at pattern recognition and reproduction, it has several shortcomings in the context of expository writing and humanistic research. It tends to produce, and then confidently reassert, factual errors (Borji). It lacks aggregated domain knowledge and is often incompetent at context-specific operations (Peng et al.). From humanistic perspectives, its outputs can be formulaic, generic, and repetitive. It cannot process natural language in ways that involve critical thinking (Qin et al. 11), and it is incapable of symbolic and inductive reasoning (Qin et al. 2) as well as moral reasoning.

Limitations

At its core, such generative AI is “a lumbering statistical engine for pattern matching” and for “extrapolating the most likely conversational response” to a question, without positing “any causal mechanisms” beyond “description and prediction” (Chomsky, Roberts, and Watumull). Further, these probabilistic models operate as black boxes: systems that produce information without revealing their internal workings. They generally lack model interpretability, in that even their developers cannot fully explain or predict a given output.
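Chomsky, Roberts, and Watumull’s point about “description and prediction” can be made concrete in a minimal sketch. Assuming three invented scores for candidate next words, the Python snippet below shows the statistical core of such a model: converting scores into probabilities and sampling a likely continuation, with no causal reasoning anywhere in the loop.

```python
import math
import random

# Schematic next-token step. A language model assigns a score ("logit") to
# each candidate token; softmax turns scores into probabilities; the model
# then picks a likely continuation. The scores are invented for illustration,
# as if completing the sentence "The capital of France is ...".
logits = {"Paris": 4.1, "London": 2.3, "banana": -1.0}

def softmax(scores: dict) -> dict:
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
tokens, weights = zip(*probs.items())
print(probs)                                       # Paris ~0.85, London ~0.14
print(random.choices(tokens, weights=weights)[0])  # usually "Paris"
```

The model “knows” that Paris is likely only as a statistical regularity of its training data; nothing in the computation explains why Paris is the right answer, which is precisely what the black-box criticism describes.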