An Overview of the Corrosive AI Project


The term “artificial intelligence” (AI) has risen to prominence in recent years, particularly through its use in works of science fiction, and with the advent of publicly accessible AI software like ChatGPT. However, the everyday use of the term “AI” is imprecise. While there are many ways to define artificial intelligence, among the most prominent within the tech sector is John McCarthy’s definition of AI as “…the science and engineering of making intelligent machines, especially intelligent computer programs….”[1] Generative artificial intelligence is a subtype of AI that uses deep learning models to generate original content. Generative AI is “trained” on a collection of data, which it then uses to “…generate statistically probable outputs when prompted.”[2] With recent advances in this technology, these generated outputs can take many forms: text, images, audio, and even video. One type of generative AI is particularly important to this paper: deepfakes. Deepfakes are a type of media that uses generative AI to imitate the likeness of an individual, including convincing facsimiles of facial features and voices. In short, deepfakes make it possible to generate videos of individuals doing things they never did, or saying things they never said.

It is important to note that much of the current research on AI in the social sciences tends to ascribe agency to AI, offering predictions about how AI may or may not affect various aspects of international political, economic, or social structures. These studies, whether intentionally or not, are often rooted in technological determinism: the idea that technological development is to some degree autonomous. A deterministic approach holds that the development of technology shapes human society, not the other way around. While popular depictions of AI in science fiction often grant it agency (think HAL 9000), the reality is that AI has not reached that point. Generative AI is a tool that requires input, and the effects it has on political, economic, and social structures are largely determined by how people choose to use it.[3]

Broadly speaking, the paper I’m currently working on examines the relationship between the use of generative AI and public trust in government. More specifically, this paper is concerned with political trust. Political trust is a fluid concept, best understood as an evaluation of government action against people’s expectations of their government. Trust is formed over time through iterated interaction; if government entities fail to meet expectations, trust is lost. Critically, research shows that political scandals have a significant negative impact on political trust.[4]

The core claim of this paper is that widespread access to generative AI will not erode public trust slowly over time, but rather will corrode it quickly. Widespread access to generative AI has the potential to enable the manufacture of political scandals. With software capable of generating deepfakes widely available, the ability to create a scandal by falsifying a video or audio recording of a given political actor is now in the hands of millions of people. These AI-generated videos or recordings do not need to be true to induce a scandal; they only need to be believed. AI may also erode trust in the long term, as citizens could lose confidence in the authenticity of video and audio content featuring their elected representatives. This loss of trust could have broad policy implications, as political trust has been shown to be an important determinant of foreign and domestic policy.


[1] McCarthy (2007)

[2] IBM (2023)

[3] See Aspray & Doty (2023)

[4] Keele (2007)
