What is Corrosive AI?
The image you see above is AI-generated, and to anyone with a passing familiarity with the technology, that fact may appear rather obvious. For many Americans, dramatic and complex images like this one are likely what come to mind when “generative AI” is mentioned. The problem is that this conception of what AI-generated images look like is already outdated.
The issue here seems to be that AI technology is developing more quickly than many people can keep up with. According to a recent Pew Research publication, only 14% of U.S. adults have tried using ChatGPT at least once. Furthermore, 42% of U.S. adults indicated that they had never heard of ChatGPT, likely the most prominent of the new wave of easily accessible AI tools. This apparent lack of familiarity may point to a larger and more problematic trend: many adults in the U.S. are simply not familiar with what AI is, or what it can now do.
If people aren’t familiar with AI, it is likely they won’t be able to identify AI-generated content in news stories or on social media. If the widely held conception of AI-generated content looks something like the image above, can we expect people to identify a deepfake video or an AI-generated audio recording?
This brings us to Corrosive AI, the idea at the center of my research. A widespread inability to reliably identify AI-generated content, combined with widespread and easy access to the technology, is likely to lead to a crisis of trust. With disinformation already rampant in politics, I believe that the injection of generative AI is likely to quickly corrode public trust in government.
The concept of corrosive AI highlights the fact that it is not AI itself that directly damages our political trust, but rather the way it is misused. Generative AI, which can produce realistic images and videos and even mimic voices, has increasingly been employed to spread disinformation. Disinformation is intentionally false or misleading information designed to deceive people. Generative AI makes disinformation easier to create and share, because it can produce convincing content that appears authentic.
The harmful impact of corrosive AI is rooted in three main factors. First, the business models adopted by major AI players prioritize easy user access and rapid deployment of updates, which can lead to insufficient testing and a higher potential for misuse. Second, the misuse of generative AI can damage trust in the authenticity of video content, which is a primary medium for political information and is generally perceived as a true representation of reality. Lastly, generative AI enables the creation and empowerment of disinformation, which can fabricate scandals and mislead people, ultimately eroding trust in political figures and institutions.
Contemporary research on disinformation often refers to disinformation “eroding” public trust in government. I believe “erosion” to be an inappropriate term when discussing disinformation empowered by generative AI. Erosion implies a long-term process. Instead, I expect to find through my research that generative AI will corrode trust far more quickly.
My ongoing project, entitled “Corrosive AI: How the use of Generative Artificial Intelligence Threatens Trust in Government,” seeks to understand the effect that widespread access to generative AI will have on public trust in government. Among the most important sources of data for this project are interviews conducted with experts across numerous related fields, such as AI development, tech regulation, information science, and international politics. If you believe you can contribute to this project via an interview, please reach out via the contact page!
Updates on this project will be regularly published on this website. For any additional information or questions, feel free to contact me!
Pew Research: “A majority of Americans have heard of ChatGPT, but few have tried it themselves”
About Riley Lankes

Riley Lankes (he/they) is a political information researcher focused on AI issues. Riley has always been interested in the intersection of international relations and information science, and has contributed to a number of research projects and working groups across these fields. Most recently, Riley served as the Policy Researcher for the State Libraries and AI Technology (SLAAIT) working group, leading SLAAIT’s efforts to understand the policy challenges and opportunities that AI presents to state governments and public library systems.
Riley currently holds a position as an Investigations Analyst and Trainer at Vaco, working on behalf of Google Trust & Safety. His work allows him to dive into political information issues from a new perspective, tracking foreign influence operations and AI-powered misinformation across Google platforms.
Riley holds a BA in International Studies from the University of South Carolina, and an MA in International Relations from University College Dublin.