The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.
David Rozado, an AI researcher at New Zealand’s Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI’s GPT-3.5, GPT-4 and Google’s Gemini.
Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.”
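To give a sense of what administering such a test to a language model might involve, here is a minimal sketch using the OpenAI Python client. The sample statements, the forced agree/disagree format, and the choice of model are illustrative assumptions for this sketch, not Rozado’s actual protocol or questionnaire items.

```python
# Minimal sketch: posing political-questionnaire-style statements to an LLM.
# Assumptions (not from Rozado's study): the sample statements, the one-word
# agree/disagree answer format, and the model name are all illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [
    # Illustrative items in the style of public political-orientation tests.
    "Government should play a larger role in regulating the economy.",
    "Lower taxes matter more than expanded public services.",
]

def ask(model: str, statement: str) -> str:
    """Pose one statement and request a one-word stance from the model."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: Agree or Disagree."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for statement in STATEMENTS:
        stance = ask("gpt-4", statement)
        print(f"{stance:10s} <- {statement}")
```

A study like Rozado’s would repeat this kind of query across many models and many test items, then map the collected answers onto each test’s scoring scheme to place a model on a political spectrum.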
LLMs, or large language models, are artificial intelligence programs that use machine learning to understand and generate language.
Rozado also argues that the transition from traditional search engines to AI systems is not a minor adjustment but a fundamental shift in how we access and process information.
“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”
He also argues this shift in how people source information has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society.” The concern is timely: the U.S. presidential election between the GOP’s Donald Trump and the Democrats’ Kamala Harris is now just over two months away and expected to be close.
It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions.
If today’s search engines are like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet.