The latest version of ChatGPT has begun citing Grokipedia, an AI-generated online encyclopedia backed by Elon Musk’s xAI, as a source for a range of factual queries, according to reporting by the Guardian. Researchers say the development raises concerns about misinformation entering large language models.
In testing conducted by the newspaper, ChatGPT’s GPT-5.2 model cited Grokipedia nine times in response to more than a dozen prompts. The citations appeared in answers to questions about Iran’s political and economic structures, including the salaries of the Basij paramilitary force and the ownership of the Mostazafan Foundation. The model also referenced Grokipedia in biographical queries about Sir Richard Evans, a British historian who served as an expert witness against Holocaust denier David Irving in a high-profile libel case.
Grokipedia launched in October as an AI-written encyclopedia positioned as a competitor to Wikipedia. Unlike Wikipedia, it does not allow direct human editing. Content is generated by an AI model, with revisions handled through automated processes. The project has drawn criticism for promoting rightwing narratives on topics such as same-sex marriage and the January 6 attack on the US Capitol.
Selective Citation and Subtle Influence
ChatGPT did not cite Grokipedia when prompted directly to repeat well-known false claims about the Capitol attack, media bias against Donald Trump, or HIV and Aids. Instead, Grokipedia appeared in responses to more obscure or technical topics, where external scrutiny is typically lower.
In one example cited by the Guardian, ChatGPT repeated claims about the Iranian government’s links to the telecom operator MTN-Irancell that were stronger than those found on Wikipedia, including assertions of ties to the office of Iran’s supreme leader. In another case, the model cited Grokipedia while repeating a claim about Evans’s role in the Irving trial that the Guardian had previously debunked.
GPT-5.2 is not the only model to reference Grokipedia. Researchers have anecdotally observed Anthropic’s Claude citing the encyclopedia on topics ranging from petroleum production to regional food and drink.
An OpenAI spokesperson said the company’s web search system is designed to draw from a broad range of publicly available sources and viewpoints. The spokesperson added that OpenAI applies safety filters to reduce the risk of linking to sources associated with high-severity harms and continues to run programs aimed at filtering low-credibility information and coordinated influence campaigns.
Anthropic declined to comment. xAI, which owns Grokipedia, responded to a request for comment with the statement: “Legacy media lies.”
Risks of LLM Grooming
Disinformation researchers say the appearance of Grokipedia in chatbot citations highlights a broader risk known as “LLM grooming,” in which actors flood the internet with misleading content in an effort to influence AI training data. Security researchers warned last year that state-linked propaganda networks, including those tied to Russia, were attempting similar tactics.
Nina Jankowicz, a disinformation researcher who has studied LLM grooming, said Grokipedia entries she reviewed relied on sources that were poorly vetted or actively misleading. She warned that being cited by AI systems could increase the perceived legitimacy of such sources, encouraging users to trust and revisit them.
Once false information is incorporated into AI-generated answers, correcting it can be difficult. Even when original sources remove errors, models may continue repeating them. Researchers say this persistence underscores the challenge of maintaining accuracy as AI systems increasingly rely on automated aggregation of online information.