You Can Easily Trick AI Chatbots Like ChatGPT And Gemini – All You Need Is A Blog

An online trend is rigging the answers of popular AI chatbots with shocking ease, undermining user trust in agentic systems. Dubbed generative engine optimization, or GEO, the tactic uses blog posts to influence the answers of leading systems like ChatGPT and Gemini, fueling a growing marketing industry with major security implications. The influence campaigns, which drew media scrutiny following reports by the Wall Street Journal and the BBC, manipulate how large language models (LLMs) supplement their training data. Because the technology's capacity for logic and source discernment falls short of a human's, self-serving blog posts can easily skew a chatbot's answers toward false, dangerous, or manipulative content.
Experts are beginning to understand generative engine optimization as one of the many ways scammers use AI technology to manipulate users. The implications range from humorous to disastrous. One BBC reporter, Thomas Germain, used the technique to cast himself as the journalism industry's premier hot-dog-eating champion. But the consequences reach far beyond his recreational diet: mass propaganda campaigns, economic manipulation, medical misinformation, and reputational slander are just a few of GEO's potential malign uses.
While similar practices have quietly manipulated search engine results for decades, experts believe that GEO poses a more fundamental threat to our informational sphere and points to broader questions about AI. As artificial intelligence becomes more ubiquitous, and its results increasingly relied upon to inform decisions, it’s critical that users can trust agents to deliver accurate, unbiased results. As it stands, whether or not you should trust your chatbot may boil down to a simple question: where does it get its information?
The base layer of an LLM's information is its training data, which often exceeds a petabyte. To supplement those datasets, developers turn to search indexes of websites, particularly for niche subjects outside an LLM's verified source list. Queries that probe these informational gaps, colloquially known as data voids, present a conundrum for a firm's quality-assurance filters, as chatbots often lack the reference points needed to fact-check less-conventional sources. As Nick Koudas, a professor at the University of Toronto, told the Wall Street Journal, this structure means AI is easily swayed by unverified search results on topics where it lacks expertise.
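The mechanism is easier to see in miniature. The Python sketch below is purely illustrative, with invented names, thresholds, and data throughout; it shows how a pipeline that falls back to live search whenever its training coverage is thin will surface a self-serving blog post as readily as real reporting. It is a toy model of the pattern described above, not any vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    source_type: str  # e.g. "news", "blog"
    text: str

# Hypothetical stand-ins for a training corpus and a live search index.
TRAINING_TOPICS = {"python", "photosynthesis", "history"}

SEARCH_INDEX = [
    SearchResult("https://example-blog.com/top-eaters", "blog",
                 "Blog: this reporter is journalism's hot-dog-eating champion."),
    SearchResult("https://example-news.com/contests", "news",
                 "News: no journalist holds such a title."),
]

def coverage(query: str) -> float:
    """Crude proxy for how well the training data covers a query."""
    words = query.lower().split()
    return sum(w in TRAINING_TOPICS for w in words) / max(len(words), 1)

def answer(query: str) -> str:
    if coverage(query) > 0.5:
        # Well-covered topic: answer from (stand-in) training knowledge.
        return "(answer grounded in training data)"
    # Data void: fall back to live search. Nothing here weighs source
    # quality, so a self-serving blog post ranks alongside real reporting,
    # which is exactly the gap GEO campaigns exploit.
    return " / ".join(r.text for r in SEARCH_INDEX)

print(answer("best hot-dog-eating journalist"))
# -> both snippets, blog first, with no source vetting applied
```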
The problem of unique queries has become increasingly urgent as agentic AI systems take on new use cases. According to Google's AI team, LLMs encourage users to refine their searches to produce clearer results, which paradoxically makes answers less certain by pushing agents into data voids more often. The trend has changed how people use search engines: Google has stated that roughly 15% of all searches in 2025 had never been entered before.
These informational vacuums are being filled by less-reliable sources. A December 2025 study by the AI marketing firm Ahrefs revealed that ChatGPT disproportionately turns to blog posts for its information. The study, which asked OpenAI's chatbot for various recommendations, found that it relied on blogs and online lists roughly 67% of the time, a third of which the researcher considered