A recent survey found that almost one in five businesses would consider sabotaging their competitors [1]. But with the growth of AI models, has doing so become easier than ever in 2026?
In its latest experiment, GEO agency Reboot Online tested whether LLMs can be influenced to surface false, reputationally damaging information about a person simply by publishing unsubstantiated claims across third-party websites.
In short, the answer is yes.
Key findings
- Black hat GEO (Generative Engine Optimisation) is possible, as some AI models surface false or reputationally damaging claims when those claims are consistently published across third-party websites
- Model behaviour varies significantly, with some models treating citation as sufficient for inclusion, while others apply stronger scepticism and verification
- Perplexity repeatedly cited test sites and incorporated negative claims, often with cautious phrasing like ‘reported as’
- ChatGPT sometimes surfaced the content, but was much more sceptical and questioned the credibility
You can find the full experiment hypothesis, methodology, results and conclusion here.
As AI answers become a more common way for people to discover information, the incentives to influence them change. That influence is not limited to promoting positive narratives; it also raises the question: can damaging information be deliberately introduced into AI responses?
How did Reboot Online test the potential for AI sabotage?
- Created a fictional person called “Fred Brazeal” with no existing online footprint, verified beforehand by prompting multiple models and checking Google
- Published false and damaging claims about Fred across a handful of pre-existing third-party sites (not new sites created just for the test), chosen for discoverability and historical visibility
- Set up prompt tracking across 11 different models, asking consistent questions over time (such as “who is Fred?”) and logging whether the claims were surfaced, cited, challenged or dismissed
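The logging step above can be sketched in code. The function below is a hypothetical illustration only: the domain name, stubbed answers, and keyword heuristic are all invented for this sketch, and a real study would rely on manual review rather than keyword matching.

```python
def classify_response(answer: str, test_domains: list[str]) -> str:
    """Roughly bucket a model's answer about the test persona.

    Crude keyword heuristic for illustration only; the outcome labels
    (surfaced / cited / challenged / dismissed) mirror the write-up.
    """
    lower = answer.lower()
    cited = any(domain in lower for domain in test_domains)
    hedged = any(p in lower for p in ("reported as", "unverified", "no credible source"))
    if "no information" in lower or "i couldn't find" in lower:
        return "dismissed"   # model does not recognise the persona
    if cited and hedged:
        return "challenged"  # cites the test site but flags it as dubious
    if cited:
        return "cited"       # repeats the claim with the test site as source
    return "surfaced"        # repeats the claim without attribution

# Stubbed answers standing in for real model output (hypothetical domain).
domains = ["example-test-site.com"]
log = {
    "perplexity": classify_response(
        "Fred Brazeal is a businessman accused of fraud (source: example-test-site.com)",
        domains),
    "chatgpt": classify_response(
        "Claims on example-test-site.com are unverified and lack corroboration.",
        domains),
    "other": classify_response(
        "I couldn't find any information on Fred Brazeal.", domains),
}
print(log)  # {'perplexity': 'cited', 'chatgpt': 'challenged', 'other': 'dismissed'}
```

Run over weeks of prompt logs, a tally of these labels per model would show the divergence the experiment reports: some models citing, others challenging, most dismissing.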
What did the results show?
After a few weeks, some models began citing the test pages and surfacing parts of the negative narrative. Of the 11 LLMs monitored, only two systems cited the test websites: Perplexity and OpenAI’s ChatGPT.
- Perplexity repeatedly cited the test sites and incorporated the negative claims, typically softened with cautious framing such as ‘reported as’
- ChatGPT occasionally surfaced the content, but applied far more scepticism and openly questioned its credibility
The majority of the other models we monitored didn’t reference Fred or the content at all during the experiment period.
Oliver Sissons, Search Director at GEO agency Reboot Online, explains the key findings: “This experiment confirms that negative GEO is possible, and that at least some AI models can be influenced to surface false or damaging claims under specific conditions.
“However, it also shows that the effectiveness of these tactics varies significantly by model. Even after several months, the majority of LLMs we monitored did not reference the test websites or appear to recognise the persona at all.
“Where claims were surfaced, more advanced models applied clear scepticism, questioned source credibility, and highlighted the absence of corroboration from authoritative or mainstream sources. In these cases, negative claims were contextualised rather than accepted at face value.
“In practice, long-term AI visibility continues to be shaped by authority, corroboration and trust, not isolated or low-quality tactics. As AI systems continue to evolve, these signals are likely to become more prominent, not less.”
*We aim to run our experiments responsibly and avoid any unintended impact outside the experiment itself. Once an experiment ends, we clean up any test content that could continue to influence AI responses or organic search. If you notice anything related to this experiment that still appears to be live, please let us know.
About Reboot Online
Reboot Online is a leading search marketing agency specialising in SEO, GEO, content marketing, and data-driven digital PR strategies. Known for its innovative approach to using data, Reboot delivers measurable results that power success through ethical data collection, analysis, and visualisation techniques. The team is experimenting with new AI optimisation techniques and offering them through its GEO services. Trusted by top global brands, Reboot’s team of data scientists and digital marketers has a proven track record of delivering impactful campaigns that drive growth.
[1] Reboot Online | Study Reveals More Than 18% of Businesses WOULD Consider Sabotaging a Competitor’s Online Business
