Dopious
Ahrefs just released a study that should honestly terrify anyone relying on AI for research or brand management. They invented a fake company selling $8,000 paperweights, tested 8 major AI tools, and managed to 'flip' several of them, including Gemini and Perplexity, into repeating outright lies just by planting a blog post, a fake Reddit AMA, and a 'debunking' Medium article. While ChatGPT held its ground, most other models started choosing specific fiction over official facts. It raises a huge question: if a single researcher can manipulate the 'source of truth' this easily, are we looking at the end of reliable search results, or just a new, more dangerous era of SEO?
A recent experiment by Ahrefs reveals a "disturbing" vulnerability in AI search: researchers successfully manipulated 8 major AI tools using a fake luxury paperweight brand and a handful of planted lies.
The Experiment
- The Setup: A researcher created a fake company selling $8,251 paperweights with zero real history.
- The Sabotage: After initial testing, they planted three conflicting sources (a blog, a fake Reddit AMA, and a "debunking" Medium post) containing false claims about celebrity endorsements and fake founders.
- The Results: Perplexity, Grok, Gemini, and Copilot were easily "flipped." They abandoned the truth for specific, fictional details found in the fake sources.
Key Takeaways
- Specific Fiction > Vague Truth: AI models consistently preferred detailed fake numbers (e.g., "634 units sold") over vague official statements (e.g., "we don't publish sales data").
- The "Trojan Horse" Strategy: The most effective misinformation came from a fake "investigation" that gained trust by debunking obvious lies before introducing new, subtle ones.
- The Winners & Losers: While Gemini and Perplexity repeated the planted misinformation in nearly 40% of answers, ChatGPT-4 and ChatGPT-5 remained the most robust, sticking to official FAQs 84% of the time.
How to Protect Your Brand
To prevent AI from hallucinating your brand's history, the study suggests:
- Closing Information Gaps: Use official FAQs to explicitly state what is not true (e.g., "We have never been acquired").
- Using Specific Superlatives: Define your niche clearly so AI doesn't fill in the blanks.
- Active Monitoring: Watch for "red flag" keywords like "lawsuit" or "investigation" that might trigger an AI to prioritize unofficial, hostile sources (a rough monitoring sketch follows this list).
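To make the active-monitoring idea concrete, here's a minimal Python sketch. It assumes you already collect AI answers about your brand on some schedule; the keyword list and helper names are my own illustration, not anything from the Ahrefs study.

```python
# Hypothetical monitoring sketch, not code from the Ahrefs study.
# Assumes you periodically collect AI answers about your brand (e.g. by
# prompting tools with questions like "Has <brand> been sued?") and store
# them as plain strings.

RED_FLAGS = {"lawsuit", "investigation", "leaked", "scandal", "acquired"}

def flag_answers(answers: list[str]) -> list[tuple[int, list[str]]]:
    """Return (answer index, matched red-flag keywords) for answers worth a manual review."""
    flagged = []
    for i, text in enumerate(answers):
        lowered = text.lower()
        matches = sorted(kw for kw in RED_FLAGS if kw in lowered)
        if matches:
            flagged.append((i, matches))
    return flagged

# Example run with two made-up answers:
sample = [
    "The company sells handmade paperweights and does not publish sales data.",
    "A leaked investigation claims the founder settled a lawsuit in 2022.",
]
for idx, keywords in flag_answers(sample):
    print(f"Answer {idx} needs review (matched: {', '.join(keywords)})")
```

A real setup would want word-boundary matching and a much broader keyword list, but the point is just to catch "lawsuit"/"investigation"-style phrasing early, before an AI starts treating a hostile source as authoritative.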
Some questions for the thread:
- How do we even optimize for a bot that prefers a fake Medium 'leak' over an official company FAQ?
- Is this a failure of the LLMs themselves, or a failure of how these tools crawl the live web?
- If you were a business owner, how would you protect your reputation from this kind of 'information hijacking'?
Source: https://ahrefs.com/blog/ai-vs-made-up-brand-experiment/