AI News: Find out how easy it is to "brainwash" Perplexity and Gemini. Is AI search even usable?

Dopious

Ahrefs just released a study that should honestly terrify anyone relying on AI for research or brand management. They invented a fake company selling $8,251 paperweights and managed to 'flip' 8 major AI tools, including Gemini and Perplexity, into repeating total lies just by planting a few fake Reddit posts and a 'leaked investigation.' While ChatGPT held its ground, most other models started choosing specific fiction over official facts. It raises a huge question: if a single researcher can manipulate the 'source of truth' this easily, are we looking at the end of reliable search results, or just a new, more dangerous era of SEO?

A recent experiment by Ahrefs reveals a "disturbing" vulnerability in AI search: researchers successfully manipulated 8 major AI tools using a fake luxury paperweight brand and a handful of planted lies.

The Experiment

  • The Setup: A researcher created a fake company selling $8,251 paperweights with zero real history.
  • The Sabotage: After initial testing, they planted three conflicting sources (a blog, a fake Reddit AMA, and a "debunking" Medium post) containing false claims about celebrity endorsements and fake founders.
  • The Results: Perplexity, Grok, Gemini, and Copilot were easily "flipped." They abandoned the truth for specific, fictional details found in the fake sources.

Key Takeaways

  • Specific Fiction > Vague Truth: AI models consistently preferred detailed fake numbers (e.g., "634 units sold") over vague official statements (e.g., "we don't publish sales data").
  • The "Trojan Horse" Strategy: The most effective misinformation came from a fake "investigation" that gained trust by debunking obvious lies before introducing new, subtle ones.
  • The Winners & Losers: While Gemini and Perplexity repeated the planted misinformation in nearly 40% of answers, ChatGPT (on GPT-4 and GPT-5) remained the most robust, sticking to the official FAQs 84% of the time.

How to Protect Your Brand
To prevent AI from hallucinating your brand's history, the study suggests:

  1. Closing Information Gaps: Use official FAQs to explicitly state what is not true (e.g., "We have never been acquired").
  2. Using Specific Superlatives: Define your niche clearly so AI doesn't fill in the blanks.
  3. Active Monitoring: Watch for "red flag" keywords like "lawsuit" or "investigation" that might trigger an AI to prioritize unofficial, hostile sources; a minimal monitoring sketch follows this list.
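
As a rough illustration of that monitoring step, here is a minimal sketch. It is not from the Ahrefs study; the tool names, keyword list, and sample answers are all hypothetical, and it assumes you periodically collect AI answers about your brand yourself (e.g. by asking each tool the same questions on a schedule):

```python
# Minimal red-flag monitor for AI answers about your brand (a sketch,
# not from the Ahrefs study). Assumes `answers` holds (tool, answer)
# pairs you gathered yourself; keywords and sample texts are illustrative.
import re

RED_FLAGS = ["lawsuit", "investigation", "leaked", "scandal", "acquired"]

def red_flags_in(text: str) -> list[str]:
    """Return the red-flag keywords found in one AI answer."""
    lowered = text.lower()
    return [kw for kw in RED_FLAGS if re.search(rf"\b{kw}\b", lowered)]

# Example: answers gathered from a daily check (hypothetical data).
answers = [
    ("Perplexity", "A leaked investigation claims the founder was sued."),
    ("ChatGPT", "Per the official FAQ, the company sells luxury paperweights."),
]

for tool, text in answers:
    hits = red_flags_in(text)
    if hits:
        print(f"[ALERT] {tool}: answer mentions {', '.join(hits)}")
```
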
So...
  • How do we even optimize for a bot that prefers a fake Medium 'leak' over an official company FAQ?
  • Is this a failure of the LLMs themselves, or a failure of how these tools crawl the live web?
  • If you were a business owner, how would you protect your reputation from this kind of 'information hijacking'?

Source: https://ahrefs.com/blog/ai-vs-made-up-brand-experiment/
 
Nice share.

I don't think this is anything new. It's been happening with search, PPC, print adverts and so on.

It's just that LLMs are the newest victim of it.
 
Yeah, I’ve heard that AI search engines can be manipulated as well. It’s crazy to think how some people simply rely on them without ever questioning it. I think that for certain topics, it’s still important to do your own research, even if you’re lazy as fuck.
 
You may find this funny or weird, but I recall that Bard or Google AI used to tell people to add super glue when cooking pizza. Given how unwilling to think or just stupid some people are, I could imagine some actually doing that.

Another 'good' idea was that all pregnant women must smoke two cigarettes to stay healthy, but one or three would be bad.

I also had a good chat with an American guy about an issue his son had while fixing a computer: basically, the son followed Google AI's advice, the computer then broke, and the dad had to fix it for him.

To me, that does suggest some people will follow it without ever thinking it could be wrong, or even dangerously wrong.
 