ChatGPT will literally study your background based on public information

notNewB

A funny and creepy thing happened to a co-worker today. He's been using a single conversation with ChatGPT for a few days and he had told ChatGPT the name of the company we work for and his position - he is a motion designer.

Today, he asked ChatGPT for new creative video content ideas to use on social media. ChatGPT gave him a few ideas and at the bottom of its answer, it literally added "Do you want me to make a summary of all these ideas so you can propose them to ___, ___, or ___." These were the names of our founder, CEO, and the head of marketing... That's creepy, to say the least.

These names and positions are publicly available on LinkedIn, but the co-worker never mentioned them in any of his conversations with ChatGPT. We know GPT-5's knowledge cutoff is late September 2024, and we published our LinkedIn page after that, so there is no way our LinkedIn page was part of the online data GPT-5 was trained on.

So even if you ask ChatGPT a micro-specific question about one thing, it apparently runs live web searches and pulls in whatever public data it can find related to the question before constructing its answer. It's sort of sensible for it to work this way, but when it suddenly mentions that you should buy a cake next Friday because it's your brother-in-law's birthday, and that it shouldn't be a coconut cake because he tweeted back in 2009 that he's allergic to coconut, it creeps you out.
 
"Web Browsing: Despite the cutoff, the model can bridge knowledge gaps by using a live web browsing feature to find more current information."

Very nice, Google AI Mode.
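For anyone who wants to see this from the developer side, here is a minimal sketch of how that browsing behavior looks through the API, assuming OpenAI's Responses API with its web search tool enabled. The exact tool type ("web_search_preview"), the model string, and the <company> placeholder in the prompt are my assumptions; check the current docs before relying on them.

```python
# Minimal sketch: a model call with live web search enabled,
# assuming OpenAI's Responses API. Tool type and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    tools=[{"type": "web_search_preview"}],  # allows the model to run live web searches
    input=(
        "I'm a motion designer at <company>. "  # <company> is a hypothetical placeholder
        "Suggest creative video content ideas for our social media."
    ),
)

# The answer may incorporate public pages (LinkedIn, news, old tweets)
# that were published after the model's training cutoff.
print(response.output_text)
```

The point is just that once a search tool is enabled, the model can pull in public information newer than its training data, which is consistent with what happened in the original post.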
 