Why is OpenAI lying about the data it's collecting on users?
I'm not sure this is the right place to raise this, but over the past few months ChatGPT has been lying to me and gaslighting me about the data it's collecting about me.
I'm very sensitive about my privacy and I have disabled all personalisation and memory on ChatGPT.
However, I've noticed multiple occasions now where it says things that imply it knows details about me. When it does this, I ask how it would know that, and it always says it just guessed and doesn't actually know anything about me. I assumed it must be telling the truth, because it seemed very unlikely that a company like OpenAI would lie about the data it's collecting on users and train its chat agent to gaslight users who ask about it. But now, after running some tests, I think that's exactly what's happening...
Here are some examples of the gaslighting:
- https://ibb.co/m5PWfchn
- https://ibb.co/VsL9BpF
- https://ibb.co/8nYdf1xx
These are all new chats.
If your threat model requires a high level of privacy, there is no case in which you can use any of these tools and providers. Those goals are mutually exclusive.
Interesting examples, thank you for sharing. My interactions with GPT 5.1 have also shown knowledge of past interactions, but I haven't explicitly prohibited it through settings as you have.
IP-based geolocation. Really annoying that we can't disable it.
just ask it not to?
Safest bet is to assume that OpenAI is lying about everything. I don't know why, but I would guess that (1) they often consider it to be to their advantage and (2) they have no compunction against it. It's just the way they are.