
Is everything you ask ChatGPT public?

  • Writer: Pamela Minnoch
  • Jul 23
  • 3 min read

This is a question I get all the time. Let's take a look at it in plain language and without the techy spin.


You've probably seen the little message that says "your chats may be used to improve the model." That gets people thinking: Who exactly sees my prompts? Are they stored? Could they be leaked?


Here's the short answer: no, your ChatGPT chats aren't public. But they're not completely private either.


So where do your prompts go?

When you hit "send", your message lands on OpenAI's servers. It's not visible to the public, and it's encrypted in transit to protect it from outsiders. But internally? OpenAI staff and contractors can review chat snippets, usually to check for abuse or to help improve how the model works. These reviewers are under tight NDAs, but let's be honest: perfect secrecy doesn't exist.


Could someone hack OpenAI? Unlikely; they've got solid security in place. But if you're typing highly sensitive information into any web tool, you're taking a risk.


What about model training?

Here's the important bit: Unless you've opted out in settings, your chats can be used to train future versions of ChatGPT.


That means unless you're using the Enterprise, API, Education, or ChatGPT Team plans, your messages are eligible.


Free and Plus ($20/month) users? Your chats are kept until you delete them (and even deleted chats can take up to 30 days to be purged), and by default they can be used to improve the model unless you manually switch it off.


You can do that under Settings > Data Controls > "Improve the model for everyone".


But what actually happens during training?

The model doesn't "remember" your conversation like a person would. It doesn't store a transcript of your chat or understand meaning the way we do.


What it does is break text down into tokens (chunks of words or characters) and then learn patterns from them, essentially predicting what comes next.

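Curious what "tokens" actually look like? Here's a tiny sketch using tiktoken, the open-source tokeniser OpenAI publishes (the sample sentence and the choice of encoding here are just illustrative):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of the encodings used by recent ChatGPT models
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("I feel sick today")
print(tokens)                             # a short list of integer token IDs
print([enc.decode([t]) for t in tokens])  # the text chunk behind each ID
```

To the model, your sentence is just a handful of numbers, and common phrases produce the same numbers millions of times over.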

So when you write "I feel sick today", ChatGPT doesn't "know" that you're unwell. That's just a familiar sentence it's seen thousands of times; it's too generic to stick.


But if you paste in something unique, like your IRD number or confidential business info, there's a small risk that the model might memorise it. And if it's memorised, it could show up again later in someone else's chat. That's what researchers call memorisation risk.


It's rare, but it's real, and it has happened before.


So what should you do?

Here's a simple rule of thumb: If it's unique and private, don't paste it in. If it's generic and harmless, you're fine.


Asking ChatGPT to help draft an email? Totally safe.

Brainstorming social media posts or checking your spelling? Go for it.

Sharing internal HR reports, passwords, or unpublished ideas? Please don't.


And if you work in a government agency or a business where privacy matters, use a ChatGPT Team or Enterprise account, or make sure staff know how to turn off training.


Recap

Your ChatGPT chats are not being published online, but they're not invisible either.


If you'd be uncomfortable seeing what you typed printed in a report, or discussed in a team meeting, it doesn't belong in a public model.


ChatGPT is a powerful tool. But like any tool, it works best when you know how to use it wisely.


If you're not sure what's safe or want to upskill your team on ethical, secure AI use, get in touch.

