OpenAI gives users new privacy options for ChatGPT

BLOOMBERG

OpenAI is letting people opt to withhold their ChatGPT conversations from use in training the artificial intelligence (AI) company’s models. The move could be a privacy safeguard for people who sometimes share sensitive information with the popular AI chatbot.
The startup said that ChatGPT users can now turn off their chat histories by clicking a toggle switch in their account settings. When people do this, their conversations will no longer be saved in ChatGPT’s history sidebar (located on the left side of the webpage), and OpenAI’s models won’t use that data to improve over time.
OpenAI is aiming to make people feel more comfortable using the chatbot for all kinds of applications. During a demo of the feature, for instance, the company used the example of planning a surprise birthday party.
“We want to move more in this direction where people who are using our products can decide how their data is being used — if it’s being used for training or not,” OpenAI Chief Technology Officer Mira Murati said.
In the months since ChatGPT was launched publicly, millions of people have experimented with it and other bots (such as Bard, created by Alphabet Inc’s Google). This new wave of AI chatbots is already being harnessed for everything from helping plan vacations to acting as an impromptu therapist, raising questions not just about how these systems can be used but also about how the companies behind them process the prompts people type into them. OpenAI said that its software filters out personally identifiable information that comes in from users.
