OpenAI gives ChatGPT a memory

OpenAI says ChatGPT's memory is on by default, meaning the user has to actively turn it off. The memory can be wiped at any point, either in Settings or simply by instructing the bot to wipe it. Once the memory setting is cleared, that information will no longer be used to train OpenAI's AI models. It's unclear exactly how much of that personal data is used to train the AI while someone is chatting with the chatbot. And toggling off memory doesn't mean you've completely opted out of having your chats train OpenAI's models; that is a separate opt-out.

The company also claims that it will not store certain sensitive information in memory. If you tell ChatGPT your password (don't do this) or Social Security number (or this), the app's memory thankfully forgets it. Jang also says OpenAI is still soliciting feedback on whether other personally identifiable information, such as a user's ethnicity, is too sensitive for the company to auto-capture.
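A filter like that is easy to picture in code. The sketch below is purely hypothetical, not OpenAI's implementation: it screens a candidate memory for obviously sensitive strings, such as an SSN-shaped number or a password phrase, before persisting it.

```python
import re

# Hypothetical patterns for obviously sensitive strings; OpenAI's actual
# filter is not public, so this only illustrates the idea.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
    re.compile(r"password\s*[:=]", re.IGNORECASE),  # "my password: ..."
]

def save_memory(store: list[str], candidate: str) -> bool:
    """Persist a candidate memory unless it looks sensitive."""
    if any(p.search(candidate) for p in SENSITIVE_PATTERNS):
        return False  # forget it instead of storing it
    store.append(candidate)
    return True

memories: list[str] = []
save_memory(memories, "User prefers vegan restaurants")  # stored
save_memory(memories, "My SSN is 123-45-6789")           # rejected
```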

“We think there are a lot of use cases for that example, but right now we've trained the model to stay away from actively remembering that information,” Jang says.

It's easy to see how ChatGPT's memory function could go awry: instances where a user might have forgotten that they once asked the chatbot about a kink, or an abortion clinic, or a nonviolent way to deal with a mother-in-law, only to be reminded of it, or have others see it, in a future chat. How ChatGPT's memory handles health data is also an open question. “We steer ChatGPT away from remembering certain health details, but this is still a work in progress,” says Niko Felix, an OpenAI spokesperson. In this way, ChatGPT is the same old song about the permanence of the internet, sung in a new era: Behold this great new memory feature, until it's a bug.

OpenAI isn't the first entity to toy with memory in generative AI. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt, one back-and-forth between the user and the chatbot, or have a multi-turn, ongoing conversation in which the bot remembers context from previous messages.
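In code, the distinction is simple: a single-turn call sends one prompt in isolation, while a multi-turn chat re-sends the accumulated history with every call. A minimal sketch, assuming the google-generativeai Python client and the Gemini Pro model name from that era:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")

# Single-turn: one isolated prompt, no carried context.
print(model.generate_content("Suggest a vegan restaurant.").text)

# Multi-turn: the chat object replays the accumulated history on each
# call, so the second prompt can rely on the first.
chat = model.start_chat(history=[])
chat.send_message("I'm vegan and live in Portland.")
reply = chat.send_message("Where should I eat tonight?")
print(reply.text)  # can draw on the earlier turn
```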

An AI framework company called LangChain has been developing a memory module that helps large language models recall past interactions between the end user and the model. Giving LLMs long-term memory “can be very powerful in creating unique LLM experiences: a chatbot can begin to tailor its responses to you as an individual based on what it knows about you,” says Harrison Chase, cofounder and CEO of LangChain. “The lack of long-term memory can also create a grating experience. No one wants to have to tell a restaurant-recommendation chatbot over and over again that they're vegan.”
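A minimal sketch of that contract, using LangChain's ConversationBufferMemory, its simplest memory class (the stored turns are replayed into each new prompt):

```python
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory stores prior turns verbatim and hands them
# back so they can be injected into the model's next prompt.
memory = ConversationBufferMemory()
memory.save_context(
    {"input": "I'm vegan, remember that."},
    {"output": "Got it, I'll only suggest vegan-friendly places."},
)

# What gets injected into the next prompt:
print(memory.load_memory_variables({})["history"])
# Human: I'm vegan, remember that.
# AI: Got it, I'll only suggest vegan-friendly places.
```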

This technique is sometimes referred to as “context retention” or “persistent context” rather than “memory,” but the end goal is the same: to make human-computer interaction feel so fluid, so natural, that users can easily forget what the chatbot might remember. This is also a potential boon for businesses deploying these chatbots that want to maintain an ongoing relationship with the customer on the other end.

“You can think of these as just a number of tokens that are being prepended to your conversation,” says Liam Fedus, a research scientist at OpenAI. “The bot has some intelligence, and behind the scenes it's looking at the memories and saying, 'These look like they're related; let me merge them.' And then that goes onto your token budget.”
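That description suggests a simple mechanism: serialize the stored memories, prepend them to the outgoing conversation, and charge the result against the model's context window. The sketch below is a hypothetical illustration of that accounting, using tiktoken to count tokens; the behind-the-scenes merging step Fedus describes is omitted:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 4096  # hypothetical context window

memories = [
    "User is vegan.",
    "User lives in Portland.",
]

def build_prompt(user_message: str) -> str:
    # Prepend the stored memories to the conversation, as Fedus describes.
    memory_block = "Known about the user:\n" + "\n".join(memories)
    prompt = f"{memory_block}\n\nUser: {user_message}"
    # The memories count against the same budget as everything else.
    used = len(enc.encode(prompt))
    assert used <= TOKEN_BUDGET, "memories plus message exceed the budget"
    return prompt

print(build_prompt("Where should I eat tonight?"))
```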

Fedus and Jang say that ChatGPT's memory is nowhere near the capacity of the human brain. And yet, almost in the same breath, Fedus points out that with ChatGPT's memory, you're limited to “a few thousand tokens.” If only.

Is this the hypervigilant virtual assistant that tech consumers have been promised for the past decade, or just another data-capture scheme that uses your likes, preferences, and personal information to serve a tech company better than it serves its users? Possibly both, though OpenAI might not put it that way. “I think the assistants of the past just didn't have the intelligence,” Fedus said, “and now we're getting there.”

Will Knight contributed to this story.