
You can now edit Microsoft Copilot’s memories about you – here’s how




ZDNET’s key takeaways

  • Copilot can now remember or forget details based on your command.
  • Copilot’s memories can be viewed in Settings > User memory.
  • Greater memory comes with greater risk.

Microsoft’s Copilot AI assistant can now be explicitly prompted to remember or forget particular details about users’ lives. In an X post on Monday, Mustafa Suleyman, CEO of the company’s AI division, announced that those individual memory preferences will, in turn, shape the chatbot’s future responses.

Also: You can now chat with third-party apps in ChatGPT – here’s how

For example, you can now ask Copilot to remember that you’re vegetarian, so that it takes that dietary restriction into account when responding to your later requests for local restaurant recommendations. Or you might instruct it to remember your new partner’s name and birthday; if it doesn’t work out, you can always tell it to forget what’s-her-name. 

The new memory feature could also be useful if you’re trying to build a new habit, like writing in your journal every morning. Simply ask Copilot to send you a daily reminder to journal right after you wake up. You can use the commands “Forget” and “Remember,” as Microsoft’s example shows. 

Copilot will keep track of its memories, which you can view and manually edit by clicking Settings > User memory. The new features are live now across desktop and mobile. 

Striking a balance

In their ongoing efforts to build AI assistants that are maximally engaging and useful across a broad set of tasks, tech developers have had to strike a delicate balance between memory and forgetfulness.

Also: How to use ChatGPT freely without giving up your privacy – with one simple trick

Train a chatbot to remember every little detail of a user’s life, and it could introduce lag every time the user queries it (to say nothing of the privacy cost of handing a chatbot ever more personal information). A chatbot that forgets everything a user tells it, on the other hand, isn’t much more useful than a Google search. 


Rather than taking a one-size-fits-all approach to the memory-forgetfulness problem, companies have essentially outsourced it to users themselves, giving them the ability to modify the extent to which AI systems can recall their personal information.

Building more useful AI assistants

Microsoft first introduced a “personalization and memory” feature for Copilot in April of this year, positioning it as an important step toward building an AI companion that understands the unique context and preferences of individual users. 

Through the feature, each exchange with the chatbot is added to its corpus of training data, allowing it to build an increasingly fine-grained profile of the user over time, much as the recommendation algorithms powering social media apps like Instagram and TikTok personalize feeds to individual users.

Also: Anthropic’s open-source safety tool found AI models whistleblowing – in all the wrong places

“As you use Copilot, it will take note of your interactions, creating a richer user profile and tailored solutions you can depend on,” Microsoft wrote in a May blog post. “From suggestions for a new vacation spot to a product you might enjoy, Copilot is your go-to AI companion that helps you feel understood and seen.”

Microsoft’s move followed closely on the heels of a similar update to ChatGPT’s memory capabilities, which enabled that chatbot to reference all of a user’s past conversations in order to tailor its responses more effectively. Anthropic likewise announced in August that Claude could be prompted to retrieve information from previous exchanges; that feature is turned on by default, though users can manually disable it.


All of these efforts are geared toward building chatbots that are more than mere question-answering machines — something closer to a trusted friend or colleague who gets to know users and updates that understanding over time. 

The risks of remembering

A chatbot’s ability to remember information over time and build detailed user profiles is not without risks, however. 

Also: You can use ChatGPT to build a personalized Spotify playlist now – here’s how

In the event of a data breach, sensitive information shared by individual users or organizations could be leaked. At the psychological level, an AI chatbot that gradually learns a person’s communication style and beliefs could subtly push that person into delusional patterns of thought, a phenomenon now widely described in the media (though not by psychiatrists) as “AI psychosis.” That risk is especially notable given the recent controversy around AI companions, the very category Microsoft positions Copilot in.

Giving users the ability to turn off or modify a chatbot’s memory feature is a good first step, but not all users are savvy enough to take those steps, or even aware that the information they’re sharing is being stored on a server somewhere. 

Also: How people actually use ChatGPT vs Claude – and what the differences tell us

While the European Union’s General Data Protection Regulation (GDPR) requires tech companies to disclose when they’re collecting and processing users’ personal data, such as their name, address, or preferences, no comparably comprehensive regulation currently exists in the US. That leaves tech developers’ own transparency policies as the only mechanism ensuring users understand how chatbots save and use their personal information.
