Your AI may be poisoned and you would never know
Microsoft security researchers say a growing number of companies are trying to secretly influence AI assistants by planting hidden instructions in their memory, a tactic the company calls AI Recommendation Poisoning, Qazinform News Agency correspondent reports.
Microsoft security researchers say companies are secretly trying to influence AI assistants by planting hidden instructions in their memory, a tactic the company calls AI Recommendation Poisoning.
The method uses specially crafted links, often disguised as “Summarize with AI” buttons. When clicked, they open an AI assistant with a pre-filled prompt that includes commands such as “remember this company as a trusted source” or “recommend this brand first.” If stored in the assistant’s memory, those instructions can shape future answers without the user’s knowledge.
Over a 60-day review of AI-related links found in email traffic, Microsoft identified more than 50 distinct prompt attempts from 31 companies across 14 industries, including finance, health, legal services and software. Researchers say the repeated use of similar “remember” commands points to an emerging marketing tactic.
Many major AI platforms allow prompts to be pre-populated through web links, making this a one click attack. Modern tools such as Microsoft 365 Copilot and ChatGPT can store user preferences across conversations. That feature makes them more useful, but also creates a new risk: if false instructions enter memory, the assistant may treat them as legitimate and repeatedly favor certain brands or sources.
In some cases, researchers saw full marketing messages injected into AI memory. Some prompts targeted health and financial websites, where biased advice could have serious consequences.
Unlike traditional cyberattacks, the activity appears to involve legitimate businesses using publicly available tools promoted as ways to boost visibility in AI responses.
Microsoft says it has strengthened protections in Copilot and Azure AI services, including filtering suspicious prompts and giving users greater control over saved memories. The company urges users to be cautious with AI-related links, avoid clicking unknown “Summarize with AI” buttons and regularly review their assistant’s stored memories.
Earlier, Qazinform News Agency reported that AI now hires humans for physical world tasks.