Skip to Content

Is Microsoft Copilot safe from the Reprompt data breach?

How can a single link hack my AI assistant?

Understanding the Reprompt Vulnerability

Microsoft recently addressed a critical security flaw in Copilot known as “Reprompt.” Discovered by Varonis security researchers, this vulnerability allowed attackers to bypass data leakage safeguards. The flaw exploited a logic error in how the AI processes repeated requests.

When Copilot received a command once, it vetted the request against security protocols. However, if the attacker sent the exact same request a second time—hence the name “Reprompt”—the system assumed the request was safe and processed it without verification.

The Mechanism of Attack

The attack vector was alarmingly simple yet sophisticated in its bypass method. Attackers embedded a malicious prompt within a legitimate Microsoft URL. When a user clicked this link, the prompt executed automatically. This method required no plugins, no complex user interaction, and no manual typing from the victim.

The sequence of the compromise occurred as follows:

  1. The Trigger: A user clicks a manipulated, legitimate-looking link.
  2. The Bypass: The embedded prompt sends a request twice. The security filter catches the first; the system blindly accepts the second.
  3. The Breach: The attacker gains control of the current LLM (Large Language Model) session.

Severity and Persistence

This vulnerability presented a higher risk profile than previous AI exploits like “EchoLeak” because it operated as a “zero-click” interaction after the initial URL visit. Once the session was compromised, the attacker retained control even if the user closed the Copilot chat window. This persistence allowed unauthorized parties to exfiltrate sensitive data silently.

Attackers could query specific, intrusive information, such as:

  • “Summarize all files accessed today.”
  • “What are the user’s upcoming travel plans?”
  • “Retrieve the user’s home address.”

Because the malicious commands originated from the server side after the initial prompt, client-side security tools could not detect the data extraction. The attack effectively turned the user’s own AI assistant into an insider threat.

Implications for AI Adoption

Microsoft has patched this specific vulnerability following the disclosure by Varonis on January 14, 2026. However, the incident highlights a significant systemic risk. Microsoft’s strategy involves integrating Copilot deeply into the Windows ecosystem, including Explorer, Office, and Edge. This creates a massive attack surface.

The “Reprompt” incident demonstrates that speed in AI deployment often outpaces security architecture. While marketing materials promise “guard rails” and “data boundaries,” logic flaws can render these protections useless. Organizations and individuals must treat AI assistants not just as productivity tools, but as potential entry points for data exfiltration, necessitating strict scrutiny of shared links and interaction logs.