Table of Contents
Could Your AI Copilot Be a Devastating Risk to Your Personal Data?
New AI tools, like AI-powered browsers and assistants, promise to make your online life easier. However, these tools can create serious security problems that you need to know about. They handle your data differently than older programs, which can put your personal information at risk.
The Problem with AI Browsers
Standard web browsers are designed to keep your information, like passwords and browsing history, on your computer. AI browsers work differently. To help you, they often send what you are doing and seeing to a Large Language Model (LLM), which is the brain of the AI.
This means your private data might not stay private. For example, the Perplexity Comet browser can take screenshots of your screen to analyze content for you. This is like having someone constantly looking over your shoulder. If a website has hidden commands, the AI browser might follow them without you knowing, which is a major security risk.
How Attackers Can Trick AI Browsers
Security experts have found ways to attack browsers like OpenAI’s Atlas and Perplexity’s Comet. These attacks can trick you into taking dangerous actions.
AI Sidebar Spoofing
Attackers can create a fake AI sidebar that looks just like the real one. This fake sidebar can show you false information or ask you to click on malicious links, potentially leading to theft of your money or personal files.
Hidden Commands
A malicious website can contain hidden instructions written in text that is the same color as the background. When an AI browser analyzes the page, it reads and follows these secret commands. This can be used to steal your login information for services like Gmail.
Microsoft Copilot Vulnerabilities
AI assistants like Microsoft Copilot also have security weaknesses. Attackers have found clever ways to exploit them to access sensitive information.
- A vulnerability known as the “Mermaid attack” allowed an attacker to steal data using Microsoft 365 Copilot. It worked by tricking Copilot with hidden commands inside an Excel spreadsheet. When a user asked Copilot to summarize the file, the AI would instead follow the hidden instructions and leak confidential data, including private emails.
- A new phishing method called “CoPhish” uses Microsoft Copilot Studio. Attackers abuse the tool to send fake permission requests from real Microsoft web domains. This makes the fraudulent requests look legitimate, tricking users into giving up access to their accounts.
How to Protect Yourself Online
While these AI tools can be useful, you must be careful. Your personal and financial information could be at risk if you are not cautious.
- Avoid using AI browsers for sensitive activities like online banking, accessing your email, or managing health records.
- Limit the permissions you grant to any AI browser or assistant. Only give access to the data that is absolutely necessary.
- Think twice before trusting an AI tool. Be aware that some programs advertised as “secure,” like the “Universe Browser,” were found to be malware that secretly tracked user activity and sent data to servers in China.
- Always be skeptical of instructions or summaries provided by an AI. The information can be manipulated or simply incorrect.