
Follow ZDNET: Add us as a preferred source on Google.
ZDNET's key takeaways
- Agentic AI browsers have opened the door to prompt injection attacks.
- Prompt injection can steal data or push you to malicious websites.
- Developers are working on fixes, but you can take steps to stay safe.
The sudden release of the artificial intelligence (AI)-based chatbot ChatGPT took the world by storm, and now we are beginning to see how AI is being applied in everything from surveillance cameras to productivity tools.
Also: How researchers tricked ChatGPT into sharing sensitive email data
Enter agentic AI. In general, these AI models are able to perform tasks that require some reasoning or information gathering, such as acting as live agents or helpdesk assistants for customer queries, or are used for contextual searches. The concept of agentic AI has now spread to browsers, and while they may one day become the baseline, they have also introduced security risks, including prompt injection attacks.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
What are prompt injection attacks?
AI makes mistakes, and the datasets that large language models (LLMs) rely upon aren't always accurate -- or safe. Even if a browser is from a trustworthy, reputable provider, that doesn't mean any integrated AI systems, including search assistants or chatbots, are free of risk.
Google tasked a Red Team to find ways that AI systems are commonly abused, and these include data poisoning, inputting adversarial prompts into LLMs, creating backdoors, and prompt injection attacks.
Prompt injection attacks occur when a threat actor inserts malicious content into text prompts to manipulate an AI system. As Google's team noted, this can result in unexpected, biased, incorrect, and offensive responses -- or, going further, malicious responses with more serious consequences.
Also: Use AI browsers? Be careful. This exploit turns trusted sites into weapons - here's how
Prompt injection attacks can be direct, in which LLMs are exploited, or indirect, such as when a design flaw, handling issue, or legitimate resource is abused. As an example, Cato CTRL researchers recently revealed HashJack, a prompt injection attack technique that can manipulate AI browsers and context windows to display malicious content.
In this case, malicious instructions are crafted and hidden in a website through URL fragments -- the seemingly junk or tracking code we often ignore in website links. If a victim visits this domain and then uses their AI browser to ask a question, hidden prompts are fed to the AI assistant, which could lead to malicious answers or the display of phishing links. Cybercriminals may also be able to steal personal information input into the AI browser window.
How to stay safe
While reducing the risk of prompt injection attacks is more in the hands of AI browser developers than consumers, here are five ways you can preserve and protect your privacy and security if you adopt an AI browser.
Also: Are AI browsers worth the security risk? Why experts are worried
1. Be wary of revealing sensitive or personal information
Regardless of whether you are using a traditional or an AI browser, you should always be conservative about sharing personal information -- and this includes your financial details. As a programmer on X commented, prompt injection vulnerabilities in agentic AI browsers could lead to financial disaster for users.
2. Patch and update
Just like traditional software, AI browsers and AI systems need security updates and vulnerability patches. If you don't update when they are available, you could be opening up this software to exploits that lead to prompt injection attacks. Furthermore, this also applies to any device using AI, including your laptop and mobile device.
3. Don't assume AI is trustworthy
As the HashJack technique demonstrates, just because an AI chat or assistant has answered your question, this doesn't mean its answers are accurate or, indeed, safe. AI is not the secure fount of all knowledge, and you should be careful about clicking any links or attachments that appear suspicious to you.
4. AI can phish, too
If you're using AI to handle your email or to create content such as documents on your behalf, if it has been compromised, be aware that phishing can also be applied to artificial intelligence. Verify links, telephone numbers, or contact details provided to you via AI assistants before using them, as you may find yourself embroiled in a scam otherwise.
5. Use multi-factor authentication
Stay vigilant. AI is evolving, but so are the security risks associated with it. You should use every security and privacy tool at your disposal, and this includes using multi-factor authentication (MFA). That way, even if a prompt injection attack has led to the theft of your account credentials, you may be able to stop cybercriminals from accessing your accounts. We also recommend you consider using a VPN.
Also: Beware of promptware: How researchers broke into Google Home via Gemini
Whenever a new technology is invented and launched, cyberattackers and miscreants will find a way to exploit it -- and AI-based browsers are no exception to the rule. These tips aren't to say that you should completely avoid agentic AI browsers, but you should display caution when using them -- especially if you trust them to handle your personal, sensitive data.






English (US) ·