What Are Prompt Injection Attacks? Experts Warn Of Vulnerabilities

What Are Prompt Injection Attacks? Experts Warn Of Vulnerabilities


Brave also published a blog post revealing new security vulnerabilities in AI browsers, following up on the earlier report of the Perplexity Comet vulnerability. The firm explained that indirect prompt injection was not an isolated problem, but a systemic danger to the broader category of agentic browsers.

The research highlighted two new attack vectors. In Perplexity Comet, malicious actors can embed nearly invisible instructions in website screenshots. When a user captures a screenshot and asks questions about it, the AI assistant may interpret these hidden prompts as commands, potentially using browser tools maliciously. Similarly, in the Fellou browser, just navigating to a webpage containing malicious visible instructions can cause the AI to process them alongside the user’s query, again enabling unintended actions.

These vulnerabilities, according to Brave, undermine traditional web security assumptions, including the same-origin policy, because AI agents possess the user’s authenticated credentials. Even innocuous actions, like summarising a Reddit post, could expose sensitive accounts, including banks and email services.

While the company continues to explore long-term solutions, it claimed that agentic browsing is inherently unsafe. Until strong protections are in place, browsers should isolate agentic behaviour and they would require plain user intervention for sensitive operations, Brave added.

OpenAI rolled out its Guardrails safety framework on Oct. 6, as part of its new AgentKit toolset for developers, aimed at offering an enhanced security framework for building secure AI agents. However, research firm HiddenLayer warned of an alarming flaw in the safety measures for Large Language Models (LLMs), according to Hackread.com, a new platform focused on hacking and cybersecurity.  



Source link