A new wave of AI-powered web browsers like OpenAI’s ChatGPT Atlas and Perplexity’s Comet are taking aim at Google Chrome’s dominance, positioning themselves as the next-generation entry point to the internet. Their standout feature is the inclusion of web browsing AI agents—tools designed to complete online tasks autonomously by clicking through websites, filling forms, and retrieving information.
However, these capabilities come with growing concerns about user privacy. Many consumers may not realize the significant risks tied to these agentic browsing systems, which experts say are introducing new vulnerabilities the tech industry is still struggling to manage.
Cybersecurity specialists warn that AI browser agents could pose far greater privacy risks than traditional browsers. They urge users to think carefully about the level of access they grant these agents—and whether the benefits of convenience truly outweigh the dangers.
To function effectively, AI browsers like Comet and ChatGPT Atlas often request broad permissions, including access to a user’s email, calendar, and contact list. During independent testing, these tools were found to be moderately useful for simple web tasks, especially when granted extensive access.
Still, when it comes to more complex operations, current AI agents frequently stumble—taking too long or failing to complete the task altogether. For now, using them can feel more like a novelty demonstration than a true productivity boost.
The trade-off for this convenience is potentially steep. The biggest issue surrounding these AI agents is a new form of cyber threat called prompt injection attacks, which occurs when malicious instructions are embedded within a web page to manipulate the AI’s behavior.
When an AI agent processes such a page, it can unknowingly execute harmful commands, exposing personal information such as emails or account credentials—or even performing unintended actions like making online purchases or posting on social media.
Prompt injection attacks are a relatively new phenomenon, emerging alongside AI agents themselves. Unfortunately, the cybersecurity community hasn’t yet found a definitive way to prevent them, which raises the stakes as more consumers begin using tools like ChatGPT Atlas.
The privacy-focused browser company Brave, founded in 2016, recently published research concluding that indirect prompt injection attacks are a systemic challenge affecting the entire class of AI-based browsers. This follows earlier findings that identified similar issues with Perplexity’s Comet.
“There’s tremendous potential to make online life easier,” said Shivan Sahib, a senior privacy engineer at Brave. “But when the browser starts acting on behalf of the user, it crosses a fundamentally new and risky line in browser security.”
OpenAI’s Chief Information Security Officer, Dane Stuckey, also addressed these concerns in a public post this week. He acknowledged that “prompt injection remains an unsolved security problem,” warning that attackers will continue investing resources to exploit weaknesses in ChatGPT’s agent mode.
Perplexity’s own security team echoed similar sentiments, calling prompt injection such a severe issue that it “demands rethinking browser security from the ground up.” The company noted that these attacks can manipulate the AI’s decision-making process, effectively turning the agent’s abilities against its user.
Both OpenAI and Perplexity have rolled out preventive measures to limit the risks. OpenAI introduced a “logged-out mode”, where the browser agent navigates the web without access to a user’s personal accounts—reducing potential data exposure.
Perplexity, on the other hand, claims to have developed a real-time detection system that identifies and neutralizes prompt injection attacks as they occur. While promising, experts say these measures don’t guarantee total safety.
Cybersecurity researchers have praised these efforts but caution that neither company can claim its defenses are impenetrable. Even with safeguards, AI agents remain vulnerable to clever and constantly evolving attack methods.
Steve Grobman, Chief Technology Officer at McAfee, explained that the core issue lies in how large language models interpret information. These systems struggle to distinguish between valid instructions and malicious input, making it difficult to fully eliminate the threat.
“It’s a constant cat-and-mouse game,” Grobman said. “As defenses improve, so do the attackers’ techniques. Both sides are evolving rapidly.” He noted that prompt injection attacks have already advanced beyond simple hidden text—some now use images embedded with encoded commands to deceive AI agents.
Cybersecurity expert Rachel Tobac, CEO of SocialProof Security, advised users to stay vigilant. She warned that AI browser credentials could become a new target for hackers and urged users to protect their accounts with unique passwords and multi-factor authentication.
Tobac further suggested restricting what early versions of ChatGPT Atlas and Comet can access—particularly sensitive accounts tied to banking, healthcare, or personal data. As these technologies mature, their security measures will improve, but for now, caution remains the safest strategy.
Also Read:
Top 5 Best AI Tools for Social Media to Try In 2025!
Instagram users can now access Meta AI editing tools directly in IG Stories








