Security researchers exploit ChatGPT to extract sensitive Gmail data through AI vulnerability

Security researchers exploit ChatGPT to extract sensitive Gmail data through AI vulnerability

Vulnerability in AI Tools Exploited to Access Gmail Data, Reports 24brussels.

On September 19, 2025, security researchers disclosed a serious vulnerability in AI-tools, having exploited it to extract sensitive information from Gmail inboxes without alerting users. OpenAI has since addressed the flaw, but this incident underscores the growing risks associated with AI agents in data management.

The incident, dubbed “Shadow Leak,” was revealed by the security firm Radware this week. It involved a manipulation of AI agents’ capabilities, allowing them to act autonomously by surfing the web and interacting with links. While these tools are marketed as enhancers of productivity, they can pose significant security threats when misused.

Researchers utilized a method known as prompt injection to hijack the AI’s functions. This form of attack effectively commandeers the AI agent to act on behalf of the attacker, drawing on its access to users’ personal accounts—authorization that users commonly grant without realizing the implications. The exploitation was particularly dangerous as it could be executed without prior detection, highlighting a critical vulnerability in AI management.

This attack specifically involved OpenAI’s Deep Research tool, embedded within ChatGPT, which had launched earlier in the year. Researchers succeeded in planting a prompt injection within a Gmail account accessible to the AI agent, waiting for an opportune moment to activate.

Upon the user’s subsequent interaction with Deep Research, the hidden instructions prompted the AI to sift through emails for personal and HR-related information and transmit the collected data back to the hackers, unbeknownst to the user.

Executing this type of covert operation is complex and requires significant trial and error, as the researchers noted, stating, “This process was a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”

Uniquely, the researchers reported that the Shadow Leak attack operated directly on OpenAI’s cloud infrastructure, allowing it to circumvent conventional cybersecurity defenses. The exploit’s ability to avoid detection makes it particularly perilous.

Radware cautioned that this instance serves as a proof-of-concept for potential vulnerabilities in other applications connected to Deep Research, including Outlook, GitHub, Google Drive, and Dropbox. “The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes, or customer records,” they remarked.

As a response, OpenAI has closed the vulnerability identified in this case, as confirmed by the researchers. However, the incident raises essential questions about the security implications of relying on AI in handling sensitive information and the broader risks of outsourcing personal data to automated agents.

Leave a Reply

Your email address will not be published.

Don't Miss

OpenAI to introduce age-gated erotica feature for ChatGPT users in December

OpenAI to introduce age-gated erotica feature for ChatGPT users in December

OpenAI to Introduce ‘Erotica’ Feature for ChatGPT Users Pending Age Verification OpenAI
Samsung to unveil details of Project Moohan mixed reality headset on October 21

Samsung to unveil details of Project Moohan mixed reality headset on October 21

Samsung to Unveil Project Moohan Headset on October 21 Samsung has announced