
ChatGPT will now tell you when your private data may be at most risk of leaking
India Today
OpenAI has rolled out Lockdown Mode and new Elevated Risk labels in ChatGPT to help users keep their sensitive information safer. According to the company, these tools will warn users about potentially risky features and limit external connections, reducing the chances of data leaks caused by prompt injection attacks.
AI chatbots like ChatGPT have become a go-to helper for many, whether it’s professional tasks or personal work. But at times, sharing sensitive data or using connected features could put your data at greater risk. To help users stay safer, OpenAI has introduced two new security features: Elevated Risk labels and Lockdown Mode. The labels will warn users when a web-connected feature could expose more data, while Lockdown Mode will limit how ChatGPT connects to outside systems.
The company says these features are designed to give users clearer alerts and more control over their information, while also reducing the risk of prompt injection attacks. “As AI systems take on more complex tasks — especially those that involve the web and connected apps — the security stakes change. One emerging risk has become especially important: prompt injection,” the company wrote in its official blog post. “We’re introducing two new protections designed to help users and organisations mitigate prompt injection attacks, with clearer visibility into risk and stronger controls.”
Prompt injection is a technique where hackers hide malicious instructions inside web pages or files to trick an AI system into revealing confidential information or taking unintended actions. As millions of people across the globe are using AI chatbots like ChatGPT, for work like reading documents, browsing the web, or connecting to other apps, the risks from such tricks become more serious.
OpenAI’s new tools in ChatGPT aim to protect users from these threats.
OpenAI has introduced Lockdown Mode as an optional setting that will tightly restrict how ChatGPT interacts with external systems. When enabled, it will limit or disable certain tools and connections, such as live web browsing or integrations that send and receive information from outside services. By reducing these external interactions, OpenAI aims to shrink the “attack surface” that hackers could exploit.
According to the company, Lockdown Mode is not necessary for most everyday users. It is mainly designed for people who handle highly sensitive information or believe they may be at elevated risk, such as journalists, executives, researchers or security professionals.













