Sensitive Information Disclosure is the second item on the OWASP Top 10 List of Risks for LLM, and for good reason. But how does this happen, and how can we prevent it?
In 2025, AI security is a relevant issue. With the landscape changing so rapidly and new risks emerging every day, it is difficult for developers and security teams to stay on top of AI security.
The OWASP Top 10 Risks for LLM attempts to break down the most prevalent vulnerabilities we are seeing in cyberspace, in order to better understand where the gaps are. In the last post in this series, we explored Prompt Injection, the number one issue on the OWASP list.
Today, we’ll be talking about another key issue: Sensitive Information Disclosure.
As the name suggests, Sensitive Information Disclosure stems from information that was not intended to be public becoming available to other parties, including malicious parties. The information in question can include Personally Identifiable Information (PII), health records, financial data, and more.
LLMs may inadvertently expose this sensitive information because of issues such as poor configuration, data leaks, or even other types of attacks including prompt injection to the LLM.
There are a variety of strategies that can be used to mitigate the risk of sensitive information disclosure. The OWASP Top 10 for LLM gives us a brief checklist of the most important methods, but these alone may not be enough to prevent the possibility of SID.
LLM02: Sensitive Information Disclosure is a critical issue for LLMs and a contributing cause of some recent AI breaches. There are many ways an LLM’s sensitive information can be disclosed, whether from poor configuration of the model itself, standard data leaks, and other types of attacks including Prompt Injection.
When sensitive information is disclosed to bad actors, they can use it for malicious purposes and to launch further attacks. However, there are a variety of steps and measures users can implement to mitigate the risk of an SID, including data sanitization, input validation, access controls and more.
If you’re new to AI security, or struggling to keep up, the OWASP Top 10 for LLM is a great resource on the biggest risks in today’s landscape. If you’re looking for more in-depth information, check out FireTail’s recent report on the State of AI & API Security. We’ll see you next week for the third installment in this blog series on LLM03: Supply Chain.
In the meantime, if you want to see how FireTail can simplify your AI security posture, schedule a demo here, or start trying it out for free, today!