Today, we are taking a deep dive into risk #6 in the OWASP Top 10 for LLMs: Excessive Agency. Read on to learn all about it: what it is, how it occurs, and how to prevent it in your own organization’s LLMs.
In 2025, we are seeing an unprecedented rise in the volume and scale of AI attacks. Since AI is still a relatively new beast, developers and security teams alike are struggling to keep up with the changing landscape. The OWASP Top 10 Risks for LLMs is a great jumping-off point to gain insight into the biggest risks and how to mitigate them.
Agency refers to a model’s ability to call functions, interface with other systems, and undertake actions. Developers grant each AI agent the degree of agency its use case requires.
When an LLM malfunctions, an AI agent should respond within the bounds of the agency it has been given. Excessive Agency occurs when an AI agent instead responds inappropriately, performing damaging actions in response to unexpected, ambiguous, or manipulated LLM outputs.
Excessive Agency is ultimately caused by design flaws, stemming from one of the following:

- Excessive functionality: an agent has access to tools or functions beyond what its use case requires.
- Excessive permissions: an agent’s tools hold broader permissions on downstream systems than needed.
- Excessive autonomy: an agent can perform high-impact actions without independent verification or human approval.
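To make the "excessive functionality" flaw concrete, here is a minimal sketch of scoping an agent's tools to only the actions its use case needs. The `ToolRegistry` class, tool names, and action lists are hypothetical, for illustration only:

```python
# Minimal sketch: restrict an agent's tools to an explicit action allowlist.
# The registry class and mailbox tool are hypothetical illustrations.

class ToolRegistry:
    """Holds the tools an agent is allowed to call, each with a scoped action list."""

    def __init__(self):
        self._tools = {}

    def register(self, name, func, allowed_actions):
        # Record the tool alongside an explicit set of permitted actions.
        self._tools[name] = (func, set(allowed_actions))

    def call(self, name, action, *args):
        if name not in self._tools:
            raise PermissionError(f"unknown tool: {name}")
        func, allowed = self._tools[name]
        if action not in allowed:
            # Guardrail: refuse any action outside the tool's declared scope.
            raise PermissionError(f"action '{action}' not permitted for tool '{name}'")
        return func(action, *args)


def mailbox_tool(action, *args):
    # Hypothetical mail backend; only 'read' is wired up here.
    if action == "read":
        return f"read message {args[0]}"
    raise NotImplementedError(action)


# A summarization agent needs to *read* mail, never send or delete it.
registry = ToolRegistry()
registry.register("mailbox", mailbox_tool, allowed_actions=["read"])

print(registry.call("mailbox", "read", 42))   # permitted
try:
    registry.call("mailbox", "delete", 42)    # excessive functionality, blocked
except PermissionError as e:
    print("blocked:", e)
```

The key design choice is that the scope lives in the registry, not in the LLM's prompt, so a manipulated model output cannot widen it.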
The effects of Excessive Agency vulnerabilities can be catastrophic, leading to PII breaches, financial losses, and more. Fortunately, there are ways to mitigate and prevent Excessive Agency.
Overall, Excessive Agency occurs when an LLM performs actions and behaves in ways outside of what it was created for. It is therefore a major risk to AI security, and needs to be mitigated through secure coding and development practices such as enforcing authorization, sanitizing data, and more.
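One such practice is a human-in-the-loop guardrail: high-impact actions proposed by an agent are held until explicitly approved. A minimal sketch, assuming a hypothetical `approve` callback (e.g. a ticket queue or UI prompt) and an illustrative set of action categories:

```python
# Minimal sketch of a human-in-the-loop guardrail for agent actions.
# The action names and approve() callback are assumptions for illustration.

HIGH_IMPACT = {"send_email", "delete_record", "transfer_funds"}

def execute_action(action, params, approve):
    """Run an agent-proposed action, pausing for approval if it is high-impact.

    `approve` is a callback (human reviewer, ticket queue, etc.) returning bool.
    """
    if action in HIGH_IMPACT and not approve(action, params):
        # Excessive-autonomy guardrail: never execute unapproved damaging actions.
        return {"status": "rejected", "action": action}
    # ... dispatch the action to the real downstream system here ...
    return {"status": "executed", "action": action}

# Usage: a reviewer that rejects every high-impact request by default.
result = execute_action("transfer_funds", {"amount": 100}, approve=lambda a, p: False)
print(result)   # {'status': 'rejected', 'action': 'transfer_funds'}
```

Requiring approval outside the LLM's control means that even a fully compromised prompt cannot complete a damaging action on its own.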
To learn how FireTail can help you protect against Excessive Agency and the other risks outlined in the OWASP Top 10 for LLMs, set up a demo or get started with our free tier today.