LLM06: Excessive Agency

Today, we are taking a deep dive into the OWASP #6 risk for LLMs: Excessive Agency. Read on to learn what it is, how it occurs, and how to prevent it in your own organization's LLMs.

In 2025, we are seeing an unprecedented rise in the volume and scale of AI attacks. Since AI is still a relatively new beast, developers and security teams alike are struggling to keep up with the changing landscape. The OWASP Top 10 for LLM Applications is a great jumping-off point for understanding the biggest risks and how to mitigate them.

Excessive Agency

Agency refers to a model's ability to call functions, interface with other systems, and take actions. Developers grant each AI agent the degree of agency its use case requires.

Even when the underlying LLM misbehaves, an AI agent should stay within the agency it has been given. Excessive Agency occurs when it doesn't: the agent performs damaging actions in response to unexpected, ambiguous, or manipulated LLM outputs, whether those come from a hallucination, a prompt injection, or a compromised extension.

Excessive Agency is ultimately a design flaw, typically stemming from one or more of the following:

  • Excessive functionality: an LLM has access to extensions that include functions it does not need to do its job, or it retains access to plugins from the development phase that are no longer needed
  • Excessive permissions: an LLM extension holds permissions on downstream systems and functionality beyond what its intended operation requires (illustrated in the sketch after this list)
  • Excessive autonomy: an LLM-based application takes high-impact actions without any independent verification or approval

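As a rough illustration, here is a minimal sketch in plain Python of the difference between an over-scoped and a least-privilege extension set. The tool names and the register_tool helper are hypothetical and not tied to any particular agent framework:

```python
# Hypothetical tool registry for an LLM agent -- names and helpers are
# illustrative only, not part of any specific framework.
ALLOWED_TOOLS = {}

def register_tool(name, func):
    """Expose a single, narrowly scoped function to the agent."""
    ALLOWED_TOOLS[name] = func

def crm_admin(query: str) -> str:
    # Excessive functionality / permissions: one catch-all extension that can
    # read AND modify any record -- far more than a support agent needs.
    raise NotImplementedError("full read/write access to the CRM")

def get_order_status(order_id: str) -> str:
    # Least privilege: a read-only lookup scoped to the agent's actual job.
    return f"status for order {order_id}"

# Only the narrow, necessary function is exposed to the model.
register_tool("get_order_status", get_order_status)
# Deliberately NOT registered: crm_admin, run_shell, send_email, ...
```
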
The effects of Excessive Agency vulnerabilities can be catastrophic, leading to PII breaches, financial losses, and more. Fortunately, there are ways to mitigate and prevent Excessive Agency:

  • Limit extensions: Only allow the LLM to interact with the minimum set of extensions necessary for its task, as in the sketch above.
  • Know your agents: If you can’t see it, you can’t secure it! Keep a centralized inventory to track all agents and interactions.
  • Limit extension functionality: Ensure that the functions implemented in an LLM's extensions are strictly necessary for its intended purpose.
  • Assess your agents: Test agents as a whole, end to end, including the entirety of their application code.
  • No open-ended extensions: Avoid open-ended extensions (for example, "run a shell command" or "fetch a URL") in favor of extensions with more granular functionality; the flexibility they offer opens the LLM up to more vulnerabilities than it is worth.
  • Require human approval: For high-impact actions, put guardrails in place that require sign-off from an actual user before the action is executed (see the first sketch after this list).
  • Assess application code: Review input and output handling to see where upstream and downstream vulnerabilities lie.
  • Sanitize LLM inputs and outputs: Sanitization is a best practice for AI security in general; in particular, follow OWASP's Application Security Verification Standard (ASVS) recommendations, with a strong focus on input sanitization (see the second sketch after this list).
  • Documentation is king: We've said it before and we'll say it again: log everything carefully and monitor those logs with detections.
  • Complete mediation: Instead of relying on the LLM to decide whether an action is allowed, implement authorization in downstream systems and enforce the complete mediation principle so that every request is validated before it is completed (see the final sketch after this list).

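To make the human-approval guardrail concrete, here is a minimal sketch; the tool names, the dispatcher, and the approval queue are hypothetical, and a real implementation would route queued items to a review UI or ticketing system:

```python
# Tool calls proposed by the LLM are checked against a list of high-impact
# actions; those actions are queued for a human instead of executed directly.
HIGH_IMPACT_TOOLS = {"issue_refund", "delete_account", "send_bulk_email"}

def run_tool(tool_name: str, args: dict) -> str:
    # Placeholder for the real tool dispatcher.
    return f"executed {tool_name} with {args}"

def execute_tool_call(tool_name: str, args: dict, approval_queue: list) -> str:
    """Gate high-impact actions behind human approval instead of acting on
    the model's output directly."""
    if tool_name in HIGH_IMPACT_TOOLS:
        approval_queue.append({"tool": tool_name, "args": args})
        return f"'{tool_name}' requires human approval and has been queued."
    return run_tool(tool_name, args)
```
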
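For input and output sanitization, a minimal sketch might look like the following. The character limit and patterns are illustrative only and are no substitute for the full ASVS guidance:

```python
import html
import re

MAX_INPUT_CHARS = 4000  # illustrative bound on untrusted input

def sanitize_input(user_text: str) -> str:
    """Normalize and bound untrusted input before it reaches the prompt."""
    text = user_text.strip()[:MAX_INPUT_CHARS]
    # Strip control characters that can be used to smuggle instructions.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)

def sanitize_output(model_text: str) -> str:
    """Treat model output as untrusted before rendering or passing it on."""
    # Escape HTML so a response cannot inject markup into a web UI.
    return html.escape(model_text)
```
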
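Finally, for complete mediation, the key point is that authorization lives in the downstream system, not in the prompt. Here is a minimal sketch with a hypothetical permission model:

```python
class AuthorizationError(Exception):
    pass

# The agent's service identity gets only the scopes its job requires.
PERMISSIONS = {
    "support-agent": {"read:orders"},
    "billing-service": {"read:orders", "write:refunds"},
}

def authorize(principal: str, scope: str) -> None:
    """Validate every request against the caller's own permissions."""
    if scope not in PERMISSIONS.get(principal, set()):
        raise AuthorizationError(f"{principal} lacks scope '{scope}'")

def issue_refund(principal: str, order_id: str, amount: float) -> str:
    # Enforced on every call, regardless of what the LLM "decided".
    authorize(principal, "write:refunds")
    return f"refund of {amount} issued for order {order_id}"
```

With this in place, an agent that hallucinates a refund request simply receives an AuthorizationError from the downstream system rather than carrying out the action.
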
Overall, Excessive Agency occurs when an LLM performs actions and behaves in ways outside of what it was created for. It is a major risk to AI security and needs to be mitigated through secure coding and development practices such as enforcing authorization, sanitizing data, and more.

To learn how FireTail can help you protect against Excessive Agency and the other risks outlined in the OWASP Top 10 for LLM Applications, set up a demo or get started with our free tier today.