The OWASP Top 10 Risks for LLMs helps shed light on the top vulnerabilities facing AI in today’s landscape. In this blog, we’ll go over LLM09: Misinformation, covering what it is, how to mitigate it, and more.
In an ecosystem of constantly rising AI threats and attacks, the OWASP LLM Top 10 is here to give guidance on the biggest risks in the landscape and how to combat them. Today’s blog dives into #8: Vector and Embedding Weaknesses.
Learn how to detect Shadow AI across your organization, spot early risks, and keep data compliant with FireTail’s real-time AI visibility platform.
Discover what Shadow AI is, why it matters for enterprise AI security, and how FireTail helps eliminate risks with detection and governance solutions.
In 2025, AI is revolutionizing our cyber landscape and changing everything we know about cybersecurity. Luckily, the NIST AI Risk Management Framework is here to help. Join us for an in-depth exploration of the AI RMF, which is updated for the present landscape.
The OWASP Top 10 Risks for LLMs is a comprehensive list for security researchers to assess vulnerabilities in AI models. Today’s blog takes an in-depth look at item 7: System Prompt Leakage.
Once again, Docker APIs are a target of threat actors in a new method of attack dating back to June 2025, or even earlier. Research is ongoing.
Resource Policies let you set automated guardrails for your AI resources, catching changes and policy violations the moment they happen. They help teams reduce risk, enforce governance, and maintain continuous compliance without manual effort.
Agentic AI is introducing new risks to cybersecurity worldwide. The OWASP Top 10 Risks for LLMs breaks down the biggest risks in the landscape. Today’s blog will tackle LLM06: Excessive Agency.
FireTail was one of four startups selected as a finalist in the Black Hat USA 2025 Startup Spotlight Competition. This week was unforgettable and reaffirmed the urgent demand for AI security solutions.
As AI attacks increase, it is more important than ever to be aware of risks. The OWASP Top 10 Risks for LLMs is a great jumping-off point. In this blog, we’ll be diving deep into the 5th item on the list: Improper Output Handling.
It is no secret in 2025 that AI can be abused to launch attacks by threat actors. But the “how” and “why” of these use cases are continuing to change. A recent security report revealed many of the ways in which OpenAI’s ChatGPT could be exploited.
In this blog series, we’re breaking down the OWASP Top 10 risks for LLMs and explaining how each one manifests and can be mitigated. Today’s risk is #4 on the list: Data and Model Poisoning. Read on to learn more…
Computers going rogue used to be the stuff of science fiction. But in 2025, it is becoming real. Join us in this blog as we investigate some cases where Artificial Intelligence has behaved like it has a mind of its own…
We’ve talked before about Mean Time To Attack, or MTTA, which has become alarmingly short for new vulnerabilities across the cyber landscape. In this blog, we’ll dive into the “how” and “why” of this…
The OWASP Top 10 List of Risks for LLMs helps developers and security teams determine where the biggest risk factors lie. In this blog series from FireTail, we are exploring each risk one by one, how it manifests, and mitigation strategies. This week, we’re focusing on LLM03: Supply Chain vulnerabilities.
Did you know that some AI chats capture and log your chat before you even submit it? This creates a huge security problem for both individuals and organizations whose employees use LLMs. Luckily, FireTail is working on a solution...
Our modern “Software as a Service” model is becoming a challenge for cybersecurity teams within large enterprises, as attacks continue to rise in volume and complexity across the cyber realm. Security needs to be a consideration from code to cloud, or any progress we make will be undone just as quickly.
OWASP’s Top 10 for LLMs is a good starting point for teams to learn about AI security risks. In this series, we’ll go over each risk and practices to protect against them. Today, we’re tackling LLM02: Sensitive Information Disclosure.
FireTail’s latest platform update gives customers expanded AI security features and discovery capabilities to better find, document and protect AI initiatives across your organization. Here, we look at what the update covers and the benefits these new features deliver for FireTail customers.
Security teams today face a dual challenge: protecting AI systems from external threats while securing the APIs that power them. The reality is clear—if your APIs aren’t secure, neither is your AI.
A BreachForums user claimed to have breached OmniGPT and shared samples of stolen data to back up the claim. Weeks later, researchers are still scrambling to figure out the scope, attack method, and more.
Today’s cyber landscape is littered with threats, risks, and vulnerabilities. Every week, we are seeing an increase not only in attacks, but also in the methods used to attack. This week, a new family of malware was discovered exploiting Microsoft’s Graph API.
AI security and API security run alongside each other, much like a double rainbow. Each one contains a full spectrum of security requirements that work in tandem with one another.
AI is revolutionizing industries at an unprecedented pace. But as organizations integrate AI into their workflows, they are encountering serious security risks. In fact, 97% of organizations using generative AI have reported security incidents. Traditional security tools are failing to keep up, leaving companies vulnerable to data breaches, adversarial attacks, and compliance risks.
We’re only a month into the new year, and already, the internet is buzzing with news about AI. Most recently, China’s AI platform DeepSeek has been making whale-sized waves in the cyberworld. But a week after the launch of its new model on January 20th, a tsunami-sized wave hit the system.
In 2025, AI is the biggest development in cybersecurity and the talk of tech experts everywhere. But as AI continues to develop, we are seeing a surge in not only the benefits, but also the risks of artificial intelligence.
GDPR demands transparency, accountability, and user control over personal data. However, many organizations are inadvertently falling short of these obligations due to the unmonitored integration of AI tools—often via APIs—into their systems. The result? Compliance gaps that could lead to fines, operational chaos, and reputational damage.
Everybody is talking about AI right now. It's the hottest topic in tech. But few people are talking about the APIs that underpin these AI platforms. Here we look at why effective API security is a must for any organization that wants to harness the power of AI.