In this blog series, we’re breaking down the OWASP Top 10 risks for LLMs, explaining how each one manifests and how it can be mitigated. Today’s risk is #4 on the list: Data and Model Poisoning. Read on to learn more…
Computers going rogue used to be the stuff of science fiction. But in 2025, it is becoming real. Join us in this blog as we investigate some cases where Artificial Intelligence has behaved like it has a mind of its own…
We’ve talked before about Mean Time To Attack, or MTTA, which has grown alarmingly short for new vulnerabilities across the cyber landscape. In this blog, we’ll dive into the “how” and “why” of this…
The OWASP Top 10 List of Risks for LLMs helps developers and security teams determine where the biggest risk factors lie. In this blog series from FireTail, we are exploring each risk one by one: how it manifests and the strategies that mitigate it. This week, we’re focusing on LLM03: Supply Chain vulnerabilities.
Did you know that some AI chats capture and log your chat before you even submit it? This creates a huge security problem for both individuals and organizations whose employees use LLMs. Luckily, FireTail is working on a solution...
Our modern “Software as a Service” model is becoming a challenge for cybersecurity teams within large enterprises, as attacks continue to rise in volume and complexity across the cyber realm. Security needs to be a consideration from code to cloud, or any progress we make will be undone just as quickly.
OWASP’s Top 10 for LLMs is a good starting point for teams to learn about AI security risks. In this series, we’ll go over each risk and the practices that protect against it. Today, we’re tackling LLM02: Sensitive Information Disclosure.
FireTail’s latest platform update gives customers expanded AI security features and discovery capabilities to better find, document, and protect AI initiatives across their organizations. Here, we look at what the update covers and the benefits these new features deliver for FireTail customers.
Security teams today face a dual challenge: protecting AI systems from external threats while securing the APIs that power them. The reality is clear—if your APIs aren’t secure, neither is your AI.
A BreachForums user claimed to have breached OmniGPT and shared samples of stolen data to back up the claim. Weeks later, researchers are still scrambling to determine the scope of the breach, the attack method, and more.
Today’s cyber landscape is littered with threats, risks, and vulnerabilities. Every week, we are seeing an increase not only in attacks, but also in the methods used to attack. This week, a new family of malware was discovered exploiting Microsoft’s Graph API.
AI security and API security run alongside each other, much like a double rainbow. Each contains a full spectrum of security requirements, and the two work in tandem.
AI is revolutionizing industries at an unprecedented pace. But as organizations integrate AI into their workflows, they are encountering serious security risks. In fact, 97% of organizations using generative AI have reported security incidents. Traditional security tools are failing to keep up, leaving companies vulnerable to data breaches, adversarial attacks, and compliance risks.
We’re only a month into the new year, and already, the internet is buzzing with news about AI. Most recently, China’s AI platform DeepSeek has been making whale-sized waves in the cyberworld. But a week after the launch of its new model on January 20th, a tsunami-sized wave hit the system.
In 2025, AI is the biggest advancement in cybersecurity and the talk of tech experts everywhere. But as AI continues to develop, we are seeing a surge not only in the benefits, but also in the risks of artificial intelligence.
GDPR demands transparency, accountability, and user control over personal data. However, many organizations are inadvertently falling short of these obligations due to the unmonitored integration of AI tools—often via APIs—into their systems. The result? Compliance gaps that could lead to fines, operational chaos, and reputational damage.
Everybody is talking about AI right now. It's the hottest topic in tech. But few people are talking about the APIs that underpin these AI platforms. Here we look at why effective API security is a must for any organization that wants to harness the power of AI.