What You Don't Log Will Hurt You - Webinar with Jeremy Snyder and John Tobin of Virtual Guardian
We’ve talked before about the importance of logging in your AI and API security posture. But what happens when organizations fail to log their interactions adequately? And what can you do within your own organization to prevent this? Explore all this and more with the latest webinar from FireTail.
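To make the idea concrete, here is a minimal, hypothetical sketch of what logging your AI interactions can look like in practice. This is not FireTail's product or API; the names (call_llm, audited) and the fields recorded are illustrative assumptions, shown only to ground the point that every prompt and response should leave an auditable trail.

```python
# Hypothetical sketch: wrap each model call so the prompt, response size,
# caller, and latency are written to an audit log with a correlation ID.
# call_llm and the logged fields are illustrative, not a FireTail API.
import json
import logging
import time
import uuid

audit = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def audited(llm_call):
    """Decorator that records every prompt/response pair with a request ID."""
    def wrapper(prompt, *, user_id, **kwargs):
        request_id = str(uuid.uuid4())
        started = time.time()
        response = llm_call(prompt, user_id=user_id, **kwargs)
        audit.info(json.dumps({
            "request_id": request_id,
            "user_id": user_id,
            "prompt_chars": len(prompt),
            "response_chars": len(response),
            "latency_ms": round((time.time() - started) * 1000),
        }))
        return response
    return wrapper

@audited
def call_llm(prompt, *, user_id):
    # Placeholder for a real model call; returns a canned reply for the sketch.
    return f"echo: {prompt}"

if __name__ == "__main__":
    call_llm("Summarize our Q3 API error logs", user_id="analyst-42")
```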
APIs power the connections we take for granted across the modern internet. But as we rely on them for new technologies like AI, securing them is harder than ever. That’s why continuous API security testing is an essential part of every cybersecurity posture.
Did you know that some AI chat tools capture and log your input before you even submit it? This creates a huge security problem for both individuals and organizations whose employees use LLMs. Luckily, FireTail is working on a solution...
The modern “Software as a Service” model is becoming a challenge for cybersecurity teams within large enterprises, as attacks continue to rise in volume and complexity. Security needs to be a consideration from code to cloud, or any progress we make will quickly be undone.
OWASP’s Top 10 for LLM is a good starting point for teams to learn about AI security risks. In this series, we’ll go over each risk and practices to protect against them. Today, we’re tackling LLM02: Sensitive Information Disclosure.
FireTail, the leading AI & API security platform, has released its annual report, The State of AI & API Security 2025, revealing a critical blind spot in the way organizations are securing their AI investments. Despite record-breaking AI adoption, the report warns that most enterprises are overlooking the most exposed part of the AI stack: the API layer.
The AI race is driving developers to release more and more AI models in competition with one another, and security teams are struggling to keep up. So how do we continue to innovate at this speed while still ensuring the security of our AI models?