Building an AI Governance Program: Lessons from the Enterprise - Free Webinar on December 11th, 2025
You'll find useful content about AI security, the cybersecurity landscape, news, events and much more updated regularly here.
Researcher Viktor Markopoulos discovers ASCII Smuggling bypasses human audit via Unicode, enabling enterprise identity spoofing and data poisoning on Gemini & Grok.
FireTail has been selected as a finalist for the Blackhat Asia 2025 Startup Spotlight Competition. We're delighted to be taking part and can't wait to showcase how FireTail helps enterprises discover, assess, and secure AI usage while preventing threats like shadow AI, data leaks, and AI-specific attacks.
Researchers recently found a vulnerability in Apache Tomcat’s servers that would allow an attacker to commit Remote Code Execution with a single PUT request to a specific API, followed by a GET. And now, this vulnerability is officially being exploited in the wild.
Security teams today face a dual challenge: protecting AI systems from external threats while securing the APIs that power them. The reality is clear—if your APIs aren’t secure, neither is your AI.
A BreachForum user came out claiming to have breached OmniGPT and shared samples of stolen data to back up this claim. Weeks later, researchers are still scrambling to figure out the scope, attack method, and more.
Today’s cyber landscape is littered with threats, risks, and vulnerabilities. Every week, we are seeing an increase not only in attacks, but also in the methods used to attack. This week, a new family of malware was discovered exploiting Microsoft’s Graph API.
AI security and API security run alongside each other, much like a double rainbow. Each one contains a full spectrum of security requirements that work in tandem with one another.
If you prefer to be notified of new posts via email, simply subscribe to our blog below.