Beyond OpenClaw: How to govern agentic AI in the enterprise - Free Webinar 26th February 2026
You'll find regularly updated content here on AI security, the cybersecurity landscape, news, events, and much more.
The OpenClaw incident proves that "Shadow Agents" are already running on enterprise networks, installed by high-performing employees to automate grunt work. This post analyzes why traditional security stacks fail to detect these autonomous tools.
As AI attacks increase, it is more important than ever to be aware of the risks. The OWASP Top 10 Risks for LLMs is a great jumping-off point. In this blog, we’ll take a deep dive into the 5th item on the list: Improper Output Handling.
FireTail has been selected as one of just four finalists for the Black Hat USA 2025 Startup Spotlight Competition. We're delighted to be taking part and can't wait to showcase how FireTail helps enterprises discover, assess, and secure AI usage while preventing threats like shadow AI, data leaks, and AI-specific attacks.
OneLogin, a popular identity and access management platform, had vulnerabilities that exposed user credentials. Through simple probing, researchers were able to access a host of sensitive data…
It is no secret in 2025 that threat actors can abuse AI to launch attacks. But the “how” and “why” of these use cases are continuing to change. A recent security report revealed many of the ways in which OpenAI’s ChatGPT could be exploited.
In this blog series, we’re breaking down the OWASP Top 10 risks for LLMs and explaining how each one manifests and can be mitigated. Today’s risk is #4 on the list: Data and Model Poisoning. Read on to learn more…
Computers going rogue used to be the stuff of science fiction. But in 2025, it is becoming real. Join us in this blog as we investigate some cases where Artificial Intelligence has behaved as if it had a mind of its own…
If you prefer to be notified of new posts via email, simply subscribe to our blog below.