Black Hat Startup Spotlight Finalist - FireTail has been selected as one of four finalists at Black Hat this year.
FireTail has been selected as a finalist for the Black Hat USA 2025 Startup Spotlight Competition. We're delighted to be taking part and can't wait to showcase how FireTail helps enterprises discover, assess, and secure AI usage while preventing threats like shadow AI, data leaks, and AI-specific attacks.
It is no secret in 2025 that AI can be abused by threat actors to launch attacks. But the “how” and “why” of these attacks continue to change. A recent security report revealed many of the ways in which OpenAI’s ChatGPT could be exploited.
In this blog series, we’re breaking down the OWASP Top 10 risks for LLMs and explaining how each one manifests and can be mitigated. Today’s risk is #4 on the list: Data and Model Poisoning. Read on to learn more…
Computers going rogue used to be the stuff of science fiction. But in 2025, it is becoming real. Join us in this blog as we investigate some cases where Artificial Intelligence has behaved like it has a mind of its own…
We’ve talked before about Mean Time To Attack, or MTTA, which has become alarmingly short for new vulnerabilities across the cyber landscape. In this blog, we’ll dive into the “how” and “why” of this trend…
Cybersecurity risks are too close for comfort. Recent data from the Global Mobile Threat Report reveals that our mobile applications are likely exposing our data through insecure practices such as hardcoding API keys.
The OWASP Top 10 List of Risks for LLMs helps developers and security teams determine where the biggest risk factors lie. In this blog series from FireTail, we are exploring each risk one by one: how it manifests and how to mitigate it. This week, we’re focusing on LLM03: Supply Chain vulnerabilities.
Did you know that some AI chats capture and log your chat before you even submit it? This creates a huge security problem for both individuals and organizations whose employees use LLMs. Luckily, FireTail is working on a solution...
Our modern “Software as a Service” model is becoming a challenge for cybersecurity teams within large enterprises, as attacks continue to rise in volume and complexity across the cyber realm. Security needs to be a consideration from code to cloud, or any progress we make will be undone just as quickly.
OWASP’s Top 10 for LLMs is a good starting point for teams to learn about AI security risks. In this series, we’ll go over each risk and the practices that protect against it. Today, we’re tackling LLM02: Sensitive Information Disclosure.
FireTail, the leading AI & API security platform, has released its annual report, The State of AI & API Security 2025, revealing a critical blind spot in the way organizations are securing their AI investments. Despite record-breaking AI adoption, the report warns that most enterprises are overlooking the most exposed part of the AI stack: the API layer.
FireTail’s latest platform update gives customers expanded AI security features and discovery capabilities to better find, document, and protect AI initiatives across their organizations. Here, we look at what the update covers and the benefits these new features deliver for FireTail customers.
In this blog, we are taking a closer look at Prompt Injection, the #1 vulnerability on the OWASP Top 10 list of LLM risks in 2025. Join us in the first of this 10-part series as we examine the root causes of prompt injection, how prompt injection attacks are carried out, and the best methods to avoid them.
FireTail has been selected as a finalist for the Black Hat Asia 2025 Startup Spotlight Competition. We're delighted to be taking part and can't wait to showcase how FireTail helps enterprises discover, assess, and secure AI usage while preventing threats like shadow AI, data leaks, and AI-specific attacks.
Security teams today face a dual challenge: protecting AI systems from external threats while securing the APIs that power them. The reality is clear: if your APIs aren’t secure, neither is your AI.
A BreachForums user claimed to have breached OmniGPT and shared samples of stolen data to back up the claim. Weeks later, researchers are still scrambling to determine the scope, the attack method, and more.
Today’s cyber landscape is littered with threats, risks, and vulnerabilities. Every week, we are seeing an increase not only in attacks, but also in the methods used to carry them out. This week, a new family of malware was discovered exploiting the Microsoft Graph API.
AI security and API security run alongside each other, much like a double rainbow. Each one contains a full spectrum of security requirements that work in tandem with one another.
AI is revolutionizing industries at an unprecedented pace. But as organizations integrate AI into their workflows, they are encountering serious security risks. In fact, 97% of organizations using generative AI have reported security incidents. Traditional security tools are failing to keep up, leaving companies vulnerable to data breaches, adversarial attacks, and compliance risks.
We’re only a month into the new year, and already, the internet is buzzing with news about AI. Most recently, China’s AI platform DeepSeek has been making whale-sized waves in the cyberworld. But a week after the launch of its new model on January 20th, a tsunami-sized wave hit the system.