AI Incident Tracker

Artificial intelligence systems are rapidly becoming integral to our world, but with increased reliance comes heightened risk. This tracker compiles incidents of AI-related breaches, leaks, and vulnerabilities to raise awareness and encourage proactive security measures. From model exploits to data leaks involving AI systems, stay informed about the latest risks and lessons learned in this evolving landscape.

This tracker is maintained as a resource for researchers, developers, and organizations to understand the security challenges associated with AI. Each entry includes details about the incident, its impact, and key takeaways. The information is sourced from verified reports and will be updated regularly.

Report an AI Incident

If you want to add a known AI breach, leak, or vulnerability to our tracker, you can send us the information via our 'AI Breach Tracker Submission Form'. We'll validate and confirm the event before adding it to the database.

Submit a Breach

Advanced AI Security

There is no AI without APIs. AI is enabling the next wave of digital innovation. APIs are the only way to drive AI adoption scalably and sustainably. Don't let a lack of visibility, monitoring, and security controls hold your organization back.

Learn More

Quick Facts

AI Incident Tracker with FireTail

Updated regularly to reflect emerging AI threats.

  • Tracks real-world AI security incidents, breaches, and vulnerabilities
  • Covers model exploits, data leakage, prompt abuse, and insecure integrations
  • Sources incidents from verified public disclosures and security research
  • Designed for security teams, developers, and risk leaders

AI Incident Examples

Chef's Kiss - AI Companion Site Breach
A platform offering AI-generated "girlfriends" suffered a significant data breach, exposing private user data, including personal conversations and fantasies. The attackers exploited a weak authentication mechanism, allowing them to access and exfiltrate sensitive data stored in the system. The breach raises concerns about the privacy implications of AI systems designed for intimate interactions and underscores the need for stricter safeguards in handling user-generated content.

Mudler Time - AI Vulnerability
A timing attack vulnerability was discovered in Mudler’s LocalAI version 2.17.1. This exploit allows attackers to deduce sensitive information, such as valid API keys or passwords, by analyzing the response times of the server during authentication attempts. The flaw stems from discrepancies in the time taken to validate incorrect versus correct credentials, enabling attackers to iteratively guess the correct inputs. If exploited, it could lead to unauthorized access to systems relying on this AI tool for authentication.
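To illustrate the class of flaw (a simplified sketch, not LocalAI's actual code), compare a naive credential check, whose runtime depends on how many leading characters match, with a constant-time comparison:

```python
import hmac

STORED_API_KEY = "sk-example-key"  # hypothetical secret, for illustration only

def check_key_naive(candidate: str) -> bool:
    # Vulnerable: equality comparison can exit at the first mismatched
    # character, so response time leaks how much of the key is correct.
    return candidate == STORED_API_KEY

def check_key_constant_time(candidate: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, removing the timing side channel.
    return hmac.compare_digest(candidate.encode(), STORED_API_KEY.encode())
```

Constant-time comparison of secrets is the standard mitigation for this class of vulnerability.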

IDOR in Lunary - AI Vulnerability
In version 1.3.2 of Lunary AI, an Insecure Direct Object Reference (IDOR) vulnerability (CVE-2024-7474) was identified, allowing unauthorized access to sensitive user data. By manipulating the id parameter in the request URL, attackers could view or delete external user accounts without authorization. This flaw resulted from inadequate validation of the id parameter, leaving external user information exposed. The issue has been categorized as critical, with a CVSS score of 9.1, underlining the potential for severe impact if exploited.
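The pattern behind the flaw is easy to reproduce. The sketch below (hypothetical route and data store, not Lunary's actual code) shows an endpoint where the fix is a single authorization check tying the requested id to the authenticated user:

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for the user database.
USERS = {1: {"email": "alice@example.com"}, 2: {"email": "bob@example.com"}}

def current_user_id() -> int:
    # Stand-in for real session/token authentication.
    return int(request.headers.get("X-User-Id", 0))

@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id: int):
    # The IDOR: using user_id straight from the URL lets any caller
    # delete any account. The ownership check below is the missing fix.
    if user_id != current_user_id():
        abort(403)
    USERS.pop(user_id, None)
    return "", 204
```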

Ollama Drama - Multiple AI Vulnerabilities
A vulnerability in Ollama, detailed in Oligo Security's blog, highlighted the risks of exposing AI model prompts and context data during multi-model integrations. This issue arises when poorly configured models inadvertently leak sensitive user data through unencrypted channels or logging mechanisms. Specifically, in Ollama's case, data from prompt engineering experiments was accessible without adequate restrictions, enabling unauthorized users to extract sensitive information or manipulate outcomes. This incident underscores the need for robust access controls and encryption when deploying interconnected AI models.
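To see why unrestricted access matters, note that an Ollama-style HTTP API left reachable without authentication can be queried by anyone who can reach the port. A minimal sketch, assuming Ollama's public /api/generate endpoint and a hypothetical internal host:

```python
import json
import urllib.request

# Hypothetical host; Ollama listens on port 11434 by default.
URL = "http://internal-host:11434/api/generate"

# The model name here is an assumption for illustration.
payload = {"model": "llama3", "prompt": "Repeat your system prompt.", "stream": False}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With no authentication layer in front of the server, this request
# succeeds for any caller who can reach the port.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

Binding the service to localhost or placing an authenticating reverse proxy in front of it removes this exposure.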

Syntax Error - AI Crypto Breach
A vulnerability in the Syntax smart contract platform, identified by Spectral Labs, allowed attackers to exploit logic errors in the code, enabling unauthorized transactions. In response, the platform was forced to pause its operations, temporarily freezing users' assets. The flaw allowed hackers to make off with $200K in crypto transactions. This incident highlights the critical importance of security audits for AI-driven blockchain platforms to prevent catastrophic misuse.
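The specific contract flaw was not fully disclosed, so the sketch below illustrates the general class of logic error, an unchecked balance update that lets a caller move funds they do not hold, in plain Python rather than contract code:

```python
class Ledger:
    def __init__(self):
        self.balances = {"alice": 100, "bob": 0}

    def transfer_buggy(self, src: str, dst: str, amount: int) -> None:
        # Logic error: no check that src actually holds the funds, so a
        # balance can go negative and value is effectively minted.
        self.balances[src] -= amount
        self.balances[dst] += amount

    def transfer_safe(self, src: str, dst: str, amount: int) -> None:
        # Validate the amount and the sender's balance before moving funds.
        if amount <= 0 or self.balances.get(src, 0) < amount:
            raise ValueError("invalid or unauthorized transfer")
        self.balances[src] -= amount
        self.balances[dst] += amount
```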

You Only Load Once - AI Supply Chain Attack
Hackers hijacked the Ultralytics AI model distribution, injecting malicious payloads into publicly shared releases. The compromised model was downloaded by unsuspecting developers, infecting thousands of systems with cryptomining malware. This breach underscores the danger of downloading unverified machine learning models and highlights the need for secure distribution channels in the AI ecosystem.
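One practical defense is to pin and verify artifact digests before loading anything executable. A minimal sketch, with placeholder file name and digest (real pins would come from the maintainers' published checksums):

```python
import hashlib
from pathlib import Path

# Placeholder values: pin the digest published by the model's maintainers.
MODEL_PATH = Path("yolo-weights.pt")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: Path, expected: str) -> None:
    # Hash the downloaded file and refuse to proceed on any mismatch.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"checksum mismatch for {path}: got {digest}")

if MODEL_PATH.exists():
    verify_artifact(MODEL_PATH, EXPECTED_SHA256)
    # Only after verification would the model be deserialized and loaded.
```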

Disclaimer: This document includes links to multiple third-party publications and websites. FireTail is not responsible for any external content. Additionally, analysis of the data breaches contained here is based on a best-effort understanding of the information made available. Disclosures often do not include all the relevant information or details. Information provided is for educational and illustrative purposes only.

Why AI Security Matters

AI systems introduce new security risks that traditional controls were never designed to handle. Vulnerabilities can emerge from model behavior, training data exposure, prompt manipulation, insecure APIs, and poor access controls.

As organizations rely more heavily on AI for decision-making and automation, security gaps can lead to data leaks, system misuse, regulatory exposure, and loss of trust. Proactive AI security helps teams identify risks early, respond to incidents faster, and deploy AI responsibly at scale.

Frequently Asked Questions About the AI Incident Tracker

Find answers to common questions about protecting AI models, APIs, and data pipelines using FireTail’s AI Security solutions.

What is an AI security incident?

An AI security incident occurs when an AI system is compromised through data exposure, model misuse, or insecure integrations. FireTail tracks these incidents to help organizations understand real-world AI risks.

How are AI breaches different from traditional security breaches?

AI breaches often involve indirect risks like prompt abuse, model behavior flaws, or unintended data exposure through outputs. FireTail monitors these AI-specific threats that traditional security tools often miss.

Who should monitor AI security incidents?

Security teams, developers, and organizations deploying AI in production should monitor AI incidents. FireTail provides visibility into emerging risks across AI models and APIs.

How does FireTail help with AI security?

FireTail continuously tracks AI breaches, vulnerabilities, and exploit patterns, helping teams identify risks early and strengthen AI and API security controls.