The gap between what the regulation requires and what most compliance programs have actually built is wider than most CISOs and GRC leaders are prepared to admit. But how can we close it?

What the EU AI Act demands
The EU AI Act classifies AI according to risk. Unacceptable risk is prohibited outright. High-risk AI systems are heavily regulated. Limited-risk systems face transparency obligations.
The majority of obligations fall on providers, though deployers carry meaningful obligations too. If your organisation builds AI, buys AI, or integrates AI into operational processes, the Act applies to you. If your systems serve EU users and you are headquartered outside the EU, it still applies to you.
For high-risk AI systems, providers must establish a risk management system throughout the system's lifecycle, conduct data governance to ensure training and testing datasets are relevant and representative, produce technical documentation demonstrating compliance, design systems to allow for human oversight, and achieve appropriate levels of accuracy, robustness, and cybersecurity.
The Act treats security as foundational to compliance.
The compliance gap is a security gap
Most organisations approaching the EU AI Act treat it as a governance and legal challenge. They produce AI registers, draft risk classification matrices, and build working groups. That work has value. But it systematically misses the deeper problem.
The compliance gap is a security gap. The same reasons that make AI systems hard to secure are the reasons they are hard to govern. You cannot log what you cannot see. You cannot classify what you have not discovered. And you cannot demonstrate to a regulator that your controls are working if those controls only exist as policies on paper.
More than 80% of employees use AI tools that have not been approved by their organisation. The AI systems that appear in your register and the AI systems that are actually operating in your environment are different populations.
Shadow AI is the dominant reality of how AI is being adopted at scale. Any compliance program that relies on self-reporting to build its inventory has already accepted an undercount of its exposure.
The logging mandate is a technical obligation
Article 12 of the EU AI Act requires that high-risk AI systems technically allow for the automatic recording of events over the lifetime of the system.
Technically means the capability must be built into or applied to the system itself. Automatic means logs are generated without operator intervention at the moment events occur. Lifetime means from deployment to decommissioning.
Article 26 requires automatically generated logs to be retained for a minimum of six months. The organisations that will be best positioned when enforcement begins are not the ones that start building logging infrastructure in July 2026. They are the ones generating six months of compliant, continuous, tamper-evident logs already. If you wait till the enforcement date, you are already behind.
Prohibited practices are already enforceable
The prohibited AI practices under Chapter II became enforceable in February 2025. However, the enforcement deadline that most organisations are focused on is August 2026.
Compliance with the prohibited practices provisions is a matter of ensuring that systems do not drift into prohibited behaviour. A system that was not designed to manipulate can evolve into one that does. Detecting that change requires continuous behavioural monitoring.
The GDPR parallel
Organisations that lived through GDPR's May 2018 enforcement date will recognise what is coming. In the months before that deadline, many organisations had produced detailed documentation: data processing registers, privacy notices, breach notification procedures.
On paper, they were prepared. In practice, many discovered that their processes did not work, their data maps were incomplete, and their policies had never been technically enforced.
The organisations that struggled most under GDPR were those that had treated compliance as a documentation exercise rather than an operational transformation. The EU AI Act presents the same dynamic, with two important differences.
1. Its technical obligations are more demanding than GDPR's
2. Its fine structure is more severe. Violations of the prohibited practices provisions can be more expensive than even the most serious GDPR breaches.
What closing the gap requires
Bridging the EU AI Act compliance gap requires a shift from periodic assurance to continuous control and continuous automated discovery of AI usage across cloud infrastructure, browser-based activity, and application-layer integrations. You need to know about every AI system in your environment, including the ones nobody approved.
It requires automated risk classification that maps discovered systems against the Act's risk tiers in real time — not at the next quarterly audit cycle. The Act's obligations follow from classification, so classification needs to be live.
It requires centralised logging that captures every relevant interaction with high-risk AI systems automatically, retains logs for the mandated minimum of six months, and makes those logs available for regulatory review on demand.
It requires real-time behavioural monitoring that detects patterns approaching prohibited practice thresholds, anomalous outputs that may signal misuse, and adversarial inputs designed to subvert system behaviour.
And it requires technical policy enforcement at the point of use. A governance policy that prohibits certain AI uses but has no technical mechanism preventing them is not a control.
The question that matters
The question is not whether you have completed your AI Act checklist. It is whether you could answer a regulator's questions about your AI systems today, and whether the infrastructure you have built is capable of answering those questions six months from now.
If the answer is uncertain, the gap is real. And the time to close it is before enforcement, not after.
Need help with your compliance? FireTail is here: https://www.firetail.ai/schedule-your-demo