Article 12 does not ask whether you intend to keep logs. It requires that high-risk AI systems technically allow for automatic recording from day one. Here is what that means in practice.

When GDPR arrived, the organisations that had mistaken documentation for capability were the ones that struggled the most. They had policies about data retention but no technical controls enforcing those policies. They had breach notification procedures but no systems capable of detecting a breach in time to use them.
The EU AI Act is heading for a similar reckoning. And Article 12 is where most organisations will feel it first.
Article 12(1) puts it plainly: "High-risk AI systems shall technically allow for the automatic recording of events ('logs') over the lifetime of the system."
Article 26(6) requires deployers to retain automatically generated logs for a period appropriate to the system's intended purpose, and for a minimum of six months. For remote biometric identification systems, Article 12(3) requires additional specific data to be captured, including the period of each use, the reference database against which input data was checked, and the identity of the persons responsible for verifying results.
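To make those data points concrete, here is a minimal sketch of what a structured event record could look like. The schema and field names (AIEventRecord, verified_by, and so on) are illustrative assumptions, not anything the Act or FireTail prescribes:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIEventRecord:
    """Illustrative record covering the data points named above.

    Field names are hypothetical; the Act mandates the capability,
    not a schema.
    """
    system_id: str              # which high-risk AI system produced the event
    use_start: str              # ISO 8601 start of the usage period
    use_end: str                # ISO 8601 end of the usage period
    reference_database: str     # database consulted (biometric systems, Art. 12(3))
    input_classification: str   # category of input data processed
    output_summary: str         # record of the system's output
    verified_by: str            # identity of the person verifying results

def record_event(event: AIEventRecord) -> str:
    """Serialise the event as one JSON line, suitable for append-only storage."""
    payload = asdict(event)
    payload["logged_at"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(payload)

# Example usage
line = record_event(AIEventRecord(
    system_id="biometric-gate-01",
    use_start="2026-03-01T08:00:00+00:00",
    use_end="2026-03-01T08:00:04+00:00",
    reference_database="employee-enrolment-db",
    input_classification="biometric/face",
    output_summary="match: badge 4412",
    verified_by="security.officer@example.com",
))
```

One JSON line per event keeps the log append-only and trivially parseable, which matters when a regulator asks for six months of history.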
The first question many organisations ask is whether Article 12 applies to them. The answer, for most enterprises using AI in operational contexts, is yes.
Under Annex III of the Act, high-risk use cases include AI that affects employment, creditworthiness and insurance, access to essential public and private services, education, healthcare, and the exercise of fundamental rights. In practice, this covers recruitment screening tools, credit and insurance models, employee performance management systems, customer service AI with access to account data, and healthcare triage or administration tools.
The regulation draws a clear line between providers, who build and place AI systems on the market, and deployers, who use those systems within their own operations. Most European enterprises are deployers. Deployers must ensure that logs are kept in formats suitable for analysis and must retain them in a way that supports regulatory review and investigation.
If you are a deployer using a third-party AI system, the obligation to ensure logging is in place does not disappear. You need to verify that the systems you use can generate the required logs, and that those logs are accessible to you when needed.
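What does that verification look like in practice? If your provider exposes any log export mechanism, a deployer-side spot check might resemble the sketch below. The export URL and the required field names are hypothetical placeholders; substitute whatever your vendor actually ships:

```python
import json
import urllib.request

# Hypothetical minimum field set; align with your own record schema.
REQUIRED_FIELDS = {"system_id", "use_start", "use_end", "verified_by"}

def fetch_vendor_logs(export_url: str) -> list[dict]:
    """Pull exported logs from an assumed vendor endpoint serving JSON lines."""
    with urllib.request.urlopen(export_url) as resp:
        return [json.loads(line) for line in resp.read().decode().splitlines() if line]

def verify_log_completeness(records: list[dict]) -> list[str]:
    """Return a list of problems: records missing the fields you expect to retain."""
    problems = []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            problems.append(f"record {i}: missing {sorted(missing)}")
    return problems
```

Running a check like this on a schedule turns "the vendor says logging is on" into something you can actually demonstrate.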
Based on what we see across enterprise environments, the most common Article 12 failures start upstream of logging itself: you cannot record events from systems you do not know you are running.
That points to a practical metric every CISO and GRC leader should apply to their organisation's AI readiness: how long does it take to produce a complete, verified inventory of all AI systems currently in use across your environment?
If the answer is days or weeks, you are working from a compliance model that cannot keep pace with how AI is actually being adopted inside your organisation. If the answer is never, or only through a manual survey process, you have a fundamental gap.
FireTail deploys automated discovery across cloud infrastructure, browser-based activity, and application-layer integrations. Within 15 minutes, you have a living inventory. That inventory drives everything else: automatic log capture from every discovered system, centralised retention with tamper-evident storage, real-time alerting on anomalous activity, and the audit-ready reporting that demonstrates compliance to regulators.
FireTail captures the specific data Article 12 requires for high-risk systems: interaction timestamps, input data classifications, output records, and human review events. Logs are centralised, retained, and exportable for regulatory review.
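For readers wondering what tamper-evident retention involves, one standard construction (not necessarily FireTail's internal implementation) is a hash chain: each stored entry embeds a digest of the previous one, so any after-the-fact edit invalidates every later entry. A minimal sketch:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest placeholder for the first entry

def chain_append(log: list[dict], record: dict) -> None:
    """Append a record, binding it to the previous entry's digest."""
    prev_digest = log[-1]["digest"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_digest + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev_digest, "digest": digest})

def chain_verify(log: list[dict]) -> bool:
    """Recompute every digest; False means the log was altered after writing."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True
```

The point of the construction is that verification is cheap and independent: anyone holding the log can prove it has not been rewritten since capture.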
The EU AI Act entered into force on August 1, 2024. The full obligations for high-risk AI systems become applicable on August 2, 2026. Prohibited practices have been enforceable since February 2025.
National Competent Authorities across EU member states will move into active enforcement mode after that August 2026 date.
The organisations that will be best positioned are those with automated, continuous logging in place now, already generating the six months of retained audit trail the regulation requires before enforcement begins. If you start your logging programme the day the Act is enforced, you are already behind: there is no way to retroactively produce the retention history regulators will expect.
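The arithmetic is worth spelling out. With high-risk obligations applying from 2 August 2026 and Article 26(6) setting a six-month minimum retention, continuous logging needs to be live by roughly the end of January 2026:

```python
from datetime import date, timedelta

ENFORCEMENT = date(2026, 8, 2)       # high-risk obligations become applicable
RETENTION_MIN = timedelta(days=183)  # ~six months, the Art. 26(6) minimum

latest_start = ENFORCEMENT - RETENTION_MIN
print(f"Logging must be live by {latest_start} to show a full retention window")
# -> Logging must be live by 2026-01-31 to show a full retention window
```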
Article 12 reflects what the regulation is actually trying to achieve: the ability to understand, retrospectively and in real time, what high-risk AI systems are doing and what impact they are having. Manual documentation is no longer enough.