Compliance · 8 min read

The EU AI Act Is 98 Days Away. What Your Company Needs to Do Now.

Article 12 of Regulation (EU) 2024/1689 becomes binding on 2 August 2026. Here is the plain-English translation, the four things that matter, and how to tell whether your engineering team has it handled.

On 2 August 2026, a specific part of the EU AI Act becomes binding. That is 98 days from the publication of this article. If your company runs any kind of AI agent that touches real customers, real money, or real personal data, one clause of Regulation (EU) 2024/1689 is going to land on your desk. The clause is Article 12. It is about keeping logs. And several teams I have spoken with have either not started or are quietly assuming their existing monitoring tools already cover it.

This piece is written for the compliance officer, the in-house lawyer, or the head of operations who has been forwarded a deck about the EU AI Act and needs to know what to actually do. Nothing in this article is legal advice. It is a plain-English translation of what Article 12 asks for, the four practical things that matter most, and a short list of questions a non-technical person can put to an engineering team to tell whether the work is really being done.

What Article 12 actually says, in plain English

Article 12 of the EU AI Act applies to what the regulation calls high-risk AI systems. Whether any particular system your company runs falls into that category is a question for your lawyer, and one of the first things to put on their plate this week. What the clause itself says, in ordinary language, is this.

If your company provides or deploys a high-risk AI system, the system has to keep an automatic log of what it does while it is running. The log has to make it possible to trace what the system did and why. It has to be good enough that, if something goes wrong, a regulator or an internal auditor can reconstruct the sequence of events. And the log has to be kept for a reasonable period of time; the baseline the EU AI Act's implementation timeline points to is a minimum of six months.

That is the whole obligation, and it sounds modest because it is modest. The reason it is hard is that most AI agent deployments do not log the right things by default. The usual monitoring tools catch uptime, latency, and errors. They do not catch "at 14:32 on Tuesday, the agent made the decision to refund a customer $4,200 on the authority of a customer-service supervisor whose approval token was issued at 14:30." That second thing is what Article 12 is asking for.
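To make that concrete, here is a sketch of what one such structured record might look like. The field names and identifiers are illustrative assumptions, not a prescribed schema; Article 12 describes the goal (traceability), not a format.

```python
# A sketch of one structured agent-activity record.
# Field names and IDs are illustrative, not a prescribed schema.
refund_decision = {
    "timestamp": "2026-08-04T14:32:07Z",    # when the agent acted
    "agent": "customer-service-agent",       # which system acted
    "action": "issue_refund",                # what it decided to do
    "amount_usd": 4200,
    "customer_id": "cus_8841",               # who was affected (hypothetical ID)
    "approved_by": "supervisor:j.alvarez",   # on whose authority (hypothetical)
    "approval_token_issued_at": "2026-08-04T14:30:12Z",
    "evidence_ref": "ticket-59213",          # the inputs the decision rested on
}
```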

Imagine your agent approves a $42,000 refund. The auditor asks who signed off, on what evidence, and when. What do you hand them? If the answer is "we would have to piece it together from three systems and a Slack thread," that is the gap Article 12 is closing.

The four things your company actually needs to do

Four obligations are worth holding in your head. They are not the only things the EU AI Act asks for, but they are the four that map directly onto Article 12, and the four a non-technical reader can brief a CEO on in under ten minutes.

1. Keep a running log of what every AI agent did

The log has to be written by the system itself, as the system is working. Not reconstructed after the fact from application memory or customer-support notes. Every action the agent takes, every decision it makes, every tool it calls, every human who approved or denied a request: all of it lands in a log that was written at the moment it happened.

In practice, that means your engineering team has to have made a choice, a year or two ago, to capture agent activity in a structured way. If they have not, catching up is the work of a quarter, not a weekend. The first conversation to have this week is "do we have this today, yes or no."
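For the engineer on the other side of that conversation, here is a minimal sketch of what "written at the moment it happened" means in practice: the record is persisted inline, as part of handling the action, not reconstructed in a batch later. The function and table names are hypothetical, and the SQLite connection stands in for whatever database your stack already runs.

```python
import datetime
import json
import sqlite3

def log_agent_action(db: sqlite3.Connection, record: dict) -> None:
    """Persist one agent action at the moment it happens.

    The write happens inline, before the action is considered complete,
    not in a nightly job reconstructed from other systems.
    """
    record["logged_at"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    db.execute(
        "INSERT INTO agent_log (logged_at, agent, payload) VALUES (?, ?, ?)",
        (record["logged_at"], record["agent"], json.dumps(record)),
    )
    db.commit()
```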

2. Make sure the log cannot be silently edited

This is the one that most existing monitoring tools do not solve. If your log lives in a regular database, someone with database access can quietly change row 47, and nothing in the log will reveal that the change happened.

Tamper-evidence is the property you want: if someone rewrites row 47 of the log, the rest of the log proves it happened. Article 12 does not explicitly require this. But a regulator who asks "how do you know your logs have not been modified" will hear "the log is structured so any later edit is visible" very differently from "we trust our database administrators."

You do not need to understand the cryptography. You do need to ask whether your team has it.
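For readers who do want a peek under the hood, the standard mechanism is a hash chain: each entry carries a fingerprint computed from the previous entry's fingerprint, so a silent edit to any row breaks every fingerprint after it. A minimal sketch of the general technique follows; this is the idea, not any particular vendor's implementation.

```python
import hashlib
import json

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Fingerprint of an entry, bound to the fingerprint of the entry before it."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    """Recompute every fingerprint; an edit to any row breaks the chain from there on."""
    prev = "genesis"
    for e in entries:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["hash"] != entry_hash(prev, body):
            return False  # this row, or an earlier one, was altered
        prev = e["hash"]
    return True
```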

3. Keep the log for at least six months

Six months is the floor the implementation timeline points to. Many companies will want to keep logs longer, especially in regulated industries where other obligations (financial record-keeping, medical records retention, insurance claim windows) run to seven years or more. The EU AI Act's requirement sits on top of whatever you already have, not in place of it.

The cost of six months of agent logs is, for most companies, negligible. A mid-market company generating ten thousand agent decisions a day is looking at roughly a gigabyte a year of structured data. That is a rounding error on any cloud bill. What is not a rounding error is the cost of rebuilding the log six months in when nobody wrote it down correctly in the first place.
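The arithmetic behind that estimate, assuming a compact structured record of roughly 300 bytes (richer records scale the number linearly):

```python
decisions_per_day = 10_000
bytes_per_record = 300   # assumption: a compact structured record
bytes_per_year = decisions_per_day * 365 * bytes_per_record
print(bytes_per_year / 1e9)  # ~1.1 GB per year
```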

4. Be able to hand the regulator something they accept

Having the log is one thing. Being able to produce it, in a form a regulator accepts, is another. This is the part most teams under-plan.

The posture you want is: a named person on your team can, on a reasonable deadline, export the last six months of agent activity to a file the regulator can read, together with some form of attestation that the file has not been altered in transit. For most companies, that file is a spreadsheet of decisions plus a signed summary from the compliance officer. For companies with better tooling, the signature is cryptographic and the attestation is automatic. Either shape is defensible. What is not defensible is "we would need to build that when the request comes in."
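As a sketch of the simple end of that spectrum: export the window of activity to a file and record a fingerprint of the file in the signed attestation, so the recipient can verify nothing changed in transit. The function and file names here are illustrative.

```python
import csv
import hashlib

def export_with_fingerprint(entries: list[dict], path: str) -> str:
    """Write agent activity to a CSV and return the file's SHA-256 fingerprint.

    The fingerprint goes into the signed attestation letter; whoever receives
    the file can recompute it and confirm the export was not altered in transit.
    """
    fields = sorted({key for entry in entries for key in entry})
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(entries)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```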

How to tell whether your engineering team has it handled

You do not need an engineering background to tell whether an engineering team is on top of Article 12. You need five questions and a sense of what a good answer sounds like. Ask the engineer who owns your agent infrastructure to spend thirty minutes walking through these.

  1. If I ask you today to show me every decision our customer-service agent made last Tuesday, how long does it take? A good answer sounds like: a few minutes, filtered by date and agent, here is what the result looks like (a sketch of what that query can look like follows this list). A handwave answer sounds like: we would have to pull logs from a few different places and cross-reference them.
  2. If someone with database access quietly changed one row in the log last month, would we find out? A good answer sounds like: yes, and here is the mechanism that would catch it. A handwave answer sounds like: we trust our team and we have access controls.
  3. What happens to the log if the hosted dashboard we use goes down? A good answer sounds like: the log keeps being written to our own database, the dashboard is a view on top. A handwave answer sounds like: we would lose some events until the dashboard came back.
  4. Can you export the last six months of agent activity to a file a regulator would accept? A good answer sounds like: yes, here is the format, and here is the signature file that comes with it. A handwave answer sounds like: we would need to build that feature.
  5. Who specifically on our team owns the log and the export, when a regulator calls? A good answer sounds like: a named person, with a documented runbook and a backup. A handwave answer sounds like: engineering would own that, we have not worked out the details yet.
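For reference, "a few minutes, filtered by date and agent" usually means the first question is answerable with a single query against a structured log table, not a cross-system reconstruction. The table and column names here are the same hypothetical ones used in the sketches above.

```python
import sqlite3

db = sqlite3.connect("agent_log.db")  # stand-in for whatever database you run
last_tuesday = ("2026-07-28T00:00:00Z", "2026-07-29T00:00:00Z")

rows = db.execute(
    "SELECT logged_at, payload FROM agent_log "
    "WHERE agent = ? AND logged_at >= ? AND logged_at < ? "
    "ORDER BY logged_at",
    ("customer-service-agent", *last_tuesday),
).fetchall()
```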

If four or five of these answers come back in the handwave shape, you have a real gap. That is useful information. The EU AI Act is not the last AI regulation that is going to land on your company's desk, and the time to build this muscle is now.

There are other good frameworks worth knowing about, even if they are not binding. The NIST AI Risk Management Framework is the US-side voluntary guidance that most US enterprise compliance teams are also weaving into their programmes. It is not law, but it is the direction several regulators are orienting toward. Worth having on your radar, not worth losing sleep over this quarter.

How we think about this at Code Atelier

One honest paragraph about what we do, then we get to the action list. At Code Atelier, we build a governance toolkit for AI agent deployments. It writes the kind of log Article 12 asks for to whatever database your engineering team already runs, with tamper-evidence built in, and it produces a signed evidence pack your compliance officer can hand to a regulator. It ships as a Python library your engineering team can install in minutes. If you want the technical write-up, our governance page is the right place to send your engineer, and the v0.7.2 engineering post from yesterday describes the architecture in the detail an engineering reviewer will ask for. None of that is a pitch to you directly. You are not the buyer of a toolkit; you are the person who asks whether the toolkit exists and does what the regulation needs.

What to do this week

If the deadline has been on your desk for a while and you have not moved on it, here is the shortest list of concrete actions that gets you from "nothing" to "materially prepared" in a week of real calendar time.

  • Ask engineering for a one-line answer to the question "what AI agents do we run in production, and does each one write to a log we could hand a regulator." Put the answer in writing.
  • Forward Regulation (EU) 2024/1689, Article 12, to your outside or in-house counsel and ask for a written view on which of your systems count as high-risk under the Act. This is the only question in this piece that you genuinely cannot answer yourself.
  • Schedule a thirty-minute walk-through with the engineer who owns agent infrastructure. Run the five questions in the section above. Write down the answers.
  • Put 2026-08-02 on the compliance calendar with a 60-day, 30-day, and 7-day reminder. Your future self will thank you.
  • Decide now who on your team owns the log, the export, and the regulator-facing attestation when a request lands. Write the name in a document that is not just your own notes. Backup person too.

None of those items takes more than an hour. The whole list is a week of meetings and a few memos. The hardest part is starting, which is why this piece exists.

A quick word on what this is and is not

This is not legal advice. If the EU AI Act is in scope for your company, your lawyer is the person who tells you which of your systems are high-risk, what the exact retention obligations look like in your jurisdiction, and what the penalty exposure is. This piece is the primer that helps you ask your lawyer better questions, not a replacement for them.

If you want to compare notes on the checklist, or if you are staring at one of the five questions above and not sure what a good answer looks like for your setup, I am easy to reach. The contact form at the top of the site goes directly to my inbox.

Frequently Asked Questions

Do we need to keep logs even if our AI agent is just for internal use?

Almost certainly, yes. The EU AI Act's obligations attach to high-risk AI systems regardless of whether the users are your customers or your own employees. An internal agent that makes decisions about hiring, access to benefits, performance evaluation, or financial approvals is very likely in scope. A chatbot that drafts emails for an internal user is very likely not. Your lawyer is the person who tells you which of your internal systems sit on which side of that line; what this article can tell you is that "internal-only" is not, by itself, a safe assumption.

What is the penalty for not complying with Article 12?

The EU AI Act sets financial penalties that can reach tens of millions of euros or a percentage of global annual turnover, whichever is higher, for the most serious obligations. Article 12 is not in the highest penalty tier (which is reserved for prohibited practices) but it is in a serious one. Beyond the fine, the reputational and contractual fallout of being the first company in your sector to land in an enforcement headline is usually a larger problem than the fine itself. Your lawyer can walk you through the exact numbers that apply to your company's size and posture.

How long do we have to keep the logs?

The baseline most companies plan around is six months, which is the floor the published implementation timeline references. Many industries have longer retention obligations under other regulations (financial services, healthcare, insurance) that will dominate the EU AI Act's floor. The practical guidance is: find out what your longest existing retention obligation is, make sure it also covers AI agent activity, and you are already compliant on the retention dimension. If you have no existing obligation, plan for a year, not six months. Storage is cheap; rebuilding a missing log is not.

Is our existing Splunk or Datadog setup good enough?

Usually not on its own, though it can be part of the answer. Tools like Splunk and Datadog are designed around uptime, latency, and errors. They do an excellent job of telling you whether the agent was running and how fast it responded. They do not, out of the box, capture the shape of information Article 12 is asking for: which decision the agent made, on whose authority, against what inputs, and with what outcome. You typically need a second layer of structured activity logging, specific to agent decisions, that sits alongside your existing monitoring. The good news is that this layer can write to the same kind of database you already run.
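As a sketch of how small that second layer can be, here is an illustrative schema: one table in the relational database you already operate. Shown in SQLite for brevity; the column names are assumptions, matching the sketches earlier in this piece.

```python
import sqlite3

# Illustrative schema for a decision-level activity log that sits
# alongside uptime/latency monitoring.
db = sqlite3.connect("agent_log.db")
db.execute("""
CREATE TABLE IF NOT EXISTS agent_log (
    id        INTEGER PRIMARY KEY,
    logged_at TEXT NOT NULL,  -- when the record was written
    agent     TEXT NOT NULL,  -- which system acted
    payload   TEXT NOT NULL,  -- the structured decision record, as JSON
    hash      TEXT NOT NULL   -- chained fingerprint for tamper-evidence
)
""")
db.commit()
```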

What does "high-risk AI system" mean for us?

The EU AI Act defines high-risk AI systems by use case rather than by technology. The categories include things like credit scoring, recruitment and HR decisions, medical devices, critical infrastructure, biometric identification, law enforcement, education access, and several others. Whether any particular system your company runs is classified as high-risk depends on what it does, not on whether it uses a large language model or a classical algorithm. This is the single question in this whole piece that your lawyer has to answer for you, and it is the one worth scheduling a meeting for this week.
