On 2 August 2026, a specific part of the EU AI Act becomes binding. That is 98 days from the publication of this article. If your company runs any kind of AI agent that touches real customers, real money, or real personal data, one clause of Regulation (EU) 2024/1689 is going to land on your desk. The clause is Article 12. It is about keeping logs. And several teams I have spoken with have either not started or are quietly assuming their existing monitoring tools already cover it.
This piece is written for the compliance officer, the in-house lawyer, or the head of operations who has been forwarded a deck about the EU AI Act and needs to know what to actually do. Nothing in this article is legal advice. It is a plain-English translation of what Article 12 asks for, the four practical things that matter most, and a short list of questions a non-technical person can put to an engineering team to tell whether the work is really being done.
What Article 12 actually says, in plain English
Article 12 of the EU AI Act applies to what the regulation calls high-risk AI systems. Whether any particular system your company runs falls into that category is a question for your lawyer, and one of the first things to put on their plate this week. What the clause itself says, in ordinary language, is this.
If your company provides or deploys a high-risk AI system, the system has to keep an automatic log of what it does while it is running. The log has to make it possible to trace what the system did and why. It has to be good enough that, if something goes wrong, a regulator or an internal auditor can reconstruct the sequence of events. And the log has to be retained: the Act itself sets a floor of at least six months (Article 19 for providers, Article 26 for deployers), longer where other law requires it.
That is the whole obligation, and it sounds modest because it is modest. The reason it is hard is that most AI agent deployments do not log the right things by default. The usual monitoring tools catch uptime, latency, and errors. They do not catch "at 14:32 on Tuesday, the agent made the decision to refund a customer $4,200 on the authority of a customer-service supervisor whose approval token was issued at 14:30." That second thing is what Article 12 is asking for.
Imagine your agent approves a $42,000 refund. The auditor asks who signed off, on what evidence, and when. What do you hand them? If the answer is "we would have to piece it together from three systems and a Slack thread," that is the gap Article 12 is closing.
The four things your company actually needs to do
Four obligations are worth holding in your head. They are not the only things the EU AI Act asks for, but they are the four that map directly onto Article 12, and they are the four that a non-technical reader can brief a CEO on in under ten minutes.
1. Keep a running log of what every AI agent did
The log has to be written by the system itself, as the system is working. Not reconstructed after the fact from application memory or customer-support notes. Every action the agent takes, every decision it makes, every tool it calls, every human who approved or denied a request: all of it lands in a log that was written at the moment it happened.
In practice, that means your engineering team has to have made a choice, a year or two ago, to capture agent activity in a structured way. If they have not, catching up is the work of a quarter, not a weekend. The first conversation to have this week is "do we have this today, yes or no."
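To make "structured" concrete for the engineer you forward this to: a structured agent log is simply a record written at the moment each decision happens, not reconstructed later. A minimal sketch in Python follows; every field name here is illustrative, not a schema the Act prescribes.

```python
import json
import uuid
from datetime import datetime, timezone

def log_agent_decision(action, amount_usd, approved_by,
                       approval_token_issued_at, outcome,
                       path="agent_decisions.jsonl"):
    """Append one structured record at the moment the decision happens.

    Field names are illustrative only -- the point is that who, what,
    when, and on whose authority are captured as the event occurs.
    """
    record = {
        "event_id": str(uuid.uuid4()),                    # unique per event
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": "customer-service-agent",
        "action": action,                                 # what the agent did
        "amount_usd": amount_usd,                         # business impact, if any
        "approved_by": approved_by,                       # the human in the loop
        "approval_token_issued_at": approval_token_issued_at,
        "outcome": outcome,                               # what actually happened
    }
    # Append-only, one JSON object per line, written as it happens.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A record like this is what lets someone answer "show me every decision the agent made last Tuesday" with a filter rather than an archaeology project.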
2. Make sure the log cannot be silently edited
This is the one that most existing monitoring tools do not solve. If your log lives in a regular database, someone with database access can quietly change row 47, and nothing in the log will reveal that the change happened.
Tamper-evidence is the property you want: if someone rewrites row 47 of the log, the rest of the log proves it happened. Article 12 does not explicitly require it. But a regulator who asks "how do you know your logs have not been modified" will land very differently on "the log is structured so any later edit is visible" than on "we trust our database administrators."
You do not need to understand the cryptography. You do need to ask whether your team has it.
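For the reader who does want a glimpse of the mechanism (or wants something to forward to an engineer): one common way to get tamper-evidence is a hash chain, where each record carries a fingerprint of the record before it. The sketch below is a minimal illustration of the idea, not a production design; real systems add key management and anchoring on top.

```python
import hashlib
import json

def chain_records(records):
    """Link each log record to the previous one with a SHA-256 hash.

    Editing any earlier record changes its hash, which breaks every
    link after it -- the edit cannot stay hidden.
    """
    chained = []
    prev_hash = "0" * 64  # fixed genesis value for the first record
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        chained.append({"record": rec, "prev_hash": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chained

def verify_chain(chained):
    """Recompute every hash; return the index of the first broken link, or None."""
    prev_hash = "0" * 64
    for i, entry in enumerate(chained):
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return i
        prev_hash = entry["hash"]
    return None
```

If someone quietly edits row 47 of a chained log, `verify_chain` points straight at it. That is the whole trick, and it is why "any later edit is visible" is an answer a team can actually give.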
3. Keep the log for at least six months
Six months is the floor the Act itself sets. Many companies will want to keep logs longer, especially in regulated industries where other obligations (financial record-keeping, medical records retention, insurance claim windows) run to seven years or more. The EU AI Act's requirement sits on top of whatever you already have, not in place of it.
The cost of six months of agent logs is, for most companies, negligible. A mid-market company generating ten thousand agent decisions a day is looking at roughly a gigabyte a year of structured data. That is a rounding error on any cloud bill. What is not a rounding error is the cost of rebuilding the log six months in when nobody wrote it down correctly in the first place.
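The storage claim above is back-of-envelope arithmetic, and it is worth seeing written out. Assuming roughly 300 bytes per structured record (an assumption; your records may be larger):

```python
decisions_per_day = 10_000
bytes_per_record = 300      # assumed size of one compact structured log line
days_per_year = 365

total_gb = decisions_per_day * bytes_per_record * days_per_year / 1e9
print(f"{total_gb:.1f} GB per year")  # roughly a gigabyte a year
```

Even at ten times that record size, a year of logs is a few tens of gigabytes, which is still a rounding error on any cloud bill.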
4. Be able to hand the regulator something they accept
Having the log is one thing. Being able to produce it, in a form a regulator accepts, is another. This is the part most teams under-plan.
The posture you want is: a named person on your team can, on a reasonable deadline, export the last six months of agent activity to a file the regulator can read, together with some form of attestation that the file has not been altered in transit. For most companies, that file is a spreadsheet of decisions plus a signed summary from the compliance officer. For companies with better tooling, the signature is cryptographic and the attestation is automatic. Either shape is defensible. What is not defensible is "we would need to build that when the request comes in."
How to tell whether your engineering team has it handled
You do not need an engineering background to tell whether an engineering team is on top of Article 12. You need five questions and a sense of what a good answer sounds like. Ask the engineer who owns your agent infrastructure to spend thirty minutes walking through these.
- If I ask you today to show me every decision our customer-service agent made last Tuesday, how long does it take? A good answer sounds like: a few minutes, filtered by date and agent, here is what the result looks like. A handwave answer sounds like: we would have to pull logs from a few different places and cross-reference them.
- If someone with database access quietly changed one row in the log last month, would we find out? A good answer sounds like: yes, and here is the mechanism that would catch it. A handwave answer sounds like: we trust our team and we have access controls.
- What happens to the log if the hosted dashboard we use goes down? A good answer sounds like: the log keeps being written to our own database, the dashboard is a view on top. A handwave answer sounds like: we would lose some events until the dashboard came back.
- Can you export the last six months of agent activity to a file a regulator would accept? A good answer sounds like: yes, here is the format, and here is the signature file that comes with it. A handwave answer sounds like: we would need to build that feature.
- Who specifically on our team owns the log and the export, when a regulator calls? A good answer sounds like: a named person, with a documented runbook and a backup. A handwave answer sounds like: engineering would own that, we have not worked out the details yet.
If four or five of these answers come back in the handwave shape, you have a real gap. That is useful information. The EU AI Act is not the last AI regulation that is going to land on your company's desk, and the time to build this muscle is now.
There are other good frameworks worth knowing about, even if they are not binding. The NIST AI Risk Management Framework is the US-side voluntary guidance that most US enterprise compliance teams are also weaving into their programmes. It is not law, but it is the direction several regulators are orienting toward. Worth having on your radar, not worth losing sleep over this quarter.
How we think about this at Code Atelier
One honest paragraph about what we do, then we get to the action list. At Code Atelier, we build a governance toolkit for AI agent deployments. It writes the kind of log Article 12 asks for to whatever database your engineering team already runs, with tamper-evidence built in, and it produces a signed evidence pack your compliance officer can hand to a regulator. It ships as a Python library your engineering team can install in minutes. If you want the technical write-up, our governance page is the right place to send your engineer, and the v0.7.2 engineering post from yesterday describes the architecture in the detail an engineering reviewer will ask for. None of that is a pitch to you directly. You are not the buyer of a toolkit; you are the person who asks whether the toolkit exists and does what the regulation needs.
What to do this week
If the deadline has been on your desk for a while and you have not moved on it, here is the shortest list of concrete actions that gets you from "nothing" to "materially prepared" in a week of real calendar time.
- Ask engineering for a one-line answer to the question "what AI agents do we run in production, and does each one write to a log we could hand a regulator." Put the answer in writing.
- Forward Regulation (EU) 2024/1689, Article 12, to your outside or in-house counsel and ask for a written view on which of your systems count as high-risk under the Act. This is the only question in this piece that you genuinely cannot answer yourself.
- Schedule a thirty-minute walk-through with the engineer who owns agent infrastructure. Run the five questions in the section above. Write down the answers.
- Put 2026-08-02 on the compliance calendar with a 60-day, 30-day, and 7-day reminder. Your future self will thank you.
- Decide now who on your team owns the log, the export, and the regulator-facing attestation when a request lands. Write the name in a document that is not just your own notes. Backup person too.
None of those items takes more than an hour. The whole list is a week of meetings and a few memos. The hardest part is starting, which is why this piece exists.
A quick word on what this is and is not
This is not legal advice. If the EU AI Act is in scope for your company, your lawyer is the person who tells you which of your systems are high-risk, what the exact retention obligations look like in your jurisdiction, and what the penalty exposure is. This piece is the primer that helps you ask your lawyer better questions, not a replacement for them.
If you want to compare notes on the checklist, or if you are staring at one of the five questions above and not sure what a good answer looks like for your setup, I am easy to reach. The contact form at the top of the site goes directly to my inbox.