EU AI Act: Practical implications for municipalities and public bodies
Why the EU AI Act matters for municipalities
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689, the EU AI Act) creates obligations that affect both providers and deployers of AI systems, and public bodies are typically deployers. This is not theoretical: many municipal applications (social benefits management, automated case selection, CCTV with analytics) fall into categories that trigger concrete requirements around safety, transparency and documentation. This post summarizes the practical implications and lays out concrete steps a local authority can take right away.
What types of obligations appear and when do they apply?
The EU AI Act structures obligations by levels of risk:
- Prohibited practices (e.g., subliminal manipulation or generalized social scoring by public authorities): rare in everyday municipal practice, but note that the social scoring ban targets public bodies directly, so scoring projects warrant a check.
- High-risk systems — strict requirements: risk management, data governance, technical documentation, incident logging, human oversight, robustness and accuracy testing, and conformity assessment.
- Systems with specific transparency obligations: e.g., chatbots that interact with citizens must disclose that the user is dealing with an AI system.
- Low or minimal risk — limited requirements.
For public administrations, the primary focus is typically on high-risk systems and transparency obligations when there is citizen interaction.
Typical municipal cases and quick classification
- Scoring systems for social aid or beneficiary selection: very likely high risk.
- Automated decisions that affect rights or benefits: high risk, and also subject to the GDPR (Article 22 on automated decision-making, plus an impact assessment).
- CCTV with facial recognition: real-time remote biometric identification in publicly accessible spaces is largely prohibited, with narrow law-enforcement exceptions; check national implementing rules.
- Informational chatbots: transparency obligation (notice that the user is interacting with an automated system).
- Predictive analysis for urban planning (without individual decisions): can be low risk, but review the use of personal data.
Practical compliance steps (operational checklist)
1. Build an AI inventory (1–2 weeks)
   - List all tools that incorporate AI, noting provider, purpose, data used and whether they make automated decisions.
   - Assign an internal owner for each tool.
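The inventory row described above can be sketched as a simple record type. The field names and the example entry are illustrative assumptions, not a schema prescribed by the Act:

```python
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    name: str                  # tool or module name
    provider: str              # vendor or in-house team
    purpose: str               # what the system is used for
    data_used: list            # categories of data processed
    automated_decisions: bool  # does it decide without human review?
    owner: str                 # internal owner accountable for the system

inventory = [
    AISystemRecord(
        name="Benefits triage assistant",
        provider="ExampleVendor",
        purpose="Prioritize social aid applications for review",
        data_used=["income", "household composition"],
        automated_decisions=True,
        owner="Social Services Dept.",
    ),
]

# Flatten to plain dicts, ready for a CSV export or the public registry.
rows = [asdict(r) for r in inventory]
```

Keeping the inventory in a structured form like this makes the later steps (classification, registry publication) a matter of adding columns rather than starting over.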
2. Risk classification and prioritization (1 week)
   - Use a simple template: impact on rights, level of automation, data sensitivity.
   - Prioritize high-risk systems for immediate measures.
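The three-axis template can be turned into a crude prioritization score. The 0–2 scales and the band thresholds below are internal assumptions for triage, not values set by the EU AI Act:

```python
def risk_score(rights_impact: int, automation: int, data_sensitivity: int) -> int:
    """Each axis is scored 0 (low) to 2 (high); higher totals are reviewed first."""
    for v in (rights_impact, automation, data_sensitivity):
        if not 0 <= v <= 2:
            raise ValueError("each axis must be scored 0, 1 or 2")
    return rights_impact + automation + data_sensitivity

def priority_band(score: int) -> str:
    if score >= 5:
        return "high"    # immediate measures, candidate high-risk system
    if score >= 3:
        return "medium"  # schedule a full assessment
    return "low"         # monitor periodically

# Example: a social-aid scoring system (rights 2, automation 2, sensitive data 2)
band = priority_band(risk_score(2, 2, 2))
```

The score only orders the backlog; the legal classification under the Act still needs a case-by-case assessment with legal and IT (next step).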
3. Preliminary assessment and legal/IT coordination (2–4 weeks)
   - For systems using personal data, link the assessment to the GDPR (DPIA, Data Protection Impact Assessment).
   - Involve the Data Protection Officer and the ENS security officer (RD 311/2022).
4. Implement an AI-specific risk management system
   - Define data governance controls (origin, quality, representativeness).
   - Establish traceability requirements and model versioning records.
   - Define periodic tests for bias, accuracy and robustness.
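One shape a periodic bias/accuracy test can take, assuming you keep labelled validation cases with a group attribute: compute per-group accuracy and flag the system when the gap between groups exceeds an agreed threshold. The 0.2 threshold and sample data are illustrative:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        hits[group] += int(pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap(per_group):
    """Largest accuracy difference between any two groups."""
    vals = list(per_group.values())
    return max(vals) - min(vals)

sample = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 1),
]
per_group = accuracy_by_group(sample)
gap = accuracy_gap(per_group)
needs_review = gap > 0.2  # threshold agreed internally, logged with the run
```

Scheduling this check (e.g., quarterly) and archiving each run's output gives you the traceability record the step above calls for.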
5. Technical documentation and conformity dossier
   - Maintain a technical data sheet for each system: purpose, performance metrics, usage limits, test results and mitigation measures.
   - Prepare incident protocols (integrity loss, erroneous outputs) and an internal notification system.
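A minimal sketch of what a per-system data sheet can look like when kept as structured data rather than prose. The keys mirror the bullet above; the key names and all values are placeholder assumptions, not an official schema:

```python
import json

datasheet = {
    "system": "Grant prioritization model",
    "purpose": "Rank small-business grant applications for review order",
    "performance_metrics": {"accuracy": 0.91, "per_group_accuracy_gap": 0.04},
    "usage_limits": "Decision support only; final award requires human review",
    "test_results": "reference to the latest periodic bias/accuracy report",
    "mitigation_measures": ["periodic bias audit", "human override logging"],
}

# Serialize so the sheet can be versioned alongside the model it describes.
serialized = json.dumps(datasheet, indent=2)
```

Storing the sheet next to the incident protocol means both travel with the system through procurement, deployment and audits.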
6. Procurement and contract clauses (Law 9/2017)
   - Include contractual requirements on liability, access to code or technical explanations, audit rights and the obligation to provide documentation needed for conformity assessment.
   - Set acceptance criteria based on conformity tests and integration testing.
7. Transparency toward citizens
   - Provide clear notices when the system interacts with the public: purpose, right to request human review and how to file complaints.
   - Publish a public registry of the entity’s AI systems (functional summary and responsible contact).
8. Prepare for conformity assessments
   - Depending on the type of high-risk system, the assessment may require third-party review. Plan timeline and budget accordingly.
   - Keep reproducible tests and test datasets (or reproducible descriptions).
Interoperability with ENS and GDPR
- ENS (Royal Decree 311/2022): apply the technical and organizational security measures required by the ENS to any AI deployment (access control, encryption, incident management).
- GDPR: when processing personal data, conduct a DPIA and ensure rights (access, rectification, objection). Compliance with the EU AI Act complements but does not replace GDPR obligations.
Practical example (brief)
A municipality wants to implement an automatic prioritization system to award grants to small businesses:
- Step 1: inventory and classify — high risk (economic impact and rights).
- Step 2: joint DPIA for GDPR + EU AI Act.
- Step 3: procurement requirements: training data, fairness metrics, decision logs, right to human review.
- Step 4: run tests in parallel with human decisions before full deployment.
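Step 4 above (parallel running) can be reduced to one measurable quantity: how often the automated prioritization agrees with the human decision during the shadow period. The 0.9 go-live threshold below is an assumption the entity would set itself, not a legal requirement:

```python
def agreement_rate(pairs):
    """pairs: iterable of (automated_decision, human_decision) tuples."""
    pairs = list(pairs)
    if not pairs:
        raise ValueError("no decisions to compare")
    agree = sum(1 for auto, human in pairs if auto == human)
    return agree / len(pairs)

# Shadow-mode log: the system proposes, a human decides, both are recorded.
shadow_log = [
    ("award", "award"), ("reject", "reject"), ("award", "reject"),
    ("award", "award"), ("reject", "reject"),
]
rate = agreement_rate(shadow_log)
ready_for_deployment = rate >= 0.9  # go-live criterion agreed in advance
```

Disagreement cases are the valuable output: each one either reveals a model defect to fix or documents the human-override behaviour the transparency notices promise.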
OptimGov provides technical documentation templates and contract clauses that can speed up these steps in real projects.
Final recommendation: an actionable step
Immediate action (within 30 days): create the AI inventory and classify the five systems with the greatest impact. Appoint an owner for each system and plan a DPIA for those that are high risk.
Takeaway: Don’t wait for obligations to catch up with you. Identify, classify and put in place basic controls (inventory, DPIA, contractual clauses and transparency) to make compliance with the EU AI Act part of your entity’s operating cycle.