Digital government · Audit · AI

AI Auditing and Transparency Obligations in Public Administration

March 15, 2026 · 4 min read · OptimTech

The adoption of artificial intelligence systems by public administrations brings operational benefits, but also concrete auditing and transparency obligations. This article offers a practical approach for municipalities and public entities to be ready for internal and external inspections and to comply with legal requirements such as the GDPR, the ENS (RD 311/2022) and the obligations introduced by the EU AI Act.

Why does AI auditing matter in the public sector?

  • Legitimacy and trust: citizens expect clear explanations when administrative decisions rely on AI.
  • Regulatory compliance: the combination of the GDPR, ENS and sectoral rules requires documentation, traceability and security measures.
  • Operational resilience: regular audits detect bias, model degradation or security gaps before they cause harm.

Essential regulatory framework (practical summary)

  • GDPR: citizens' rights and the obligation to carry out impact assessments (DPIA/EIPD) when processing involves high risks to rights and freedoms.
  • EU AI Act: for systems classified as high-risk it requires technical documentation, risk management, log records and post-market monitoring.
  • ENS (RD 311/2022): security controls, system classification, incident management and requirements for cloud and networked systems.
  • Law 9/2017 on Public Sector Contracts: requires transparency in tender documents and technical criteria; the inclusion of AI in award criteria must be documented.
  • Law 38/2003 on Subsidies: requires justification for the awarding and use of subsidies; the use of AI in selection or justification processes must be traceable and auditable.

Operational checklist to be ready for an AI audit

  1. Inventory and classification

    • Record all systems that incorporate AI components (in-house software, SaaS, third-party models).
    • Classify by risk (automated decisions affecting rights, economic impact, health, safety).
  2. Technical and administrative documentation

    • System profile: purpose, scope, stakeholders, data sources, output criteria.
    • Technical documentation required by the EU AI Act for high-risk systems: model specifications, datasets, performance metrics, robustness tests.
    • Copies of contracts and tender documents (Law 9/2017) when AI participates in procurement or evaluation processes.
  3. Data lineage

    • Record source, timestamps, transformations and retention.
    • Metadata that allows reproducing the inputs used in specific decisions.
  4. Logging and decision records

    • Immutable logs including: model version, input, output, reasons/explanations, identifier of the user who supervised or executed the action, timestamp.
    • Retention periods in accordance with the GDPR and applicable regulations.
  5. Impact assessments and risk management

    • Updated DPIAs (EIPDs where applicable) for high-risk processing.
    • Register of mitigation measures and follow-up on results.
  6. Security and continuity controls (ENS)

    • Classification and measures according to RD 311/2022 (access management, encryption, backups).
    • Incident response procedures involving models or data.
  7. Transparency and citizens' rights

    • Accessible reports explaining the use of AI in key services.
    • Procedures for handling information requests, rectification and the exercise of other GDPR rights.
  8. Internal audit and third-party access

    • Mechanisms for auditors to access documentation, logs and test environments without compromising personal data.
    • Contracts with providers that include audit clauses and the obligation to provide technical evidence.
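The decision logging described in step 4 can be sketched in code. The following is a minimal, illustrative example of a tamper-evident (hash-chained) decision log capturing the fields listed above: model version, input reference, output, explanation, supervising user and timestamp. The class and field names are assumptions for illustration, not a mandated schema; a production system would also need secure storage and retention controls.

```python
import hashlib
import json
from datetime import datetime, timezone


def _entry_hash(record: dict, prev_hash: str) -> str:
    """Hash the record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class DecisionLog:
    """Append-only decision log. Each entry chains to the previous
    entry's hash, so any later modification of a stored record is
    detectable when the chain is verified."""

    GENESIS = "0" * 64  # sentinel hash for the first entry

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, *, model_version, input_ref, output,
               explanation, supervisor_id):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_ref": input_ref,        # pointer into data lineage records
            "output": output,
            "explanation": explanation,    # reasons given for the decision
            "supervisor_id": supervisor_id,  # user who supervised/executed
        }
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        self.entries.append((record, _entry_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = self.GENESIS
        for record, stored_hash in self.entries:
            if _entry_hash(record, prev) != stored_hash:
                return False
            prev = stored_hash
        return True
```

Chaining hashes is one lightweight way to meet the "immutable" requirement without special infrastructure: an auditor can re-verify the whole chain, and editing any past record invalidates every hash after it.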

Minimum artifacts an entity must be able to deliver

  • An up-to-date inventory of AI systems and their technical profiles.
  • Decision logs retained for at least the legally required period.
  • DPIAs/EIPDs with implemented mitigation measures.
  • Model cards and datasheets documenting data, training and limitations.
  • Records of validation tests and bias/robustness testing.
  • Contracting records and clauses that specify the use of AI in public procedures.
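The model cards mentioned above can start as a small structured document with a fixed set of fields. A minimal sketch follows; the field names are illustrative choices, not a format required by the GDPR, the ENS or the EU AI Act.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelCard:
    """Minimal model card capturing what auditors typically request."""
    system_name: str
    purpose: str
    model_version: str
    data_sources: list        # origin of training and evaluation data
    performance_metrics: dict  # e.g. accuracy, error rates per group
    known_limitations: list   # documented failure modes and exclusions
    responsible_unit: str     # accountable department or role

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)
```

Keeping the card machine-readable (JSON here) makes it easy to version alongside the model and to hand over during an audit together with the inventory entry it belongs to.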

Practical steps to get started in 90 days

  1. Weeks 1–4: Carry out a quick inventory and classify systems by risk.
  2. Weeks 5–8: Implement minimum logging on priority systems and ensure secure retention.
  3. Weeks 9–12: Draft the first system profiles, run DPIAs for the two highest-risk systems and schedule an internal audit.

Mature tools can speed up this work; platforms like OptimGov integrate inventory, traceability and the generation of audit documentation, aligned with the ENS and transparency requirements.

Recommended action (takeaway)

Start with an inventory and the implementation of decision logs: without traceability there can be no audit. Plan a DPIA for your highest-impact systems and ensure all supplier contracts include audit access. These three actions (inventory, logging, DPIA) are the minimum foundation to comply with the GDPR, ENS and the growing obligations of the EU AI Act.

If you need a practical guide to carry out the inventory and prioritize systems by risk, download or request a technical inventory template and a logging checklist for local administrations.