Generative AI in Public Administration: A Practical Guide to Safe and Responsible Uses
Why a practical approach to generative AI matters in the public sector
Large language models (LLMs) offer concrete benefits for local government: drafting documents, summarizing submissions, supporting citizen services, or generating minutes. But adopting them requires clear technical, organizational and legal controls to protect rights, ensure accountability and comply with rules such as the GDPR, the Spanish National Security Framework (ENS, Royal Decree 311/2022) and public procurement obligations (Law 9/2017). This guide provides practical steps, example use cases and minimum controls to get started safely.
Classify uses: from low to high risk
Before deploying any tool, classify use cases by their impact on rights, legal effects and personal data exposure.
Low-risk cases (internal drafting assistance):
- Generating drafts of reports, internal minutes and agendas.
- Summarizing public documents that contain no personal data.
- Recommending administrative templates.
- Controls: mandatory human review, approved prompt templates.
Medium-risk cases (official communications and citizen services):
- Preformatted responses to citizen inquiries.
- Drafts of non-binding administrative decisions.
- Controls: data anonymization, interaction logs, human sign-off and legal verification.
High-risk cases (automated administrative decisions):
- Systems that directly and automatically affect rights or benefits.
- Automated assessments that determine eligibility for aid.
- Controls: determine whether the system is classified as high-risk under the EU AI Act, require impact assessments, establish clear lines of responsibility and, where appropriate, avoid full automation.
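As an illustration, the three tiers above can be encoded as a simple triage helper so that every new use case is classified consistently. The tier names and control lists mirror this guide; the function signature and criteria are assumptions for the sketch, not a prescribed schema:

```python
# Minimum controls per risk tier, mirroring the lists above.
RISK_CONTROLS = {
    "low": ["mandatory human review", "approved prompt templates"],
    "medium": ["data anonymization", "interaction logs",
               "human sign-off", "legal verification"],
    "high": ["EU AI Act classification check", "impact assessment",
             "clear lines of responsibility", "no full automation"],
}

def required_controls(affects_rights: bool, citizen_facing: bool) -> tuple[str, list[str]]:
    """Map a use case onto the lowest tier consistent with its impact."""
    tier = "high" if affects_rights else "medium" if citizen_facing else "low"
    return tier, RISK_CONTROLS[tier]

# A preformatted citizen-inquiry assistant: citizen-facing, no legal effects.
tier, controls = required_controls(affects_rights=False, citizen_facing=True)
```

A lookup like this is deliberately conservative: any doubt about legal effects should push a case upward, never downward.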
Essential technical and operational controls
Data and prompt management
- Never include sensitive personal data in prompts without a legal basis and appropriate safeguards.
- Maintain approved prompt templates for official use; avoid ad-hoc prompts by staff.
- Version and audit important prompts and responses for traceability.
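One way to make prompt templates approved, versioned and auditable at the same time is to derive each version identifier from the template's content and to log only field names, never field values, when a template is used. The class below is an illustrative sketch under those assumptions, not a real library API:

```python
import hashlib
from datetime import datetime, timezone

class PromptRegistry:
    """Versioned store of approved prompt templates with an audit trail
    (illustrative sketch; names and structure are assumptions)."""

    def __init__(self):
        self.templates: dict[str, dict[str, str]] = {}  # name -> {version: template}
        self.audit_log: list[dict] = []

    def approve(self, name: str, template: str) -> str:
        """Register an approved template; the version is a hash of its content,
        so any change to the wording produces a new, traceable version."""
        version = hashlib.sha256(template.encode()).hexdigest()[:12]
        self.templates.setdefault(name, {})[version] = template
        return version

    def render(self, name: str, version: str, **fields) -> str:
        """Fill an approved template and record the use. Only field *names*
        are logged, so personal data never lands in the audit trail."""
        prompt = self.templates[name][version].format(**fields)
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "template": name,
            "version": version,
            "fields": sorted(fields),
        })
        return prompt

registry = PromptRegistry()
v = registry.approve("minutes", "Summarize these minutes in {language}, max {words} words.")
prompt = registry.render("minutes", v, language="Spanish", words=200)
```

Because versions are content hashes, an auditor can later confirm exactly which wording of a template produced a given output.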
Privacy and data protection
- Carry out a Data Protection Impact Assessment (DPIA) when processing may pose a high risk (GDPR).
- Define roles: the data controller (the public entity) and processors (model providers), bound by data processing agreements (Article 28 GDPR) that specify instructions, security measures and subcontracting arrangements.
Security and sovereignty
- Comply with ENS (Royal Decree 311/2022) requirements on system classification, security measures and incident management.
- Prefer on-premises deployments or certified cloud providers for sensitive data; document access controls and encryption in transit and at rest.
Transparency and human oversight
- Inform citizens when AI-generated text is used in public communications.
- Establish human-in-the-loop controls: review and final sign-off by a competent official before any publication or administrative act.
Quality, verification and auditing
- Validate outputs periodically against reference corpora and accuracy metrics.
- Keep session logs, inputs and outputs for internal audit and regulatory requirements.
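A minimal audit log can be an append-only JSON Lines file with one record per interaction. Storing the full prompt and response alongside a content hash makes the record tamper-evident; the field names below are assumptions for this sketch, not a standard schema:

```python
import hashlib
import io
import json
from datetime import datetime, timezone

def log_interaction(log: io.TextIOBase, session: str, prompt: str, response: str) -> None:
    """Append one tamper-evident audit record per model interaction.
    The hash lets an auditor verify the stored text was not altered."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "session": session,
        "prompt": prompt,
        "response": response,
        "sha256": hashlib.sha256((prompt + response).encode()).hexdigest(),
    }
    log.write(json.dumps(record, ensure_ascii=False) + "\n")

# In production this would be an append-only file; a StringIO stands in here.
buf = io.StringIO()
log_interaction(buf, "sess-001", "Summarize attachment A", "Attachment A concerns a zoning request.")
```

One record per line keeps the log easy to rotate, grep and feed into audit tooling.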
Procurement and practical legal obligations
When procuring generative model services (Law 9/2017):
- Specify in technical specifications:
  - Functional scope, security requirements (ENS), data protection obligations (GDPR), rights of access and auditability.
  - Traceability requirements (logging prompts/responses) and business continuity guarantees.
- Include technical evaluation criteria and acceptability metrics (for example, the percentage of responses that require review).
- Require clauses on subcontracting, intellectual property over prompts/artifacts and mitigation plans for identified biases.
Concrete example of a controlled deployment (municipal pilot)
- Use case: assisted generation of draft response letters to complaints.
- Minimum controls:
  - Personal data anonymized in prompts.
  - Approved and versioned prompt template.
  - Human review and final sign-off by the responsible officer.
  - Logs retained for auditing.
  - A simple DPIA and contractual clauses with the provider regarding processing and subcontracting.
- Pilot KPIs: reduction in drafting time per document, percentage of edits required by reviewers, number of privacy incidents.
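For the first control in the pilot, anonymizing prompts, a pattern-based redaction pass can serve as a starting point. The patterns below (Spanish DNI, e-mail address, phone number) are illustrative only; a production deployment should rely on a vetted PII-detection tool rather than hand-written regular expressions:

```python
import re

# Illustrative patterns only; real anonymization needs a vetted PII tool.
PII_PATTERNS = {
    "DNI": re.compile(r"\b\d{8}[A-Za-z]\b"),             # Spanish national ID
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+34\s?)?\d{9}\b"),
}

def anonymize(text: str) -> str:
    """Replace common identifiers with placeholders before the text enters a prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

redacted = anonymize("Complaint from 12345678Z, reachable at ana@example.org or 612345678")
# redacted == "Complaint from [DNI], reachable at [EMAIL] or [PHONE]"
```

Labeled placeholders (rather than deletion) keep the redacted text readable for the model while leaving no personal data in the prompt.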
Governance and training
- Define an internal AI use policy that includes permitted uses, roles and sanctions.
- Train legal teams, technical staff and front-line citizen service personnel on:
  - The risks of using prompts that contain personal data.
  - Warning signs for incorrect or biased outputs.
  - Procedures for escalating incidents.
Resources and regulatory compliance
- Check specific obligations under the EU AI Act according to your system’s classification (special attention to uses that affect fundamental rights).
- Comply with the GDPR for any processing of personal data (legal bases, DPIA, data subject rights such as access, rectification, erasure and objection).
- Apply ENS (Royal Decree 311/2022) to systems that handle sensitive or critical information.
- Use public procurement (Law 9/2017) to ensure providers meet technical and legal requirements.
Takeaway — 6-step checklist to get started safely
- Map and classify uses by risk (low/medium/high).
- Conduct a DPIA when appropriate and check EU AI Act obligations.
- Define prompt templates and clear rules prohibiting the inclusion of personal data.
- Require ENS controls, GDPR measures and audit rights in contracts.
- Implement human-in-the-loop and interaction logging for traceability.
- Start a limited pilot with KPIs, regular reviews and staff training.
Adopting generative AI in the public sector is feasible and valuable when responsible selection of use cases is combined with robust technical, legal and organizational controls. For entities seeking a practical risk assessment and an implementation plan, tools like the OptimGov Ready diagnostic can help prioritize concrete steps in line with the ENS and GDPR.