AI · Human Resources

AI-Augmented Public Servants: How to Redesign Roles and Processes in Local Government

April 3, 2026 · 5 min read · OptimTech

Why talk about “AI-augmented public servants” instead of just automation

Talking about AI-augmented public servants means designing collaboration between people and AI systems to improve decisions, efficiency and public service while preserving administrative accountability. In practice this isn't about replacing roles but about redistributing tasks: letting AI handle classification, pre-screening or anomaly detection, while the person focuses on legal assessment, professional judgement and citizen interaction.

This approach requires organizational changes, a compliance framework and a training plan. Below is a practical roadmap and concrete actions local councils and authorities can use to redesign roles with legal safety (ENS RD 311/2022, GDPR), and with public procurement (Law 9/2017) and grants management (Law 38/2003) criteria.

Phase 1 — Map tasks and decide the collaboration pattern

Before deploying AI, map processes and tasks in detail:

  • Identify candidate processes (e.g., processing of planning permissions, preliminary assessment of grant applications, citizen service for general information).
  • For each task, decide whether it will be:
    • Automated (no human intervention),
    • Augmented (AI generates outputs reviewed by an official),
    • Assisted (AI suggests, the person acts freely).
  • Define risk criteria: impact on rights, need for professional judgement, data sensitivity (rely on a Record of Processing Activities under the GDPR and ENS classification).

Practical tool: use a RACI chart per task + a risk/impact matrix to prioritize pilots.
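The risk/impact prioritization above can be sketched in a few lines. This is a minimal illustration, not a scoring methodology: the process names and the 1–5 scores are hypothetical, and a real assessment would come out of the Record of Processing Activities and ENS classification mentioned earlier.

```python
# Minimal sketch: rank pilot candidates on a risk/impact matrix.
# Names and 1-5 scores below are illustrative, not real assessments.

def prioritize(processes):
    """Rank processes for piloting: prefer high impact (benefit), low risk.
    Each entry is (name, impact 1-5, risk 1-5)."""
    # Simple priority score: impact minus risk; higher = better first pilot.
    return sorted(processes, key=lambda p: p[1] - p[2], reverse=True)

candidates = [
    ("Planning permissions triage", 4, 4),          # useful, but rights-sensitive
    ("Grant application pre-screening", 4, 3),
    ("General information citizen service", 3, 1),  # low risk: good first pilot
]

for name, impact, risk in prioritize(candidates):
    print(f"{name}: impact={impact}, risk={risk}, priority={impact - risk}")
```

With these illustrative scores, the low-risk citizen-information service ranks first, which matches the later advice to pick low-impact, high-volume processes for pilots.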

Phase 2 — Redesign jobs and professional profiles

Updating job descriptions is key to avoiding friction and labour risks:

  • Add new responsibilities: model supervision, validating outputs, incident logging and bias reviews.
  • Define competencies: data literacy, assessing explainability, working with human-machine interfaces.
  • Design hybrid profiles: for example a technical-legal role to review automated administrative decisions.
  • Plan engagement with unions and HR: early communication, agreements on responsibility and ongoing training.

Example: in the permits unit, create an “AI verifier” role responsible for reviewing files flagged by the system.

Phase 3 — Compliance, security and accountability

Adopting AI in the public sector requires meeting multiple frameworks:

  • GDPR: carry out Data Protection Impact Assessments (DPIAs) when processing may pose a high risk to rights and freedoms.
  • ENS RD 311/2022: classify assets and ensure security controls, continuity and incident management for deployed AI solutions.
  • EU AI Act: if the solution falls into high-risk categories (e.g., automated administrative decisions affecting rights), plan compliance with obligations on transparency, technical documentation and registration.
  • Public procurement (Law 9/2017): include clauses in tender documents on data traceability, audit rights, maintenance and explainability guarantees. For grants, verify documentary obligations under Law 38/2003.

Practical recommendation: integrate compliance requirements into project acceptance criteria and procurement documents.
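One way to make that recommendation concrete is a simple acceptance gate that refuses sign-off until the compliance artifacts exist. This is a hedged sketch: the artifact keys and descriptions are illustrative labels mapped to the frameworks above, not legal terms of art.

```python
# Hedged sketch: a pre-acceptance compliance gate for an AI procurement.
# Keys and descriptions are illustrative, not a normative checklist.

REQUIRED_ARTIFACTS = {
    "dpia": "Data Protection Impact Assessment (GDPR)",
    "ens_classification": "Asset classification and security controls (ENS RD 311/2022)",
    "ai_act_dossier": "Technical documentation and transparency file (EU AI Act)",
    "contract_clauses": "Traceability, audit and explainability clauses (Law 9/2017)",
}

def acceptance_gate(delivered):
    """Return descriptions of missing artifacts; an empty list means pass."""
    return [desc for key, desc in REQUIRED_ARTIFACTS.items()
            if key not in delivered]

# Example: vendor delivered only two of the four required artifacts.
missing = acceptance_gate({"dpia", "contract_clauses"})
for item in missing:
    print("Blocked - missing:", item)
```

The same dictionary can feed the tender documents, so procurement requirements and project acceptance criteria stay in sync.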

Phase 4 — Pilots, metrics and human oversight

Design pilots that test the human-AI model with clear rules:

  • Select 1–2 processes with low initial impact and high volume.
  • Define operational metrics: average resolution time, rework rate, human-AI agreement, security incidents and complaints.
  • Establish human intervention thresholds: situations that always require human review (e.g., discrepancies with records, atypical cases).
  • Implement audit logs that document inputs and decisions for later investigation in line with regulation.

Prioritize traceability: input/output logs, model versioning and decisions made by people.

Phase 5 — Training and change management

Sustainable adoption depends on training and expectation management:

  • Run practical initial training (not just concepts): interpreting AI outputs, using dashboards, escalation protocols.
  • Create communities of practice across units deploying AI.
  • Communicate transparently to citizens about AI use in public administration and the complaint channels (transparency obligations and GDPR data subject rights).
  • Define incentives and professional evaluation criteria that recognise supervision and continuous improvement of AI.

Procurement and relationship with vendors

When contracting AI solutions, require:

  • Clauses on data ownership and access, portability and deletion.
  • Commitments on maintenance, updates and vulnerability management in line with ENS.
  • Technical documentation that enables independent audits (requirements under the EU AI Act).
  • SLAs for explainability and response times for human support.

A public tender should incorporate these requirements into the award criteria (Law 9/2017).

Takeaway — Immediate action in 90 days

Recommended actions within 90 days for any local authority:

  1. Select 3 candidate processes and map tasks with a risk/impact matrix.
  2. Carry out a preliminary DPIA and ENS classification of the data involved.
  3. Design a pilot using an “AI-augmented” pattern (human-in-the-loop), defining metrics and human review thresholds.
  4. Prepare basic contractual clauses for future purchases (data, audit, maintenance).

Implementing these four actions lets you move quickly with legal and operational control. If you need support to assess maturity and design the pilot, OptimTech offers a practical evaluation as part of its OptimGov Ready service, which can help prioritise processes and contractual requirements.

Conclusion: the goal is not to replace the public function, but to enhance decision-making and operational capacity while maintaining the accountability, transparency and security required of the public sector.