
Change Management for AI Adoption in Public Administration

February 27, 2026 · 4 min read · OptimTech

Adopting artificial intelligence solutions in public administration is not just a technical task: it’s an organizational project that requires managing people, processes, regulatory compliance and citizen expectations. This piece offers a practical, step-by-step plan aimed at IT directors, digital transformation leads and municipal teams that need to introduce AI safely and with buy-in.

1. Governance and leadership: appoint a responsible owner with a clear mandate

  • Appoint an executive sponsor (senior political or technical leader) and an operational lead (CIO or department head).
  • Establish an AI Committee with representatives from services, legal, security (ENS, Spain's National Security Scheme), procurement and transparency.
  • Expected outcome: agile decisions on pilots, budget and risk criteria.

Example: a city council creates an AI Committee that approves a chatbot pilot for citizen information and defines escalation criteria.

2. Stakeholder mapping and early communication

  • Identify affected groups: administrative staff, unions, citizen services, legal and audit.
  • Design a communications plan: objectives, benefits, risks and channels (intranets, in-person sessions, FAQs).
  • Involve end users in defining requirements to reduce resistance.

Practical tip: hold "listening" sessions with 2–3 departments before the first pilot to collect real problems AI could help solve.

3. Low-risk pilots and an iterative approach

  • Prioritize use cases with low regulatory impact and clear metrics: automated document classification, duplicate request detection, and chatbots for opening hours and simple procedures.
  • Recommended duration: 3–6 months per pilot with measurable objectives.
  • Keep human-in-the-loop and have rollback plans.

Concrete example: a pilot to classify registration files that reduces routing time by 30% (measurable goal), without making automated decisions on sensitive cases.
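A pilot like this can encode its human-in-the-loop and rollback rules directly. The sketch below is illustrative only: the category names, the confidence threshold and the function itself are assumptions, not part of any real system described above.

```python
# Hypothetical routing rule for a registration-file classification pilot.
# Categories and the 0.85 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85          # below this, a person decides
SENSITIVE_CATEGORIES = {"social_services", "disciplinary"}  # never automated

def route_document(predicted_category: str, confidence: float) -> str:
    """Return 'auto' only for confident, non-sensitive predictions;
    everything else stays with a human reviewer (the rollback path)."""
    if predicted_category in SENSITIVE_CATEGORIES:
        return "human_review"
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto"
```

The point of the design is that automation is opt-in per case: sensitive categories and low-confidence predictions always fall back to the existing manual workflow, so switching the pilot off simply means routing everything to "human_review".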

4. Role-based training and creating "champions"

  • Design training paths by role: operators (use and limitations), middle managers (change management) and technical staff (maintenance and monitoring).
  • Appoint “champions” in each unit to act as points of contact and internal trainers.
  • Include training on legal obligations: GDPR (impact assessments), ENS RD 311/2022 (security) and EU AI Act requirements (transparency and documentation for high-risk systems).

Practical session: a 2-hour briefing for managers and a 4-hour technical session for IT before deploying the pilot.

5. Process review and documentation redesign

  • Document current processes and identify repetitive tasks that can be automated.
  • Redesign workflows with AI in mind: clean inputs, human validation points and decision logs.
  • Update procedure manuals, templates and internal control rules to include AI use.

Tip: use simple process maps (as-is / to-be) and clearly mark audit points.

6. Data, security and compliance: integrate from the start

  • Apply good data governance principles: cataloging, quality, minimization and records of processing.
  • Before deployment, carry out a Data Protection Impact Assessment (DPIA) if applicable (GDPR).
  • Ensure security requirements in line with ENS RD 311/2022: access control, encryption, incident management and continuity.
  • Classify systems under the EU AI Act: determine whether the model qualifies as “high risk” and apply the corresponding obligations (conformity assessment, technical documentation, transparency).
  • Coordinate with public procurement (Spanish Public Sector Contracts Law, Law 9/2017) when acquiring third‑party solutions: include audit rights, continuity, data sovereignty and maintenance clauses.

Don’t forget to keep traceability (logs) of decisions and model versions for audits.

7. Measurement, monitoring and controlled scaling

  • Define KPIs from the start: processing time, error rate, rework, citizen satisfaction, compliance incidents.
  • Implement operational monitoring (misrouted cases, detected biases, performance degradation).
  • Scale according to criteria: legal compliance, reduction of incidents, internal acceptance and operational return.
  • Maintain regular reviews and retraining plans for models.

KPI example: reduce average citizen service response time from 48 to 24 hours in six months.

8. Support structure: a lightweight CoE and service level agreements

  • Set up a Center of Excellence (CoE) or technical core, even if small, to share patterns, libraries and lessons learned.
  • Define internal SLAs with user units: response time for incidents, update frequency and escalation owners.

Modular platforms like OptimGov can help governance and versioning in municipal environments, integrating compliance and control requirements.

Common risks and quick mitigations

  • Resistance: early communication and visible pilots.
  • Legal risk: DPIA and legal advice from the design phase.
  • Loss of technical control: clear governance policies and contractual clauses.
  • Biases and errors: human oversight, disaggregated metrics and tests with real data.

Recommended action (takeaway)

  1. Launch a low-risk pilot (3–6 months) with clear KPI objectives and an executive sponsor.
  2. Perform the DPIA and verify ENS/GDPR/EU AI Act requirements before production.
  3. Appoint champions for each unit, plan role-based training and establish a lightweight CoE to centralize lessons.

Adopting AI in the public sector is not a purely technical exercise: it’s an organizational process that, when planned with governance, training and compliance in mind, reduces risk and improves services for citizens.