
Citizen-Centered AI Design for Public Services

April 1, 2026 · 5 min read · OptimTech

Why citizen-centered design matters for public AI

AI projects in the public sector often start from internal goals (cost savings, efficiency). However, if the design doesn’t prioritize citizens’ experience and rights, problems arise: vulnerable groups get excluded, automated decisions go unexplained, complaints rise, and trust erodes. Designing with citizens at the center reduces legal risk (GDPR, EU AI Act) and improves the real effectiveness of the service.

Below is a practical process, applicable to municipalities and public bodies, to design AI services that are useful, accessible, and legally robust.

A 5-phase process with concrete actions

1) Define citizen outcomes and map users

  • Goal: prioritize what citizens need, not what the administration wants to optimize.
  • Actions:
    • Identify 3–5 measurable user outcomes (e.g., time to complete an application, rate of understanding of the response, rate of avoided in-person visits).
    • Build 4–6 representative user personas, including vulnerable profiles (older adults, people who aren't digitally native, those with hearing/visual impairments).
    • Map current channels (web, phone, in-person) and friction points.

2) Co-design and prototype with real users

  • Goal: validate assumptions before investing in complex models.
  • Actions:
    • Hold 2–3 co-design workshops with citizens and frontline municipal staff.
    • Prototype flows on paper or with low-code tools; test in a real scenario with 10–20 users per iteration.
    • Collect qualitative feedback on language clarity, output formats, and need for human intervention.

3) Privacy, security and compliance by design

  • Goal: embed GDPR, the EU AI Act and the Spanish National Security Scheme (ENS) from the start.
  • Actions:
    • Apply data minimization: only collect what is essential for the defined outcome.
    • Determine the legal basis (e.g., public interest, compliance with a legal obligation) and document it.
    • If processing is high-risk, plan a Data Protection Impact Assessment (DPIA).
    • Require pseudonymization where possible and define clear retention periods.
    • Incorporate controls and settings that allow auditing of decisions (logging of inputs/outputs).
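As an illustration of the last two actions, here is a minimal sketch of pseudonymized decision logging. The key handling, field names and `log_decision` helper are assumptions for illustration, not a prescribed implementation; a real deployment would keep the key in a managed secret store and write to an append-only audit log.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical secret, kept outside the codebase in practice (e.g., a key vault).
# HMAC rather than a plain hash so IDs cannot be brute-forced offline.
PSEUDONYMIZATION_KEY = b"replace-with-a-managed-secret"

def pseudonymize(citizen_id: str) -> str:
    """Stable pseudonym: logs can be correlated without storing the real ID."""
    digest = hmac.new(PSEUDONYMIZATION_KEY, citizen_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def log_decision(citizen_id: str, inputs: dict, output: str, model_version: str) -> dict:
    """Build an audit record with pseudonymized identity and only essential inputs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": pseudonymize(citizen_id),
        "inputs": inputs,  # only the fields defined as essential for the outcome
        "output": output,
        "model_version": model_version,
    }

record = log_decision("12345678Z", {"procedure": "housing_benefit"}, "eligible", "v1.2")
print(json.dumps(record, indent=2))
```

Because the pseudonym is deterministic, auditors can trace all decisions about one person across the log without the system ever retaining the raw identifier.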

4) Transparency, explainability and user options

  • Goal: ensure citizens understand when and how AI is involved.
  • Actions:
    • Create a brief label to accompany interactions: who runs the system, purpose, user rights and how to request a human review.
    • Show, when appropriate, confidence indicators or sources (e.g., “Response based on regulation X; estimated accuracy: 82%”).
    • Guarantee clear appeal routes and frictionless access to human support, with maximum response times defined.
    • Design outputs in plain language and in the languages relevant to the municipality.

Example short text for users:

  • “This service uses AI to help complete your application. Minimal data used: name, ID number and type of procedure. If you disagree with a decision, you can request a human review by calling 012 or at the municipal registry.”
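A notice like the one above can be generated from a structured label, which helps keep the wording consistent across web, phone and in-person channels. A minimal sketch; the schema and field names are illustrative assumptions, not a legal standard:

```python
# Hypothetical label schema: one structured source of truth for the disclosure.
AI_LABEL = {
    "operator": "municipal IT service",
    "purpose": "help complete your application",
    "data_used": ["name", "ID number", "type of procedure"],
    "human_review": "calling 012 or at the municipal registry",
}

def render_label(label: dict) -> str:
    """Turn the structured label into the plain-language notice shown to users."""
    return (
        f"This service uses AI, operated by the {label['operator']}, "
        f"to {label['purpose']}. "
        f"Minimal data used: {', '.join(label['data_used'])}. "
        f"If you disagree with a decision, you can request a human review "
        f"by {label['human_review']}."
    )

print(render_label(AI_LABEL))
```

Keeping the label as data also makes it straightforward to translate the notice into every language relevant to the municipality.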

5) Testing, metrics and continuous improvement

  • Goal: measure impact on citizens and detect bias or operational failures.
  • Recommended metrics:
    • Task completion rate by channel and by persona.
    • Comprehension rate (quick post-interaction survey: was the response clear to you?).
    • Relevant operational error rate (decisions requiring human intervention).
    • Average time to resolve complaints.
    • Equity indicators (compare outcomes across demographic groups).
  • Actions:
    • Pilot in a controlled area (e.g., automated guidance for municipal benefits or form pre-filling) for 8–12 weeks.
    • Collect logs and evidence for the technical documentation required by the EU AI Act and internal audits.
    • Set up a citizen feedback inbox and review iteratively.
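The completion and equity metrics listed above can be computed directly from the pilot logs. A minimal sketch with illustrative data; the persona names and log fields are assumptions for the example:

```python
from collections import defaultdict

# Illustrative pilot log entries; in practice these come from the audit store.
interactions = [
    {"persona": "older_adult", "completed": True, "understood": True},
    {"persona": "older_adult", "completed": False, "understood": False},
    {"persona": "digital_native", "completed": True, "understood": True},
    {"persona": "digital_native", "completed": True, "understood": True},
]

def rates_by_persona(logs):
    """Completion rate per persona, plus the equity gap between groups."""
    groups = defaultdict(list)
    for entry in logs:
        groups[entry["persona"]].append(entry)
    rates = {
        persona: sum(e["completed"] for e in entries) / len(entries)
        for persona, entries in groups.items()
    }
    # Equity indicator: gap between the best- and worst-served group.
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

rates, gap = rates_by_persona(interactions)
print(rates, f"equity gap: {gap:.2f}")
```

A large gap between personas is exactly the signal that should pause a rollout: the service “works” on average while still excluding a specific group.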

Requirements in contracts and project governance

  • Include co-design and user testing clauses in the tender documents.
  • Establish KPIs focused on citizens (not just cost savings).
  • Request technical documentation, explanations of automated decisions and the ability to export data for audit.
  • Define human fallback clauses and service levels for in-person/telephone support.

Practical considerations and risks to avoid

  • Don’t launch without testing with vulnerable groups: a design that “works” for a digital majority can exclude others.
  • Don’t use technical language in the public interface: prioritize clarity.
  • Don’t outsource all governance: keep municipal capacity to supervise and audit (logging, DPIA, evaluations).
  • Maintain alternative channels (in-person/phone) for those who don’t use the digital route.

Action template: 90-day sprint

  • Days 0–15: define citizen outcomes and personas.
  • Days 16–35: co-design and low-fi prototypes.
  • Days 36–60: develop the minimum viable pilot with privacy controls.
  • Days 61–80: user testing and basic metrics.
  • Days 81–90: review, regulatory documentation (DPIA if applicable) and scaling plan.

Takeaway / Recommended action: launch a 90-day sprint with citizen-centered objectives (define 2 user KPIs) and test with at least one vulnerable group; document the DPIA and the appeals route from day one. A practical, citizen-focused approach reduces legal risks and improves real AI adoption in your municipality.

(For teams that need support in the diagnostic phase and in defining citizen KPIs, tools like OptimGov can be integrated into co-design processes without compromising data sovereignty.)