Judgement & Governance

This lane is for moments where judgement, responsibility, and long-term consequences matter more than speed or scale.

You may already have risk registers, dashboards, and compliance paperwork. What is often missing is a clear view of where checking, interpretive labour, and unspoken risk are accumulating inside the system. Cernavia produces bounded judgement artefacts that surface what current metrics and frameworks miss, so that senior decision-makers can act with their eyes open.

Each artefact is written for a specific decision moment.

What I deliver

Core artefact

Verification-Load Diagnostic

When AI systems are introduced, someone still has to decide what can be trusted.

Core question

Where is human checking and interpretive labour accumulating, and at what cost?

What it does

Verification-Load Diagnostics map who is reading, checking, and correcting AI outputs, how that load is distributed, and where it becomes unsafe or unsustainable.

What you receive

  • a diagnostic memo of around 15 to 25 pages, highlighting key verification points, choke spots, and overload risks
  • one or two visual maps showing how verification load moves through your system
  • a short “Implications for governance and operations” section

Core artefact

Shadow-Risk Brief

Standard risk tools tend to foreground what is easiest to quantify.

Core question

What risks fall outside current frameworks but still matter for legitimacy, care, and trust?

What it does

Shadow-Risk Briefs surface relational, workload, psychosocial, and legitimacy risks that do not sit neatly in existing registers, but matter deeply for trust and institutional integrity.

What you receive

  • a brief of around 20 to 40 pages, integrating research, field accounts, and your existing governance or risk documents
  • clearly framed scenarios and red flags, for example, “if you see X, treat it as an escalation point”
  • an optional board or committee summary

Core artefact

Governance Maturity & Judgement Note

Many maturity models score policies on paper.

Core question

How are decisions and responsibilities actually working in practice, and what is realistic to change now?

What it does

These notes focus on how governance, ethics, and policy are actually lived in the parts of your system touched by AI or other complex tools.

What you receive

  • a memo of around 15 to 20 pages
  • a clear arc such as Ad hoc → Diffuse → Designed → Integrated
  • strengths to protect and a limited number of actionable gaps that matter for decisions and risk
  • one or two simple visuals you can reuse in internal discussions

The goal is a clear internal picture you can act from.

When this work is useful

Signals this may be the right lane

  • AI-enabled tools are being funded, rolled out, or regulated in education, neurodivergence, social, or child-rights contexts, and you suspect risk is migrating beneath the surface.
  • Staff, teachers, or families are quietly absorbing extra checking and emotional load, while official documents still speak in the language of “efficiency”.
  • Boards, audit and risk committees, or steering groups are asking for a deeper view of risk, responsibility, and readiness than standard compliance checklists can provide.
  • You sense that important dimensions, including relational, pedagogical, and systemic concerns, fall outside dominant frames, even in highly expert environments.

How this complements existing risk work

These artefacts are designed to sit alongside legal, technical, and compliance assessments. They:

  • Surface what is being missed, including hidden assumptions, second-order effects, invisible labour, and long-tail harms
  • Stay decision-facing, written for specific governance moments such as a board, committee, or portfolio decision
  • Start field-up, moving from lived contexts such as teachers, neurodivergent families, frontline workers, communities, or annotators up to frameworks
  • Remain honest about limits, separating what is known, assumed, and uncertain
  • Are finite by design, with scoped, time-bound, asynchronous engagements that conclude cleanly

Who commissions this lane

  • Governance, risk, digital, or ethics leads in education, neurodivergence, social, or child-rights portfolios
  • Programme or portfolio directors funding AI-enabled or data-intensive tools
  • Foundations and multilaterals needing a deeper view of risk in a specific initiative or across a portfolio
  • AI and EdTech teams wanting an independent, field-up view of verification load and shadow risk before scaling

If you need to look a board, committee, or community in the eye and say, “This is what we know, this is what we do not know, and this is what we are doing about it,” this work is designed for you.

How it works

Indicative rhythm

Timeframes

  • Verification-Load Diagnostic: usually 4 to 8 weeks
  • Shadow-Risk Brief: usually 8 to 12+ weeks
  • Governance Maturity & Judgement Note: usually 4 to 8 weeks

Inputs from you

  • relevant documents such as policies, risk registers, evaluation outputs, and usage data
  • 3 to 8 targeted conversations with key people across governance, product, and frontline roles, including neurodivergence or child-rights perspectives where relevant
  • clarity on the decision or governance moment this artefact should inform

Engagement format

  • fixed scope and deliverables agreed up front
  • remote, desk-based, and primarily asynchronous
  • one or two structured feedback cycles
  • confidential handling of all materials

Starting a conversation

When the concern is still difficult to name

Most governance work begins with a concern that is difficult to name clearly.

If you suspect verification load or shadow risks are building beneath current dashboards, or if you need an independent artefact to brief a board, committee, or funder before a key decision, a short note is enough to begin: a paragraph on your context, timelines, and what you are trying to decide.