Advisory Paper · Thought Leadership · April 2026

AI Governance for
Established Regulated Firms.
From Hidden Use to Defensible Control.

The governance problem is not whether AI is used. It is whether the firm can prove that it knows where AI is used, what decisions it influences, what evidence exists, and who is accountable.

Boards & executive teams · Risk & compliance leaders · Operational owners · Regulated financial services

Download the paper

Full advisory paper — 7 sections, 30+ pages. Free. No registration required.


PDF · 484 KB · April 2026

Executive Summary

The primary challenge is not AI ambition. It is control.

Artificial intelligence is already present in most established regulated firms. In the majority of cases it arrived before governance did.

This advisory paper makes a single, practical argument: the primary challenge for established regulated organisations is not AI ambition — it is the ability to demonstrate, on demand and to a demanding audience, that the organisation knows where AI is used, what risk it carries, who is accountable, what decisions have been made, and what evidence exists.

That standard — defensible control — is not a future regulatory aspiration. It is the current expectation embedded in the EU AI Act, in FCA operational resilience and model risk guidance, in ISO 42001, and in the board-level accountability frameworks that firms already operate under.

The paper sets out the five failure modes that leave established regulated firms exposed, defines what defensible control requires in practice, and describes the operating model — Sentinel and Citadel — that delivers it.

"The difference between saying 'we have guidelines' and showing a regulator an operating trail is the difference between a governance statement and a governance fact."

— EAIC Advisory Paper, Section 3

Contents

  1. The Hidden-Use Problem — and the regulatory context that makes it urgent
  2. Why Established Regulated Firms Are Disproportionately Exposed
  3. What Defensible Control Actually Means — across six control dimensions
  4. The EAIC Operating Model — Sentinel activates, Citadel governs
  5. Platform Credibility — the Citadel technical foundation
  6. Sentinel Activation — fixed fee, fixed scope, closed when live
  7. The EAIC Position

Referenced frameworks

  • EU Artificial Intelligence Act (Reg. 2024/1689)
  • FCA PS21/3 · DP5/22 · Model Risk Guidance
  • ISO/IEC 42001:2023 AI Management Systems
  • ICO AI & Data Protection Guidance
  • PRA SS1/23 Model Risk Management

Section 2

The five structural failure modes.

Established regulated firms face the same external obligations as large enterprises but carry far less organisational slack — fewer specialist staff and less spare management capacity. These five failure modes are the result.

1. No reliable inventory

The firm cannot state with confidence which systems, teams, or third-party vendors are using AI — particularly where AI capability is embedded in existing software.

2. No tiering logic

No consistent mechanism to distinguish low-friction productivity assistance from higher-stakes decision support, autonomous process execution, or agentic automation.

3. No evidential backbone

Approvals, evaluations, exception records, and control attestations exist in scattered form — across inboxes, shared drives, and presentation decks — or are not captured at all.

4. No decision trace

Leadership cannot reconstruct who approved what, under which assumptions, with what conditions, or when that approval expires or requires renewal.

5. No escalation path

Incidents, near-misses, model drift events, policy breaches, and vendor risk events lack a coherent governance route — they surface ad hoc, are resolved informally, and leave no durable record.
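The first two failure modes — no inventory, no tiering — are remediable with even a minimal structured register. The sketch below is purely illustrative: the field names and tier labels are our assumptions for the purpose of the example, not EAIC's or Citadel's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    # Hypothetical tiering, from lowest to highest governance friction
    PRODUCTIVITY = 1      # low-friction assistance, e.g. drafting
    DECISION_SUPPORT = 2  # informs a human decision
    AUTONOMOUS = 3        # executes a process without per-step review
    AGENTIC = 4           # plans and acts across systems

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI inventory."""
    system_name: str
    owner: str             # accountable individual, not a team alias
    vendor_embedded: bool  # AI capability inside bought-in software?
    tier: Tier
    decisions_influenced: list[str]

registry = [
    AISystemRecord("crm-email-drafts", "j.smith", True,
                   Tier.PRODUCTIVITY, ["customer correspondence"]),
    AISystemRecord("credit-triage-model", "a.jones", False,
                   Tier.DECISION_SUPPORT, ["loan referral routing"]),
]

# With an inventory, the firm can answer "where is higher-stakes AI?"
# on demand rather than by ad-hoc survey:
high_stakes = [r.system_name for r in registry
               if r.tier.value >= Tier.DECISION_SUPPORT.value]
print(high_stakes)  # ['credit-triage-model']
```

Even at this level of simplicity, the record forces the two questions most firms cannot answer: who owns the system, and what decisions it touches.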

What the paper argues

Three convictions. One practical conclusion.

The paper is built around a coherent argument rather than a framework survey.

The right starting point is control, not ambition

Most established regulated firms already have more AI than their governance acknowledges. The primary challenge is not deploying more — it is gaining visibility and defensible control over what is already present.

Defensible governance is not a large-enterprise luxury

The same regulatory obligations that apply to large enterprises apply to established regulated firms. The operating model has to be proportionate — executable without large specialist teams, and designed to survive scrutiny.

The close condition is Citadel going live

Governance programmes most commonly fail when they end with a report. The EAIC model is built around a different close condition: the engagement closes when Citadel is live, the estate is populated, and the governance cadence has started.

Platform authority comes from governance mechanics

A governance platform earns authority through the depth of its governance mechanics — not the breadth of its feature set. The paper covers Citadel's system registry, risk engine, decision ledger, evidence model, and board reporting in detail.
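The decision-trace requirement described above — who approved what, under which assumptions, with what conditions, and until when — can be pictured as an append-only ledger entry. This is a hedged illustration only; the fields and expiry logic are our assumptions, not Citadel's actual data model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)  # frozen: ledger entries are immutable once recorded
class ApprovalEntry:
    """A hypothetical decision-ledger record for one AI system approval."""
    system_name: str
    approved_by: str
    approved_on: date
    assumptions: tuple[str, ...]  # what the approval relies on being true
    conditions: tuple[str, ...]   # controls attached to the approval
    valid_days: int               # approvals expire; renewal is forced

    def expires_on(self) -> date:
        return self.approved_on + timedelta(days=self.valid_days)

    def needs_renewal(self, today: date) -> bool:
        return today >= self.expires_on()

entry = ApprovalEntry(
    system_name="credit-triage-model",
    approved_by="chief.risk.officer",
    approved_on=date(2026, 1, 15),
    assumptions=("model retrained quarterly",),
    conditions=("human review of declined referrals",),
    valid_days=180,
)

print(entry.expires_on())                     # 2026-07-14
print(entry.needs_renewal(date(2026, 8, 1)))  # True
```

The design choice the sketch makes visible: an approval without an expiry date is a statement, not a control. A dated, conditioned, renewable record is what turns "we approved it" into something a regulator can inspect.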

Download the full paper

AI Governance for Established Regulated Firms.
From Hidden Use to Defensible Control.

Seven sections. The five failure modes. The defensible control framework. The Citadel technical foundation. The Sentinel activation model. Free — no registration required.


PDF · 484 KB · April 2026 · EAIC Ltd · eaic.uk