The governance problem is not whether AI is used. It is whether the firm can prove that it knows where AI is used, what decisions it influences, what evidence exists, and who is accountable.
Full advisory paper — 7 sections, 30+ pages. Free. No registration required.
Executive Summary
Artificial intelligence is already present in most established regulated firms. In the majority of cases it arrived before governance did.
This advisory paper makes a single, practical argument: the primary challenge for established regulated organisations is not AI ambition. It is the ability to demonstrate, on demand and under scrutiny, that the organisation knows where AI is used, what risk it carries, who is accountable, what decisions have been made, and what evidence exists.
That standard — defensible control — is not a future regulatory aspiration. It is the current expectation embedded in the EU AI Act, in FCA operational resilience and model risk guidance, in ISO 42001, and in the board-level accountability frameworks that firms already operate under.
The paper sets out the five failure modes that leave established regulated firms exposed, defines what defensible control requires in practice, and describes the operating model — Sentinel and Citadel — that gets you there.
"The difference between saying 'we have guidelines' and showing a regulator an operating trail is the difference between a governance statement and a governance fact."
— EAIC Advisory Paper, Section 3
Section 2: The five failure modes
Established regulated firms face the same external obligations as large enterprises but carry far less organisational surplus. These five failure modes are the result.
1. The firm cannot state with confidence which systems, teams, or third-party vendors are using AI, particularly where AI capability is embedded in existing software.
2. No consistent mechanism exists to distinguish low-friction productivity assistance from higher-stakes decision support, autonomous process execution, or agentic automation.
3. Approvals, evaluations, exception records, and control attestations exist in scattered form, across inboxes, shared drives, and presentation decks, or are not captured at all.
4. Leadership cannot reconstruct who approved what, under which assumptions, with what conditions, or when that approval expires or requires renewal (a minimal sketch of such a record follows this list).
5. Incidents, near-misses, model drift events, policy breaches, and vendor risk events lack a coherent governance route: they surface ad hoc, are resolved informally, and leave no durable record.
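To make failure modes 3 and 4 concrete, the sketch below shows the minimum a decision record has to capture before it can be reconstructed on demand. It is an illustrative assumption about shape, written in TypeScript for readability; every field name here is hypothetical, not Citadel's actual schema.

```typescript
// Hypothetical decision-record shape. Field names are illustrative
// assumptions, not Citadel's published schema.
interface AIDecisionRecord {
  systemId: string;           // the AI system or embedded vendor capability
  decision: "approved" | "approved-with-conditions" | "rejected";
  approver: string;           // the named accountable individual
  assumptions: string[];      // what the approval rested on believing
  conditions: string[];       // e.g. human review of outputs, usage limits
  evidence: string[];         // evaluations, attestations, exception records
  approvedOn: Date;
  reviewBy: Date;             // when the approval expires or must be renewed
}

// A populated record answers, on demand: who approved what, under which
// assumptions, with what conditions, and when renewal falls due.
const example: AIDecisionRecord = {
  systemId: "crm-email-drafting-assistant",
  decision: "approved-with-conditions",
  approver: "Head of Operations",
  assumptions: ["no customer data leaves the tenant boundary"],
  conditions: ["human review before any client-facing send"],
  evidence: ["evaluation-2026-03.pdf", "vendor-attestation-2026-02.pdf"],
  approvedOn: new Date("2026-03-12"),
  reviewBy: new Date("2026-09-12"),
};
```

Approvals scattered across inboxes and decks fail precisely because none of these fields can be recovered together when a regulator asks.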
What the paper argues
The paper is built around a coherent argument rather than a framework survey.
Most established regulated firms already have more AI than their governance acknowledges. The primary challenge is not deploying more — it is gaining visibility and defensible control over what is already present.
The same regulatory obligations that apply to large enterprises apply to established regulated firms. The operating model has to be proportionate — executable without large specialist teams, and designed to survive scrutiny.
Governance programmes most commonly fail when they end with a report. The EAIC model is built around a different closure condition: the engagement closes when Citadel is live, the estate is populated, and the governance cadence has started.
A governance platform earns authority through the depth of its governance mechanics — not the breadth of its feature set. The paper covers Citadel's system registry, risk engine, decision ledger, evidence model, and board reporting in detail.
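As a reading aid only, the sketch below imagines how a single registry entry might tie those mechanics together. The type and field names are assumptions made for illustration, not Citadel's published data model.

```typescript
// Illustrative assumption: one registry entry linking the mechanics the
// paper describes. Not Citadel's actual data model.
type RiskTier =
  | "productivity-assist"   // low-friction assistance
  | "decision-support"      // influences regulated decisions
  | "autonomous"            // executes processes without human initiation
  | "agentic";              // plans and acts across systems

interface RegistryEntry {
  systemId: string;
  owner: string;            // the accountable executive
  vendor?: string;          // set where AI is embedded in third-party software
  tier: RiskTier;           // drives the depth of control the risk engine demands
  decisionIds: string[];    // links into the decision ledger
  evidenceIds: string[];    // links into the evidence model
  nextReview: Date;         // feeds the governance cadence and board reporting
}
```

On this reading the registry is the spine: the risk tier sets how much control each system must carry, and the decision and evidence links are what make board reporting reconstructable rather than asserted.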
Download the full paper
Seven sections. The five failure modes. The defensible control framework. The Citadel technical foundation. The Sentinel activation model. Free — no registration required.
PDF · 484 KB · April 2026 · EAIC Ltd · eaic.uk