AI Governance. Building it. Writing it. Defending it.
The AI Governance Gap Brief
19 Issues. One gap per issue. Written by Patrick Upmann.
Patrick Upmann
Publisher · Architect of Systemic AI Governance · Founder AIGN
The AI Governance Gap Brief is my monthly newsletter on LinkedIn. Each issue identifies one structural gap between how organisations deploy AI and how governance, accountability, and regulation actually work. It is written for boards, senior leaders, and compliance teams. No framework theory. No generic advice. One gap per issue.
2,600+ global readers — regulators, researchers, board members, corporate leaders
Your chatbot is live. It answers customer queries, handles complaints, supports HR decisions. But who reviewed the approval path? Who documented the oversight mechanism? Who can reconstruct the decision if it is challenged? Most organisations cannot answer these questions — not because they chose not to, but because governance was not built into the deployment decision. This issue maps the structural exposure that chatbot deployment creates for organisations under the EU AI Act and DORA.
Three blocks cover the full arc of the Brief — from where governance fails at the technology level, to how organisations create the conditions for failure, to the regulatory and liability exposure that results.
Block 1 — Technology & AI Systems · 6 Issues
Where specific AI technologies — chatbots, agentic systems, enterprise Copilots, shadow AI, algorithmic CV screening — create structural governance exposure that boards cannot see until after the incident.
Block 2 — Organisations & Culture
Where AI governance fails inside organisations — through culture, time pressure, budget decisions, procurement gaps, human factors, and structural misalignment between intent and execution.
Block 3 — Regulation & Liability
Where specific regulatory frameworks — the EU AI Act, DORA, data protection law — meet real deployment decisions, and why accountability and liability exposure is consistently higher than boards expect.