Patrick Upmann — AI Governance at Board Level

When governance becomes personal.

Patrick Upmann is engaged when AI governance can no longer be delegated — when decisions must be signed, documented and defended: under scrutiny, regulatory pressure, incidents, or personal liability exposure.

Request a Board Mandate
Confidential  ·  Board level only
Core Principle
If governance has to be reconstructed after the fact, it never existed.
— Patrick Upmann
Boards engage when
An audit or regulatory review has been announced
An AI-related incident has occurred or is escalating
Personal liability exposure becomes visible
Governance decisions are blocked across functions
  • 7-layer governance architecture connecting regulation with trust
  • Published research on AI governance architecture (SSRN)
  • 4 mandate types — from stress test to interim decision authority
  • EU, ISO, and NIST frameworks translated into defensible governance evidence
When Boards Engage

Four situations.
One standard: defensibility.

01 — Audit & Regulatory Pressure

Existing governance would not survive a formal audit.

Audits are announced, expanded, or escalated. Regulators demand evidence or named accountability for AI-related decisions.

  • Decision authority exists only implicitly
  • Accountability is distributed across functions
  • Documentation does not hold up under scrutiny
  • Governance relies on informal coordination
02 — Incident Escalation

Accountability becomes personal while the evidence record is incomplete.

AI-related incidents have occurred, are under investigation, or are set to escalate beyond the operational level.

  • Decisions were made but are not defensible
  • Escalation paths depend on individuals, not structures
  • Governance must be reconstructed after the fact
  • Evidence standards collapse under pressure
03 — Personal Liability Exposure

Responsibility is shared but not owned.

Board members, executives, or designated function holders face potential personal liability in connection with AI systems, decisions, or oversight obligations.

  • Unclear boundaries of decision authority
  • Evidence does not meet legal or regulatory standards
  • Liability arises without defensible governance structures
04 — Governance Deadlock

Governance exists but cannot be enforced.

AI governance decisions are blocked, politicised, or continuously deferred between legal, IT, compliance, risk, and business functions.

  • No recognised decision authority across functions
  • Committees without mandate or escalation authority
  • Time pressure increases while accountability remains unresolved
Mandate Types

Three mandates.
One outcome: governance that holds.

Early Exposure

Board AI Governance Stress Test

Determines whether current AI governance would survive an audit, regulatory inquiry, incident, or judicial review — today, not in principle.

Addresses: unclear decision authority, distributed accountability, missing or inadmissible evidence.
Outcome: board-ready assessment, exposure map, minimum changes required to achieve defensibility.
Duration: 2–4 weeks
Escalating Pressure

Preventive Accountability & Evidence Analysis

Builds decision authority and robust evidence before legal counsel, auditors, or regulators demand it under pressure.

Addresses: indefensible decisions, evidence requiring reconstruction, weak escalation paths.
Outcome: decision boundaries, approval design, documentation standards that withstand scrutiny.
Duration: 2–6 weeks
Active Escalation

Interim AI Governance Decision Lead

Time-limited interim authority when governance decisions cannot wait and must be stabilised at board level immediately.

Addresses: blocked decisions, active regulatory pressure, leadership gaps in accountability.
Outcome: stabilised authority, board-grade defensibility, clean handover to accountable owners.
Duration: 4–12 weeks
AIGN OS

A governance operating system
with seven layers.

01

Leadership & Accountability

Governance begins where named authority and accepted accountability begin — not where policy documents end.

02

Culture & AI Literacy

Education as governance infrastructure, not optional awareness. A literate organisation makes fewer ungoverned decisions.

03

Use Case & Risk Governance

AI usage must be known, classified, and risk-assessed before exposure becomes real. Unknown deployment is uncontrolled liability.

04

Decision Oversight

Who decides, on what basis, with which escalation path, and with what evidence. The foundation of defensible governance.

05

Controls & Safeguards

Governance must be operationally enforceable — not merely declared on paper or demonstrated in workshops.

06

Deployment & Monitoring

AI deployment without continuous governance creates uncontrolled, accumulating liability — invisible until an audit.

07

Evidence & Trust Infrastructure

Certification, licensing logic, and trust become visible, measurable market signals — not internal assertions.

Regulatory Alignment
EU AI Act
ISO/IEC 42001
NIS2
DORA
GDPR
NIST AI RMF
The Board Question

Where does our governance actually hold, where are we exposed, and what does preparation mean under real scrutiny?

Publications — SSRN

Academic foundations.
Public standard-setting.

2025

AIGN OS 2.0 — The Operating System for Responsible AI Governance

A certifiable governance architecture aligned with Europe’s integrated regulatory framework — translating regulation into systemic governance design.

2025

AIGN OS — AI Agents: The AI Governance Stack

Reframes agentic AI governance as regulatory infrastructure, addressing attribution, liability, and system control for autonomous AI systems.

2025

AIGN Systemic AI Governance Stress Test

A stress test methodology for governance resilience — from abstract principles to measurable safeguards under real pressure conditions.

2025

AIGN OS — Trust Infrastructure

Defines certification, licensing, and market enforcement as the missing enforcement layer that converts AI governance into measurable trust.

2025

The ASGR Index

The first global benchmark for systemic AI governance readiness — across policy alignment, technical governance, organisational maturity, and trust assurance.

2025–2026

From Law to Architecture

Extends AIGN OS to cover legal-operational design, procurement governance, synthetic knowledge augmentation, and geopolitical AI governance infrastructure.

Request a Mandate

When governance can no longer
be delegated.

When AI governance decisions in your organisation become personally exposed, time-critical, or legally sensitive — a mandate can be initiated directly at board or executive level.

Confidential  ·  Board level only  ·  No intermediaries

  • Boards — restore explicit decision authority that survives formal scrutiny.
  • Executives — define accountability boundaries that hold under audit.
  • Legal & Risk — establish defensible governance evidence before it is requested.
  • Organisations — move from AI exposure to structured governance capability.
  • Next step — request a confidential board mandate conversation.