AI systems – AI Governance

Patrick Upmann · AI Systems · Board Level

AI systems
produce decisions.
Who governs them?

AI governance no longer starts with one use case. It starts with the system categories that shape real business decisions: chatbots, scoring systems, fraud and AML engines, internal copilots, and agentic AI. Each system type creates a different governance burden. The core question is no longer whether AI is being used. The core question is whether the decisions shaped by these systems can still be explained, reconstructed, and defended.

5
Five system classes drive most current AI exposure
Chatbots, decision systems, fraud and AML systems, copilots, and agentic AI do not fail in the same way and cannot be governed identically.
Scope
Aug 2026
High-risk AI obligations become operational
Creditworthiness, insurance pricing, and employment-related systems may fall into Annex III scope depending on intended purpose and deployment context.
Deadline
€35M
Maximum penalty tier under the EU AI Act
The financial exposure is clear. The real leadership question is whether the organisation can defend how the system was governed.
Pressure
1
One missing owner can break defensibility
If no one owns the output, the model state, the controls, and the re-review trigger, no one can credibly defend the decision path.
Core Logic
Legal and editorial note (April 2026): This page provides general information about AI system governance, the EU AI Act, DORA, and related sectoral regulatory frameworks for orientation purposes only. It does not constitute legal advice, regulatory assessment, or certification. Whether a system is high-risk depends on intended purpose and deployment context and must be assessed case by case by qualified legal counsel.
The Structural Gap

Governance still treats AI
as one topic.
It is not.

Different AI systems generate different forms of exposure. Chatbots create conduct and representation risk. Decision systems create explainability and fairness pressure. Fraud and AML engines create triage and escalation pressure. Copilots create shadow usage and data leakage risk. Agentic systems create action without stable governance boundaries.

The numbers below combine statutory penalty ranges with governance logic relevant to real deployment environments. They are intended to frame decision pressure, not replace case-specific legal analysis.

€35M
Maximum fine tier under Art. 99 EU AI Act for the most serious categories of non-compliance.
Penalty logic under Regulation (EU) 2024/1689.
€15M
Maximum fine tier for non-compliance with high-risk AI obligations, or 3% of global annual turnover, whichever is higher.
Relevant for stand-alone Annex III systems from 2 August 2026.
5
Core AI system classes should be governed separately rather than grouped into one generic AI control model.
Practical governance logic for enterprise and regulated environments.
1
One missing control point can break the defensibility of the whole decision path: owner, data source, model change, override, or logging.
Board-level reconstruction logic.
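
What this reconstruction logic means in practice can be sketched in a few lines. The Python fragment below is illustrative only: the field names are assumptions chosen to mirror the five control points above, not a prescribed schema.

    from dataclasses import dataclass

    # Illustrative only: field names mirror the five control points above
    # and are assumptions, not a standard schema.
    @dataclass
    class DecisionPathRecord:
        output_owner: str | None = None      # who owns the output
        data_source_ref: str | None = None   # which data basis was used
        model_version: str | None = None     # exact model state
        override_policy: str | None = None   # how humans could intervene
        log_location: str | None = None      # where the evidence lives

    CONTROL_POINTS = ("output_owner", "data_source_ref", "model_version",
                      "override_policy", "log_location")

    def missing_control_points(record: DecisionPathRecord) -> list[str]:
        """Return the control points absent from a record. A single
        missing entry is enough to break end-to-end reconstructability."""
        return [p for p in CONTROL_POINTS if getattr(record, p) is None]

The point of such a check is not the code itself but the discipline it encodes: if any one field cannot be filled in, the decision path cannot be defended.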
System Map

Five AI system classes.
Five different governance burdens.

This page brings the full AI systems scope into one place. It is not enough to talk about "AI" in the abstract. Governance must follow the architecture of the actual systems that shape decisions, customer outcomes, internal operations, and supervisory exposure.

01
Chatbots &
Conversational AI
Fastest-scaling interface layer. Direct interaction with customers, employees, and markets.
Risk focus: conduct, misinformation, defensibility
02
Decision Systems
Credit, scoring, underwriting, pricing, recommendation, and prioritisation logic.
Risk focus: explainability, fairness, customer effect
03
Fraud & AML Systems
Detection, alerting, triage, anomaly identification, and escalation support.
Risk focus: false positives, false negatives, escalation quality
04
AI Copilots &
Internal Assistants
Internal knowledge, compliance, legal, policy, and operations copilots.
Risk focus: shadow AI, data leakage, wrong internal reliance
05
Agentic AI Systems
Systems that do not only respond, but initiate, chain, escalate, and act.
Risk focus: autonomy, action boundaries, loss of control
AI Systems in Detail

This is where AI governance
actually becomes operational.

System 01
Chatbots & Conversational AI
Chatbots are the most visible AI systems in enterprise environments because they sit directly at the interaction layer. They answer questions, shape expectations, guide customers, and increasingly influence financial, operational, and service decisions in real time. Their risk is not just technical failure. Their risk is that organisations cannot later reconstruct what was said, based on which context, with which model state, under whose approval, and with which escalation path.
Primary Exposure
Misinformation, conduct risk, undocumented outputs, direct customer impact
Typical Use Cases
Customer service, onboarding, product information, HR helpdesks, internal support
Use cases
Retail banking chat interfaces
Insurance service assistants
Employee HR and compliance bots
What breaks
Promises are made, customer guidance is wrong, outputs are not logged, and organisations cannot defend a single interaction under pressure.
What governance must prove
Owner, prompt and context governance, output logging, escalation path, override logic, change review, and customer-facing control standards.
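
As a minimal sketch of the logging standard this implies, the fragment below records what a single chat turn would need to capture to remain reconstructable. The schema, field names, and file-based storage are assumptions for illustration, not a reference implementation.

    import json
    import uuid
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    # Hypothetical log schema; every field name here is an assumption.
    @dataclass
    class ChatTurnLog:
        interaction_id: str
        timestamp: str
        model_version: str        # exact model state that produced the answer
        prompt_context_hash: str  # which approved prompt/context was active
        user_input: str
        model_output: str
        escalated_to_human: bool  # whether the escalation path was triggered
        output_owner: str         # accountable function for this interface

    def log_chat_turn(model_version: str, prompt_context_hash: str,
                      user_input: str, model_output: str,
                      escalated: bool, owner: str) -> None:
        entry = ChatTurnLog(
            interaction_id=str(uuid.uuid4()),
            timestamp=datetime.now(timezone.utc).isoformat(),
            model_version=model_version,
            prompt_context_hash=prompt_context_hash,
            user_input=user_input,
            model_output=model_output,
            escalated_to_human=escalated,
            output_owner=owner,
        )
        # Append-only JSON Lines as a stand-in; in practice this would be
        # a tamper-evident store, not a local file.
        with open("chat_audit.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(entry)) + "\n")

If a single interaction has to be defended later, each of these fields answers one of the questions posed above: what was said, under which model state, and who owned the outcome.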
System 02
Decision Systems
These are the systems that sit closest to economically relevant outcomes. Credit scoring, underwriting support, recommendation logic, customer prioritisation, insurance pricing, and selection models can all shape who gets approved, what price is offered, which case is escalated, or how resources are allocated. This is where explainability, fairness, intended purpose, and human oversight become decisive. In regulated contexts, this is also where EU AI Act high-risk logic can become directly relevant.
Primary Exposure
Customer effect, explainability failure, fairness exposure, regulatory classification
Typical Use Cases
Credit, scoring, underwriting, pricing, recommendation, prioritisation
Use cases
Creditworthiness assessment
Insurance risk assessment and pricing
Recruitment and employment-related ranking
What breaks
A system shapes a material outcome, but the institution cannot show why the output occurred, whether the data basis was valid, or where human oversight actually intervened.
What governance must prove
Risk classification, traceability, human oversight, data governance, documentation readiness, and controlled re-review after changes.
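
A decision trace that would satisfy these requirements can be sketched as follows. The record structure is an illustrative assumption, not Annex III terminology or a certified format.

    from dataclasses import dataclass

    # Illustrative trace; field names are assumptions.
    @dataclass(frozen=True)
    class DecisionTrace:
        case_id: str
        model_version: str
        input_data_hash: str        # snapshot of the data basis that was used
        raw_score: float
        threshold: float
        system_recommendation: str
        human_reviewer: str | None  # who exercised oversight, if anyone
        human_override: bool
        final_outcome: str

    def requires_re_review(trace: DecisionTrace,
                           current_model_version: str) -> bool:
        """A model change invalidates prior review state: any version
        drift should trigger the controlled re-review step."""
        return trace.model_version != current_model_version

The frozen dataclass is deliberate: a trace that can be edited after the fact is not evidence.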
System 03
Fraud & AML Systems
Fraud and AML systems often operate behind the scenes, but they can have major operational and financial effects. They decide what is flagged, what is prioritised, what is ignored, which cases become alerts, and which investigators spend time where. The central governance problem is not only model quality. It is whether the institution understands the escalation logic, can explain thresholds, knows how false positives and false negatives are handled, and can defend how the system shaped resource allocation and case handling.
Primary Exposure
False positives, false negatives, escalation bias, opaque prioritisation
Typical Use Cases
Fraud detection, AML triage, anomaly detection, alert ranking, case routing
Use cases
Transaction monitoring
Suspicious activity prioritisation
Case scoring and alert routing
What breaks
The institution trusts the alerting logic without being able to explain why cases were escalated, de-prioritised, or missed.
What governance must prove
Threshold control, escalation transparency, model monitoring, evidence quality, and link between system output and operational treatment.
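
What threshold control and escalation transparency can look like at the code level is sketched below. The thresholds and tier names are placeholders, not calibrated values; the point is that every triage decision carries its own rationale.

    from dataclasses import dataclass

    # Placeholder thresholds; in a governed system these values would be
    # versioned, approved, and subject to change control.
    ESCALATE_AT = 0.85
    REVIEW_AT = 0.60

    @dataclass
    class AlertDecision:
        alert_id: str
        anomaly_score: float
        tier: str       # how the score mapped to operational treatment
        rationale: str  # why the case was escalated or de-prioritised

    def triage(alert_id: str, anomaly_score: float) -> AlertDecision:
        if anomaly_score >= ESCALATE_AT:
            tier = "escalate"
            why = f"score {anomaly_score:.2f} >= escalation threshold {ESCALATE_AT}"
        elif anomaly_score >= REVIEW_AT:
            tier = "queue_for_review"
            why = f"score {anomaly_score:.2f} >= review threshold {REVIEW_AT}"
        else:
            tier = "monitor"
            why = f"score {anomaly_score:.2f} below review threshold {REVIEW_AT}"
        # Recording the rationale preserves the link between system output
        # and operational treatment.
        return AlertDecision(alert_id, anomaly_score, tier, why)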
System 04
AI Copilots & Internal Assistants
Internal assistants are often misclassified as low-risk because they do not directly face customers. That is a mistake. They shape internal advice, policy interpretation, document drafting, compliance support, legal research, and knowledge retrieval. They can spread wrong internal guidance at scale, leak sensitive information, and create dependency on outputs that were never validated. Their exposure lies in hidden adoption, shadow usage, and undocumented reliance inside critical functions.
Primary Exposure
Shadow AI, leakage, wrong internal decisions, undocumented employee reliance
Typical Use Cases
Compliance copilots, legal assistants, knowledge bots, operations support, document search
Use cases
Internal legal drafting support
Policy and procedure assistants
Risk and operations copilots
What breaks
Employees rely on outputs that appear authoritative, while the organisation has no real visibility into data flows, model provenance, or quality controls.
What governance must prove
Approved use boundaries, data handling rules, employee AI literacy, logging, monitoring, and documented responsibility for internal usage.
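
Approved use boundaries and data handling rules can be enforced before a prompt ever reaches a model. The gate below is a sketch under stated assumptions: the use-case names and sensitive-data patterns are illustrative, and a real deployment would add identity, logging, and DLP integration.

    import re

    # Hypothetical policy configuration; categories and patterns are
    # assumptions for illustration.
    APPROVED_USES = {"policy_lookup", "document_search", "drafting_support"}
    SENSITIVE_PATTERNS = (
        re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"),   # e.g. date-of-birth format
        re.compile(r"\bIBAN\b", re.IGNORECASE),   # e.g. account references
    )

    def copilot_gate(use_case: str, prompt: str) -> tuple[bool, str]:
        """Check a prompt against approved-use boundaries and basic data
        handling rules before it reaches the model."""
        if use_case not in APPROVED_USES:
            return False, f"use case '{use_case}' is outside approved boundaries"
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(prompt):
                return False, "prompt appears to contain sensitive data"
        return True, "allowed"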
System 05
Agentic AI Systems
Agentic systems mark the shift from response to action. They do not only generate content or recommendations; they chain tasks, call tools, retrieve data, trigger workflows, and move across interfaces with increasing autonomy. This creates a new governance threshold. The question is no longer only whether the model output was correct. The question is whether the system remained inside an approved boundary, whether intervention points existed, and whether someone can still reconstruct the action chain across multiple tools and decision nodes.
Primary Exposure
Autonomous action, chain opacity, tool misuse, boundary failure
Typical Use Cases
Workflow orchestration, internal actions, research agents, approval preparation, multi-step automation
Use cases
Case preparation agents
Multi-step research and decision support
Workflow-triggering automation
What breaks
Action chains become too complex to reconstruct, tool permissions drift, and governance still assumes a passive assistant while the system already acts like an operator.
What governance must prove
Action boundaries, approval gates, tool permissions, traceable chain logs, intervention controls, and clear responsibility for system-initiated actions.
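
Action boundaries, tool permissions, and a traceable chain log can be expressed directly in the orchestration layer. The sketch below assumes a simple three-tier permission model; the tool names and tiers are illustrative, not a standard.

    from dataclasses import dataclass, field

    # Hypothetical permission tiers per tool; all names are assumptions.
    TOOL_PERMISSIONS = {
        "search_internal_docs": "autonomous",
        "draft_email": "autonomous",
        "send_email": "requires_approval",
        "trigger_payment": "forbidden",
    }

    @dataclass
    class ActionChain:
        agent_id: str
        steps: list = field(default_factory=list)  # traceable chain log

        def request_action(self, tool: str, args: dict,
                           approver: str | None = None) -> bool:
            permission = TOOL_PERMISSIONS.get(tool, "forbidden")
            approved = (
                permission == "autonomous"
                or (permission == "requires_approval" and approver is not None)
            )
            # Every attempted step is logged, including refusals, so the
            # chain can be reconstructed across tools and decision nodes.
            self.steps.append({
                "tool": tool,
                "args": args,
                "permission": permission,
                "approved": approved,
                "approver": approver,
            })
            return approved

The design choice matters: the agent never calls a tool directly, and the chain log records what was refused as well as what was executed.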
EU AI Act — Timeline

What applies now.
What becomes operational next.

Important timing logic: Organisations should plan against the operative statutory deadline of 2 August 2026 for stand-alone high-risk AI systems unless and until any later proposal is formally adopted. System design, classification, documentation, and oversight architecture should be built before enforcement pressure arrives, not after.
2 Feb 2025
In Force
Prohibited AI Practices
The first layer of the AI Act is already active. The immediate point is not only the prohibited-practice categories themselves, but that governance expectations are no longer theoretical.
⚠ Highest penalty tier applies
All sectors · In force
2 Aug 2025
In Force
GPAI Model Obligations + AI Literacy
This matters especially for organisations deploying large language model-based chatbots, assistants, and internal copilots. Literacy and governance are now expected as operational capabilities, not optional maturity topics.
LLM systems · Providers & deployers
2 Aug 2026
Operative
Stand-alone High-Risk AI Systems
This is the critical date for relevant decision systems in credit, insurance, employment, and similar contexts where intended purpose and deployment context pull the system into Annex III logic. Classification is not automatic, but the governance burden is real where material customer effects exist.
⚠ Up to €15M or 3% global annual turnover
Credit · Insurance · Employment
2 Aug 2027
Extended
Embedded High-Risk Systems
Embedded regulated products follow a later timeline, but this does not reduce the need for governance architecture today. Delay is not control.
Product-embedded AI · Extended period
Regulated Sectors

The same system type
creates different exposure
in different industries.

A chatbot in retail, a chatbot in banking, and a chatbot in healthcare are not the same governance problem. This is why AI systems and industries must be read together.

Banking & Payments
Decision systems, fraud and AML engines, customer-facing chatbots, and internal copilots all intersect with strong control expectations, DORA pressure, and board-level scrutiny.
Credit-related AI may fall into high-risk scope
DORA intensifies ICT governance expectations
Fraud and AML systems shape operational case handling
Life & Health Insurance
Pricing and risk assessment systems can create direct regulatory relevance. Internal assistants and conversational systems create additional exposure where outputs affect customers or claims handling.
Risk assessment and pricing systems require sharper control
Customer communication systems amplify representation risk
DORA adds resilience and governance pressure
Asset Management & Investment
Recommendation logic, research assistants, internal copilots, and workflow agents can influence suitability, communication quality, governance records, and investment process integrity.
Decision support can still create board-level accountability
Agentic systems raise action-boundary questions
Copilots can shape material internal judgments
Family Businesses & SMEs
The governance challenge is often less about legal theory and more about hidden adoption: chatbots, internal assistants, and AI-driven decisions without ownership, logging, or clear boundaries.
Small size does not remove governance obligations
Shadow AI is often highest in internal assistant use
One uncontrolled system can still create major exposure
Services

Three ways to turn
AI systems into a defensible position.

01
Board AI Clearance™
An independent governance and defensibility review of AI systems before they are approved, scaled, or tolerated in critical environments.
  • Maps structural governance exposure
  • Assesses ownership, controls, and reconstructability
  • Produces a board-level written view
02
AI Systems Governance Assessment
System-specific review of chatbots, decision systems, fraud and AML systems, copilots, and agentic AI against governance, oversight, and regulatory logic.
  • System classification and exposure review
  • Logging, oversight, and change-control analysis
  • Prioritised remediation logic
03
Executive & Board Briefings
Direct board-level briefings on AI systems, decision defensibility, regulatory pressure, and what different system classes mean in practice.
  • No generic framework slides
  • Clear system-by-system logic
  • English and German
How It Works

From system identification
to defensible governance.

Identify the system
Clarify the real system category, intended purpose, deployment context, interfaces, and business effect.
Assess exposure
Review ownership, traceability, logging, oversight, third-party dependencies, and likely regulatory relevance.
Name the gaps
Show precisely where the governance path is weaker than the exposure created by the system itself.
Strengthen position
Build the basis for a more defensible approval, operation, and re-review model under business and regulatory pressure.
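
Step one usually takes the form of a system inventory. A minimal entry could look like the sketch below; the fields are assumptions aligned with the questions above, not a regulatory schema.

    from dataclasses import dataclass

    # Minimal inventory entry; field names are illustrative assumptions.
    @dataclass
    class AISystemEntry:
        name: str
        system_class: str         # chatbot, decision, fraud_aml, copilot, agentic
        intended_purpose: str
        deployment_context: str
        business_effect: str
        owner: str | None         # an empty owner field is itself a finding
        third_party_dependency: bool

An inventory like this does not solve governance, but it makes the gaps visible: the entries that cannot be completed mark where exposure concentrates.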
Board AI Clearance™

Assess your AI systems
before enforcement,
audit, or failure does.

The real question is not whether your organisation uses AI. The real question is whether the systems now shaping decisions can still be defended under scrutiny.

First step: identify which of your systems already create exposure without a stable governance and evidence architecture behind them.

Request a conversation
A direct conversation about your AI systems, your governance position, and where exposure currently concentrates.