Chatbot – AI Governance Exposure

now.digital · AI Governance · Patrick Upmann

Your chatbot
answers.
Who governs it?

The EU AI Act is in force. Certain AI systems used in credit, insurance, and employment contexts may qualify as high-risk, triggering significant compliance obligations. Boards and senior management face governance and supervisory exposure if AI use is not properly structured. I help leadership teams build defensible AI governance — before enforcement begins.

Feb 2025 · In Force
Prohibited AI practices — Art. 5
Social scoring, subliminal manipulation, biometric mass surveillance — banned and enforceable now

Aug 2025 · In Force
GPAI model obligations + AI literacy
Providers of general-purpose AI models must comply; providers and deployers must ensure AI literacy — Art. 4, Art. 53

Aug 2026 · Operative Deadline
Stand-alone high-risk AI obligations
Credit, insurance, employment AI systems in scope of Annex III must comply — subject to the Digital Simplification proposal

Aug 2027 · Extended
Product-embedded AI systems
AI embedded in regulated products (medical devices, machinery) — extended transition period under the AI Act
Legal and editorial note (April 2026): This website provides general information about the EU AI Act (Regulation (EU) 2024/1689) and related regulatory frameworks for orientation purposes only. Nothing on this page constitutes legal advice, regulatory assessment, or a compliance opinion. All statements concerning legal obligations, risk classifications, and regulatory timelines reflect publicly available information as of April 2026 and may be subject to change, particularly pending the finalisation of the Commission’s November 2025 AI Act simplification proposal (COM(2025)836). Penalty figures refer to statutory maximum ranges under Art. 99 EU AI Act. Whether any particular AI system or activity falls within the scope of these provisions depends on the specific facts and context and must be assessed by qualified legal counsel. Sources include: European Commission, EBA, BaFin, EIOPA, K&L Gates, Bird & Bird, Deloitte, Kiteworks.
The Structural Gap

Governance does not keep pace
with AI-driven decisions.

The figures below are drawn from the EU AI Act statutory text, DORA, and published research. They are provided for orientation, not as legal benchmarks applicable to any specific organisation.

All regulatory figures refer to statutory maximum ranges. Actual enforcement depends on national supervisory authority, facts of case, and mitigating factors. Survey data (Kiteworks) reflects self-reported figures from a private study population and should be read as indicative, not as authoritative regulatory data.

€35M
Statutory maximum fine for prohibited AI practices — or 7% of global annual turnover, whichever is higher
Art. 99(3) EU AI Act (Reg. (EU) 2024/1689) — applies to violations of Art. 5 from Feb 2025
€15M
Statutory maximum fine for non-compliance by providers and deployers with high-risk AI obligations — or 3% of global turnover
Art. 99(4) EU AI Act — applies from 2 August 2026 for stand-alone Annex III systems
78%
of surveyed organisations cannot validate data before it enters AI training pipelines
Kiteworks 2026 Data Security & Compliance Risk Forecast Report (private survey data — indicative)
53%
of surveyed organisations have no mechanism to recover or remove training data after an incident
Kiteworks 2026 Data Security & Compliance Risk Forecast Report (private survey data — indicative)
EU AI Act — Accurate Timeline

What applies now.
What comes next.

Note on the Commission’s November 2025 AI Act simplification proposal (COM(2025)836): The Commission has proposed adjustments to the timing for the application of high-risk AI rules, described as an adaptation of up to a maximum of 16 months. This proposal is currently under the ordinary legislative procedure and has not been adopted. The operative statutory deadline under Regulation (EU) 2024/1689 remains 2 August 2026 for stand-alone high-risk AI systems until the proposal is formally adopted. Organisations should plan against the operative statutory deadline and monitor the legislative process.
2 Feb 2025
In Force
Prohibited AI Practices — Art. 5 EU AI Act
Enforceable ban on: AI systems for government social scoring, subliminal or manipulative techniques affecting behaviour, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), AI exploiting vulnerabilities of specific groups, and emotion inference in workplace and educational settings. Violations are subject to the highest penalty tier.
⚠ Up to €35M or 7% of global annual turnover — Art. 99(3)
All Sectors · Enforceable Now
2 Aug 2025
In Force
GPAI Model Obligations + AI Literacy — Art. 4, Art. 53–55
Providers of general-purpose AI models (including large language models) must comply with transparency, copyright summary, and documentation obligations. Models with systemic risk face additional requirements. Art. 4 requires providers and deployers — not all users — to ensure appropriate AI literacy for relevant personnel. Organisations that only deploy a third-party model are not automatically GPAI providers under the Act.
⚠ Up to €15M or 3% of global annual turnover — Art. 99(4)
GPAI Providers · Providers & Deployers
2 Aug 2026
Operative
Stand-alone High-Risk AI Systems — Annex III
High-risk AI obligations apply to stand-alone systems listed in Annex III, including: creditworthiness assessment of natural persons; life and health insurance risk assessment and pricing in relation to natural persons; AI used in recruitment, selection, promotion, task allocation, or termination; biometric categorisation in defined contexts. Obligations include: conformity assessment, risk management system, data governance measures, technical documentation, human oversight mechanisms, transparency toward users, EU database registration. Whether a specific system qualifies depends on its intended purpose and deployment context — classification is not automatic.
⚠ Up to €15M or 3% of global annual turnover — Art. 99(4)
Banking · Life & Health Insurance · HR Systems · Biometrics
2 Aug 2027
Extended
High-Risk AI Embedded in Regulated Products
AI systems embedded in products already subject to EU product safety legislation (medical devices, machinery, vehicles, toys) have an extended compliance period to 2 August 2027. Large-scale IT systems listed in Annex X: compliance deadline 31 December 2030. If the Commission’s simplification proposal (COM(2025)836) is adopted, the stand-alone high-risk deadline may be extended by up to 16 months — pending legislative adoption.
Medical Devices · Machinery · Annex X IT Systems
Regulated Sectors

Banks. Insurers.
Asset managers.
Governance exposure is real.

Certain AI systems deployed in financial services may qualify as high-risk under Annex III of the EU AI Act, and financial institutions are simultaneously subject to DORA. The interaction of both regimes requires coordinated governance — not sequential compliance.

Banking & Payments
AI systems used to assess creditworthiness of natural persons are explicitly listed in Annex III(5)(b) of the EU AI Act as high-risk. DORA has been in force since January 2025, covering banks and payment institutions. BaFin published guidance in December 2025 clarifying that AI-related risks must be integrated into existing ICT risk governance frameworks under DORA. The EBA confirmed in its November 2025 factsheet that AI Act and banking regulations are complementary and must be read together.
Annex III(5)(b): creditworthiness assessment of natural persons = high-risk
DORA (Reg. 2022/2554): in force for banks since 17 January 2025
BaFin guidance (Dec 2025): AI risks to be integrated into ICT risk management
EBA factsheet (Nov 2025): AI Act is complementary to banking regulation
Life & Health Insurance
Annex III(5)(c) of the EU AI Act explicitly covers AI used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. This is a precise scope — not all insurance AI is covered, and the classification depends on the intended purpose and deployment context. DORA applies to insurers and reinsurers as of January 2025, with EIOPA and national insurance supervisory authorities as relevant oversight bodies.
Annex III(5)(c): life and health insurance risk assessment & pricing = high-risk (natural persons)
DORA: applies to insurers since 17 January 2025 — EIOPA oversight
Art. 13 EU AI Act: transparency obligations toward affected persons
High-risk classification depends on intended purpose — not product label alone
Asset Management & Investment
Investment firms are subject to DORA since January 2025 and must assess each AI system’s classification under the EU AI Act. ESMA has issued guidance on AI use in investment services, and MiFID II suitability and conduct obligations apply to AI-supported investment recommendations. Boards using AI in material investment decisions carry governance obligations that cannot be delegated — the AI Act does not eliminate accountability by inserting a system between the decision and the decision-maker.
DORA: investment firms in scope since January 2025
ESMA guidance: AI use in investment services subject to conduct obligations
MiFID II: suitability and organisational requirements apply alongside AI Act
Board accountability: governance obligations non-delegable
Family Businesses & SMEs
The EU AI Act applies based on role (provider, deployer) and the AI system’s intended purpose — not company size. SMEs benefit from a penalty cap (the lower of the fixed sum and the turnover percentage applies) and from simplified conformity procedures for certain systems, but governance obligations — risk management, documentation, human oversight — apply without size-based exemption. AI used in HR decisions or customer interactions may trigger deployer obligations under Annex III, depending on the specific use case.
Art. 99(7): SME penalty cap — the lower of the fixed sum and the turnover percentage applies
Governance obligations (risk management, oversight) apply regardless of size
HR and customer AI: deployer obligations may apply — context-dependent
Classification must be assessed per use case — not assumed from product description
Services

Three ways toward
a defensible position.

01
Board AI Clearance™
An independent governance and defensibility review of the approval path — before your organisation approves, scales, or tolerates AI in a business-critical or regulated context. Not legal advice. A structured assessment of accountability, traceability, and intervention capability at leadership level.
  • Identifies where governance accountability actually sits
  • Assesses whether decisions can be assigned, traced, and challenged
  • Maps structural exposure before board-level escalation
  • Produces a written report for executive and board use
  • Does not replace legal counsel — complements it
02
Chatbot Governance Assessment
Structured review of your AI chatbot systems against the EU AI Act, DORA, and sector-specific regulation. Identifies likely risk classification, governance gaps, and relevant obligations as provider or deployer. Results in a prioritised action plan — not a compliance certificate.
  • EU AI Act scope and risk classification analysis (Annex III)
  • DORA applicability and ICT risk integration check
  • Audit trail and traceability assessment
  • Human oversight mechanisms review
  • Prioritised remediation plan with deadline mapping
03
Board & Executive Keynotes
Board briefings, supervisory board workshops, and executive keynotes — focused on the Control–Liability Paradox, accountability under AI governance frameworks, and what the EU AI Act means for leadership in practice. Based on published research. No framework theory. Direct and board-ready.
  • Tailored to supervisory boards, C-suite, and owners
  • Based on the Control–Liability Paradox paper (2026)
  • English and German
  • In-house or conference format
How It Works

No frameworks. No generic audits.
Focused, contextual analysis.

Understand context
Brief initial conversation — the AI system, its intended purpose, your role (provider or deployer), the decision environment, and applicable regulatory context.
Assess exposure
Structured review against EU AI Act, DORA, and sector-specific regulation: accountability, traceability, oversight mechanisms, documentation readiness.
Name the gaps
Clear written report: where your governance path is weaker than the responsibility attached to it. Prioritised. Referenced. Actionable.
Strengthen the position
The basis for a more defensible approval — or an honest recommendation to address gaps before proceeding. Independent. Board-ready. No conflicts of interest.
About

For leadership that does not want
to discover governance gaps
after the fact.

My work begins where accountability becomes personal. Not at the system level. At the level of the person who approved it, who signed off on it, who was responsible for overseeing it.

I do not work with organisations looking for a compliance checkbox. I work with boards and executive teams who need to genuinely understand whether their AI governance is structurally sound — and whether the decisions attached to it can be defended if challenged.

Built the frameworks. Wrote the research. Defending the positions.

“AI governance began to matter when accountability crossed the threshold from systems to people. This threshold defines where my work begins.”

— Patrick Upmann
Supervisory Boards & Non-Executive Directors
Oversight responsibilities under corporate and supervisory law do not disappear when an AI system makes the recommendation. Governance clarity is a fiduciary matter.
Banks & Financial Institutions
Credit AI may qualify as high-risk under Annex III. DORA requires ICT risk integration. BaFin has published its expectations. The August 2026 deadline is the operative date.
Life & Health Insurers
Risk assessment and pricing AI for natural persons in life and health insurance is named in Annex III. DORA applies. EIOPA is the relevant supervisory framework.
Family Businesses & Owner-Managed Companies
No internal compliance infrastructure. AI in HR, customer service, or creditworthiness contexts. The same governance obligations apply — the legal exposure is real regardless of size.
Legal & Compliance Teams
Who need an independent external perspective to support board-level communication, accountability mapping, and defensibility documentation alongside qualified legal counsel.
Board AI Clearance™

Assess your governance
position before
enforcement does.

The EU AI Act is in force. The high-risk deadline is 2 August 2026 — the operative statutory date. Enforcement follows. The question is whether your governance structure is stronger than the exposure attached to it.

A Board AI Clearance review typically involves an initial conversation followed by a structured assessment and written report. It does not constitute legal advice and does not replace qualified legal counsel in your jurisdiction.

Request a conversation
No generic intake forms. A direct conversation about your specific context. Response within 48 hours.