Industries – AI Governance

Patrick Upmann · AI Governance by Industry · Board Level

Every industry
uses AI differently.
Every industry
fails differently.

A pharma company’s AI governance challenge is not the same as a bank’s. An automotive OEM faces different exposure than an insurer or an e-commerce platform. Industry context shapes which systems are relevant, which regulations apply, what supervisory bodies expect, and where the real defensibility pressure sits. This page maps AI governance across twelve industries — sector by sector.

12
Industries covered — each with distinct AI governance logic
Finance, Pharma, Auto, Insurance, eCommerce, Healthcare, Energy, Logistics, Public Sector, Manufacturing, Telecom, Media.
Scope
Aug
2026
EU AI Act high-risk obligations become operative for key sectors
Credit, insurance, employment, biometrics, critical infrastructure — all sector-specific.
Deadline
€35M
Maximum fine tier — sector does not reduce exposure
The penalty logic applies regardless of industry. What differs is where the specific exposure concentrates.
Penalty
1
Core question — the same across all sectors
Can a material AI-influenced decision still be explained, reconstructed, and defended under scrutiny?
Logic
Note (April 2026): This page provides general AI governance orientation by industry sector. It does not constitute legal advice, regulatory certification, or a compliance assessment. Regulatory classification is always context-dependent and must be assessed by qualified legal counsel.
Cross-Industry Pressure

AI governance is not
a generic exercise.
Industry context is everything.

The same AI system type — a chatbot, a decision engine, a fraud filter — creates different regulatory exposure, different explainability pressure, and different board accountability depending on the sector it operates in. This is why generic AI governance frameworks fail: they ignore the sector-specific logic that determines what actually needs to be controlled.

12
Industry sectors with distinct AI governance profiles mapped on this page — from banking to logistics.
Scope: Finance, Pharma, Auto, Insurance, eCommerce, Healthcare, Energy, Logistics, Public Sector, Manufacturing, Telecom, Media.
€35M
Maximum EU AI Act fine tier applies across sectors. Industry does not reduce the penalty logic — it only changes where exposure concentrates.
Regulation (EU) 2024/1689, Art. 99.
Aug
2026
Key operative deadline for high-risk AI systems. Sector determines which systems are in scope — not the deadline itself.
Stand-alone systems under Annex III of the EU AI Act.
1
One governance question cuts across all sectors: can a relevant AI-influenced decision be explained, reconstructed, and defended under pressure?
Board-level defensibility logic.
Industry Profiles

Select your sector.
See the governance logic
that applies to you.

Each profile shows the relevant AI system types, primary governance exposure, regulatory context, and the core defensibility question for that industry.

🏦
Industry 01 · Financial Services
Banking & Payments
Banks operate the densest concentration of AI systems with direct economic and customer impact. Credit, AML, fraud, onboarding, customer service, and internal copilots all converge in one heavily supervised environment. The governance challenge is not deploying AI — it is proving every relevant decision can still be explained and defended.
Regulatory Pressure
EU AI Act · DORA · EBA Guidelines · CRR/CRD · GDPR · MiFID II
Supervisory Bodies
ECB · BaFin · FCA · National Competent Authorities
Risk Level
Critical
Key AI systems
Credit scoring & underwriting support
AML transaction monitoring
Fraud detection & case triage
Customer chatbots & onboarding AI
Internal compliance copilots
KYC automation & identity verification
Primary exposure
Credit decisions without explainability
AML systems with opaque escalation logic
Shadow AI in compliance functions
Vendor model changes without re-review
Customer impact from non-defensible scoring
DORA-relevant ICT AI dependency
What governance must prove
Owner, classification, intended purpose per system
Logging, audit trail, model change tracking
Human oversight at material decision points
Explainability for customer-relevant outputs
Third-party AI vendor control logic
Re-review trigger after model or data changes
EU AI Act Annex III · DORA Art. 28–30 · EBA AI Guidelines · GDPR Art. 22 · CRR Model Risk
Request banking AI review →
🛡
Industry 02 · Insurance
Insurance & Reinsurance
Insurance AI governance centres on pricing systems, risk assessment models, and claims handling automation — all of which can create direct customer impact and may fall into EU AI Act high-risk scope. The central question is whether pricing logic, underwriting support, and claims AI can be explained and defended when a customer or regulator challenges the outcome.
Regulatory Pressure
EU AI Act · DORA · Solvency II · GDPR · IDD · EIOPA Guidelines
Supervisory Bodies
EIOPA · BaFin · FCA · National Insurance Supervisors
Risk Level
Critical
Key AI systems
Pricing & risk assessment models
Underwriting decision support
Claims triage & fraud detection
Customer communication chatbots
Policy recommendation systems
Internal actuarial copilots
Primary exposure
Pricing AI affecting customer access to insurance
Claims denials influenced by opaque models
Risk assessment without auditable data trail
Customer communication producing wrong assurances
Fairness exposure in health/life underwriting AI
Vendor model drift in pricing engines
What governance must prove
Classification of pricing AI under Annex III logic
Explainability of pricing and risk decisions
Fairness monitoring for protected characteristics
Claims AI output linked to human review
Data provenance and model change tracking
Customer rights under GDPR Art. 22
EU AI Act Annex III · Solvency II · GDPR Art. 22 · EIOPA AI Guidelines · DORA
Request insurance AI review →
💊
Industry 03 · Pharmaceutical
Pharma & Life Sciences
Pharma AI governance intersects with GxP validation requirements, clinical data integrity, pharmacovigilance automation, and supply chain AI — all operating under the parallel pressure of the EU AI Act and sector-specific regulatory frameworks. The defining challenge: AI in clinical or drug safety contexts must be traceable to the level regulators expect in inspections.
Regulatory Pressure
EU AI Act · EMA Guidelines · GxP / CSV / CSA · GDPR · ICH E6 · MDR
Supervisory Bodies
EMA · FDA · BfArM · National Competent Authorities
Risk Level
Critical
Key AI systems
Drug discovery & molecule screening AI
Clinical trial data analysis
Pharmacovigilance & signal detection
Regulatory submission assistance
Supply chain & demand forecasting AI
Internal scientific copilots & document AI
Primary exposure
AI-generated clinical insights without GxP validation
Pharmacovigilance AI with opaque signal logic
Regulatory submissions containing AI-assisted content
Scientific copilots spreading unvalidated conclusions
Missing audit trail for AI-touched clinical data
CSV/CSA gaps in quality-critical AI systems
What governance must prove
GxP validation status of each AI system
Data integrity and audit trail in clinical AI
Human expert review of AI-generated outputs
Qualified oversight for pharmacovigilance AI
Change control for model updates in quality-critical systems
Documentation readiness for regulatory inspection
EU AI Act · GxP / CSV / CSA · EMA AI Reflection Paper · ICH E6(R3) · GDPR
Request pharma AI review →
🏥
Industry 04 · Healthcare & Medical Devices
Healthcare & Hospitals
Healthcare AI governance is defined by patient safety, clinical validity, and the overlapping pressure of the EU AI Act, the Medical Devices Regulation, and GDPR’s special category data rules. Diagnostic support AI, triage algorithms, and clinical decision tools require a governance layer that can survive scrutiny from medical regulators, data protection authorities, and courts.
Regulatory Pressure
EU AI Act · MDR 2017/745 · GDPR Art. 9 · EHDS · CE Marking
Supervisory Bodies
BfArM · ANSM · MHRA · Notified Bodies · Data Protection Authorities
Risk Level
Critical
Key AI systems
Diagnostic image analysis AI
Clinical decision support systems
Patient triage & risk scoring
Electronic health record (EHR) AI
Administrative & scheduling AI
Medical device software with AI components
Primary exposure
Diagnostic AI used without CE/MDR compliance
Clinical decisions shaped by non-validated models
Triage AI with opaque prioritisation logic
Health data processed by uncontrolled AI vendors
GDPR Art. 9 violations in AI-assisted profiling
Missing clinical validation records for AI tools
What governance must prove
MDR classification and CE marking status
Clinical validation for diagnostic AI
GDPR-compliant data flows for health data
Human clinician responsibility and override logic
EU AI Act Annex III classification check
Post-market surveillance for AI-driven medical tools
EU AI Act Annex III · MDR 2017/745 · GDPR Art. 9 · EHDS · ISO 14971
Request healthcare AI review →
🚗
Industry 05 · Automotive & Mobility
Automotive & Mobility
Automotive AI governance is expanding from embedded vehicle systems to supply chain optimisation, HR and recruitment AI, and connected vehicle data processing. The sector faces a dual pressure: product safety and type approval for in-vehicle AI, and the same enterprise AI governance obligations that apply to all large organisations. Product liability and AI governance now converge directly.
Regulatory Pressure
EU AI Act · UNECE WP.29 · GDPR · Product Liability Directive · Cyber Resilience Act
Supervisory Bodies
KBA · Type Approval Authorities · Data Protection Authorities
Risk Level
High
Key AI systems
ADAS & autonomous driving AI
Predictive maintenance & quality AI
Supply chain & demand forecasting
Recruitment & HR screening tools
Connected vehicle data processing
Internal manufacturing copilots
Primary exposure
ADAS/autonomous AI without type approval alignment
HR screening AI under Annex III high-risk scope
Vehicle data AI with GDPR compliance gaps
Supply chain AI without traceability
Product liability exposure from AI-influenced safety decisions
Manufacturing copilots with no documented oversight
What governance must prove
UNECE WP.29 CSMS/SUMS alignment for in-vehicle AI
Annex III classification for HR and screening AI
Product liability traceability for safety-relevant AI
GDPR compliance for connected vehicle data
Quality system integration for AI in manufacturing
Cyber Resilience Act readiness for AI components
EU AI Act · UNECE WP.29 · Product Liability Directive · Cyber Resilience Act · GDPR
Request automotive AI review →
🛒
Industry 06 · eCommerce & Retail
eCommerce & Retail
eCommerce and retail AI governance is defined by the intersection of personalisation systems, pricing AI, recommendation engines, and the EU’s Digital Services Act and AI Act. The scale of customer exposure — millions of interactions per day — means that even small governance gaps can create systemic conduct risk or regulatory exposure. Dark patterns, profiling, and dynamic pricing all attract specific legal attention.
Regulatory Pressure
EU AI Act · DSA · GDPR · Unfair Commercial Practices Directive · Consumer AI rules
Supervisory Bodies
European Commission · Data Protection Authorities · National Consumer Authorities
Risk Level
High
Key AI systems
Recommendation & personalisation engines
Dynamic pricing & demand forecasting AI
Customer service chatbots
Fraud detection & payment risk AI
Search ranking & content curation
Returns prediction & logistics optimisation
Primary exposure
Personalisation AI exploiting vulnerable consumers
Dynamic pricing creating unfair commercial practices
Profiling AI without GDPR legal basis
Chatbots creating binding representations
DSA recommender system transparency obligations
Dark patterns enabled or amplified by AI
What governance must prove
DSA-compliant recommender system transparency
GDPR lawful basis for personalisation profiling
Pricing AI fairness and non-discrimination controls
Chatbot escalation and output control logic
Fraud AI explanation for account restrictions
Consumer-facing AI disclosure obligations
EU AI Act · Digital Services Act · GDPR · Unfair Commercial Practices Dir. · Consumer Rights Dir.
Request eCommerce AI review →
⚡
Industry 07 · Energy & Utilities
Energy & Utilities
Energy sector AI sits at the intersection of critical infrastructure protection obligations, grid stability systems, and the NIS2 resilience requirements applying to operators of essential services. AI governance in energy is not primarily about commercial risk — it is about systemic resilience, operational safety, and the political sensitivity of infrastructure failure.
Regulatory Pressure
EU AI Act (critical infrastructure) · NIS2 · GDPR · Internal Electricity Market Directive
Supervisory Bodies
ACER · National Energy Regulators · Cybersecurity Authorities (NIS2)
Risk Level
High
Key AI systems
Grid management & load forecasting AI
Predictive maintenance for infrastructure
Anomaly detection & cybersecurity AI
Energy trading & price optimisation
Customer billing & demand response AI
Renewable energy output prediction
Primary exposure
Grid AI failure classified as critical infrastructure risk
NIS2 obligations for AI in essential service systems
Trading AI without explainable decision logic
Predictive maintenance AI with uncontrolled vendors
Cybersecurity AI with opaque detection thresholds
Customer AI systems without GDPR-compliant basis
What governance must prove
EU AI Act Annex III critical infrastructure classification
NIS2-compliant risk management for AI in OES systems
Human oversight for grid-critical AI decisions
Incident response for AI-related operational failures
Third-party AI vendor security and resilience controls
Audit readiness for energy regulator inquiries
EU AI Act Annex III · NIS2 Directive · GDPR · Cyber Resilience Act · ACER Regulations
Request energy AI review →
🚚
Industry 08 · Logistics & Transport
Logistics & Transport
Logistics AI governance sits at the intersection of workforce management AI, route optimisation, predictive demand systems, and increasingly autonomous vehicle and warehouse robotics. The key governance challenge is the combination of employment-related AI (which may trigger Annex III obligations), safety-critical automation, and cross-border data flows that span multiple jurisdictions.
Regulatory Pressure
EU AI Act · Labour Law · GDPR · ADR (transport) · Worker Monitoring Rules
Supervisory Bodies
Labour Authorities · Data Protection Authorities · Transport Regulators
Risk Level
High
Key AI systems
Route & delivery optimisation AI
Worker monitoring & performance scoring
Demand forecasting & warehouse AI
Autonomous vehicle & robot systems
Customs & compliance screening AI
Supplier risk & disruption detection
Primary exposure
Worker monitoring AI triggering Annex III obligations
Performance scoring with discriminatory impact
Autonomous systems without safety validation
Cross-border data flows without adequate safeguards
AI-driven dismissal or scheduling without transparency
Customs AI with opaque risk classification logic
What governance must prove
Annex III check for employment & workforce AI
Worker information and transparency obligations
Safety validation for autonomous operational systems
GDPR-compliant worker monitoring data flows
Appeals and override logic for AI-driven decisions
Third-country data transfer safeguards
EU AI Act Annex III · GDPR · Platform Work Directive · Labour Law · Cyber Resilience Act
Request logistics AI review →
🏛
Industry 09 · Public Sector & Government
Public Sector
Public sector AI governance faces the highest democratic accountability burden of any sector. Administrative AI, law enforcement AI, and social benefit systems carry direct fundamental rights implications. The EU AI Act places public administration AI under some of its most demanding requirements, and national administrative law adds additional layers of procedural obligation that do not apply in the private sector.
Regulatory Pressure
EU AI Act (highest scrutiny) · Charter of Fundamental Rights · GDPR · Administrative Procedure Law
Supervisory Bodies
Data Protection Authorities · Parliamentary Oversight · Courts · AI Office
Risk Level
Critical
Key AI systems
Administrative decision support AI
Benefits eligibility & fraud detection
Law enforcement & border control AI
Biometric identification systems
Predictive policing & crime analysis
Citizen services chatbots & portals
Primary exposure
Biometric AI restricted or prohibited under EU AI Act
Predictive policing AI with fundamental rights impact
Benefits AI without adequate human review
Administrative AI decisions without legal basis
Algorithmic discrimination in public services
Procurement of AI from vendors without governance clarity
What governance must prove
Fundamental rights impact assessment
Legal basis for each administrative AI application
Human review for decisions affecting citizen rights
Prohibited practice check (Art. 5 EU AI Act)
Transparency and appeal rights for affected citizens
Procurement obligations for AI vendors
EU AI Act Art. 5 & Annex III · EU Charter Art. 21–22 · GDPR · Law Enforcement Directive · Administrative Procedure Law
Request public sector AI review →
🏭
Industry 10 · Manufacturing & Industry
Manufacturing
Manufacturing AI governance is shaped by quality system obligations (ISO 9001, IATF, GMP), safety standards, and the Machinery Regulation’s coverage of AI-embedded equipment. Enterprise AI adds a second layer: HR screening, internal copilots, and supply chain AI all carry standard governance obligations. The key challenge is integrating AI governance into existing quality management systems rather than running the two in parallel.
Regulatory Pressure
EU AI Act · Machinery Regulation · Product Liability · GDPR · ISO 9001 · IATF
Supervisory Bodies
Market Surveillance Authorities · Notified Bodies · Labour Authorities
Risk Level
High
Key AI systems
Quality control & defect detection AI
Predictive maintenance systems
Production planning & scheduling AI
Worker safety & ergonomics AI
Supply chain & sourcing optimisation
AI-embedded machinery & cobots
Primary exposure
AI in machinery without Machinery Regulation compliance
Quality AI without quality system integration
Worker monitoring AI triggering Annex III
Predictive maintenance AI with uncontrolled vendors
Product liability from AI-influenced quality decisions
HR & recruitment AI without governance controls
What governance must prove
Machinery Regulation CE compliance for AI-embedded equipment
Integration of AI into quality management system
Change control for AI in production-critical systems
Safety risk assessment for human-robot collaboration
Annex III check for employment-related AI
Product liability traceability for AI in quality decisions
EU AI Act · Machinery Regulation 2023/1230 · Product Liability Dir. · ISO 9001 / IATF · GDPR
Request manufacturing AI review →
📡
Industry 11 · Telecom & Technology
Telecom & Technology
Telecom and technology companies face a dual governance challenge: as AI deployers in their own operations, and as infrastructure providers whose systems underpin other sectors’ AI. Network management AI, customer analytics, and churn prediction systems carry enterprise obligations, while platform-level AI may trigger the Digital Services Act’s systemic risk rules in parallel with the AI Act.
Regulatory Pressure
EU AI Act · DSA · GDPR · ePrivacy · NIS2 · European Electronic Communications Code
Supervisory Bodies
BEREC · NRAs · Data Protection Authorities · European Commission (DSA)
Risk Level
High
Key AI systems
Network management & fault prediction AI
Customer analytics & churn prediction
Fraud detection & SIM-swap prevention
Content moderation & spam detection
Pricing & plan recommendation AI
Internal productivity & support copilots
Primary exposure
ePrivacy violations in AI-driven analytics
Profiling for commercial targeting without clear basis
Content moderation AI under DSA Art. 34 risk rules
NIS2 obligations for network-critical AI systems
Recommendation AI with systemic risk implications
Employee-facing AI without internal governance
What governance must prove
ePrivacy-compliant basis for analytics and profiling
DSA systemic risk assessment for large platforms
NIS2 resilience controls for network AI
Content moderation AI transparency reporting
Customer data AI explainability and portability
Annex III check for employment AI in tech companies
EU AI Act · DSA · ePrivacy Directive · NIS2 · GDPR
Request telecom AI review →
📷
Industry 12 · Media & Platforms
Media & Digital Platforms
Media and platform AI governance is shaped by three converging pressures: the EU AI Act’s rules on manipulation and subliminal influence, the Digital Services Act’s systemic risk obligations for very large platforms, and the European Media Freedom Act. AI-generated content, recommendation engines, and personalisation systems all carry disclosure, transparency, and risk assessment obligations that are now legally enforceable — not voluntary.
Regulatory Pressure
EU AI Act Art. 50 · DSA · EMFA · GDPR · Copyright Directive
Supervisory Bodies
European Commission · Digital Services Coordinators · ERGA · Data Protection Authorities
Risk Level
High
Key AI systems
Recommendation & content curation AI
AI-generated content tools (text, image, video)
Personalisation & engagement optimisation
Automated content moderation
Deepfake detection & synthetic media tools
Ad targeting & audience segmentation AI
Primary exposure
AI-generated content without mandatory disclosure
Recommendation AI using subliminal manipulation (Art. 5)
DSA systemic risk for VLOP recommendation systems
Deepfake/synthetic media without labelling
Ad targeting AI with discriminatory micro-targeting
Copyright exposure from AI training and generation
What governance must prove
Art. 50 disclosure for AI-generated content
DSA Art. 34 systemic risk assessment (VLOPs)
Recommender system transparency and opt-out
Synthetic media labelling obligations
Prohibited manipulation check under Art. 5
Copyright compliance for AI training datasets
EU AI Act Art. 5 & 50 · DSA · EMFA · GDPR · Copyright Directive
Request media AI review →
Governance Matrix

Cross-industry AI risk overview.

A comparative view of where key AI governance dimensions sit across industries. This is orientation logic, not a legal assessment.

Industry · EU AI Act Risk · Customer Impact · Regulatory Density · Employment AI · Autonomous Systems · Data Sensitivity
Banking & Payments · Critical · Very High · Very High · Medium · Emerging · Very High
Insurance · Critical · Very High · High · Medium · Low · High
Pharma & Life Sciences · Critical · High · Very High · Medium · Emerging · Very High
Healthcare · Critical · Very High · Very High · Medium · Growing · Very High
Automotive & Mobility · High · High · High · High · Very High · High
eCommerce & Retail · High · Very High · Medium–High · Medium · Growing · High
Energy & Utilities · Critical · Medium · High · Medium · Growing · High
Logistics & Transport · High · Medium · Medium–High · Very High · Growing · High
Public Sector · Critical · Very High · Very High · High · Emerging · Very High
Manufacturing · High · Medium · Medium–High · High · High · Medium
Telecom & Technology · High · High · High · Medium · Emerging · High
Media & Platforms · High · Very High · Medium–High · Low–Medium · Low · High

Matrix reflects general governance orientation logic, not legal classification. Risk levels are approximations for executive framing purposes only.

The Governance Lifecycle

The same four steps apply
across every industry.

What changes between sectors is the classification logic, the regulatory reference frame, and the specific control expectations. What never changes is the need to identify, approve, monitor, and re-review.

Identify & Classify
Map actual AI systems in use. Classify by type, intended purpose, customer or employee impact, regulatory context, and third-party dependency.
Approve & Document
Every productive AI system needs a documented approval path — through the governance functions relevant in your industry — before go-live or before scale.
Monitor & Log
Logging, output tracking, incident escalation, human override points, and ongoing monitoring are non-optional in any regulated sector.
Re-review & Defend
Changes to model, vendor, data source, interface, or purpose must trigger renewed governance review. The system that existed at approval is not the system you have today.
Positioning

For leadership that wants to know
what AI governance means
in their specific sector.

AI governance is not the same across industries. The frameworks are shared. The exposure is sector-specific.

A generic AI governance assessment does not tell a pharma board what GxP means for AI validation. It does not tell an insurance CRO what Annex III means for their pricing engine. It does not tell a hospital whether their diagnostic AI needs MDR re-classification.

The real value is knowing where your industry’s specific exposure sits — and whether the controls you have are proportionate to that exposure, not to a generic standard.

“Every sector uses AI. Every sector has different obligations. The governance gap is always between what you have deployed and what you can actually defend in your specific regulatory context.”

— Patrick Upmann
Boards & Executive Committees
Who need to understand where AI exposure concentrates in their specific sector before scrutiny begins.
Risk, Compliance & Legal
Who need a clearer view of which systems in their sector may trigger high-risk classification, and what the governance evidence needs to look like.
Regulated sector operations leaders
In banking, insurance, pharma, healthcare, energy, and public administration — where AI is already embedded in core processes.
Companies scaling AI across industries
Who need to ensure the governance architecture keeps pace with deployment — and that sector context is reflected, not ignored.
Board AI Clearance™

Assess your AI governance
position — in your
specific sector.

The decisive question is not whether your organisation uses AI. The decisive question is whether the governance structure around that use reflects the actual regulatory pressure your sector creates.

First step: identify which systems in your industry context already create exposure without a defensible governance and evidence architecture behind them.

Request a conversation
A direct discussion about your industry, your AI systems, and where governance exposure currently concentrates.