Banks – AI Governance Architect


Banking AI Governance — Master Matrix
Layer 1 – LLM (e.g. GPT-4, Claude)
Banking use case: Customer chat, advisory, document analysis
Decision impact: Indirect
EU AI Act classification: GPAI / Limited Risk
Article / Annex: Art. 52 + GPAI Regulation
Deadline: 2025–2026
Primary risk: Hallucination, misinformation
Regulatory exposure: High
Mandatory controls: Output filter · Logging · Prompt control · Disclosure obligation
Defensibility score: 2/5

Layer 2 – RAG System (Retrieval-Augmented Generation)
Banking use case: Internal data, policies, regulatory Q&A
Decision impact: Indirect → high
EU AI Act classification: Depends on use case
Article / Annex: Art. 10, 15
Deadline: 2026
Primary risk: False/stale data, data leak
Regulatory exposure: High
Mandatory controls: Data governance · Access control · Versioning · Audit logging
Defensibility score: 2/5

Layer 3 – Decision Model (ML scoring, rule engine)
Banking use case: Credit, fraud detection, AML
Decision impact: Direct
EU AI Act classification: HIGH RISK
Article / Annex: Art. 6 + Annex III (No. 5b)
Deadline: Aug 2026
Primary risk: Wrong decision, discrimination
Regulatory exposure: Very high
Mandatory controls: Model validation · Human review · Bias testing · Explainability (XAI) · Conformity assessment
Defensibility score: 3/5

Layer 4 – Chatbot (E2E) (End-to-end advisory system)
Banking use case: Customer interaction, product recommendation
Decision impact: Decision-influencing
EU AI Act classification: HIGH RISK (de facto)
Article / Annex: Art. 6, 9, 12 + GDPR Art. 22
Deadline: Aug 2026
Primary risk: Misdirection, manipulation, liability
Regulatory exposure: Extreme
Mandatory controls: Decision logging · Audit trail · Human review · Opt-out right
Defensibility score: 1/5

Layer 5 – Prompt / Orchestration (Agentic layer, LLM chains)
Banking use case: System control, multi-step processes
Decision impact: Indirect → systemic
EU AI Act classification: Art. 9 (implicit)
Article / Annex: Art. 9, 17
Deadline: 2026
Primary risk: Prompt injection, jailbreak, escalation
Regulatory exposure: High
Mandatory controls: Versioning · Governance · Injection tests · Sandbox isolation
Defensibility score: 1/5

Layer 6 – Guardrails (Output filter, safety layer)
Banking use case: Filter for all model outputs
Decision impact: Indirect (protective)
EU AI Act classification: Art. 15 (Robustness)
Article / Annex: Art. 15, 9
Deadline: 2026
Primary risk: Circumvention, false positives/negatives
Regulatory exposure: Medium
Mandatory controls: Monitoring · Red-team testing · Alerting · Regular updates
Defensibility score: 2/5

Layer 7 – Vendor (Cloud AI) (Third-party models)
Banking use case: External models, API services
Decision impact: Systemic
EU AI Act classification: GPAI + DORA
Article / Annex: GPAI Reg. + DORA Art. 28–44
Deadline: 2025–2026
Primary risk: Outsourcing risk, dependency
Regulatory exposure: High
Mandatory controls: Vendor risk mgmt · Exit strategy · SLA monitoring · Data protection audit
Defensibility score: 1/5

Layer 8 – Embeddings / Vector Data (Semantic search layer)
Banking use case: Similarity search, customer segmentation
Decision impact: Indirect → high
EU AI Act classification: Depends on use case
Article / Annex: Art. 10 + GDPR
Deadline: 2026
Primary risk: Data leak, re-identification, bias
Regulatory exposure: High
Mandatory controls: DPIA · Anonymisation · Access control
Defensibility score: 2/5

Layer 9 – Monitoring / Observability (MLOps, drift detection)
Banking use case: Model surveillance, performance control
Decision impact: Preventive / controlling
EU AI Act classification: Art. 9, 72 (Post-market)
Article / Annex: Art. 9, 72, 73
Deadline: 2026
Primary risk: Model drift, undetected errors
Regulatory exposure: Medium
Mandatory controls: Drift alerting · Performance dashboards · Incident response
Defensibility score: 3/5
Defensibility score: 1 = Critical gap · 2 = Partially covered · 3 = Solid baseline · 4 = Well defended · 5 = Fully compliant
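The matrix above can also be read as a machine-checkable inventory. The following Python sketch is illustrative only: the `AISystem` class, field names, and the remediation threshold are assumptions, while the two example rows are transcribed from the matrix.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One row of the governance master matrix."""
    layer: int
    component: str
    classification: str
    deadline: str
    mandatory_controls: list
    defensibility: int  # 1 (critical gap) .. 5 (fully compliant)

# Two example rows transcribed from the matrix.
INVENTORY = [
    AISystem(3, "Decision Model", "HIGH RISK", "Aug 2026",
             ["Model validation", "Human review", "Bias testing",
              "Explainability (XAI)", "Conformity assessment"], 3),
    AISystem(4, "Chatbot (E2E)", "HIGH RISK (de facto)", "Aug 2026",
             ["Decision logging", "Audit trail", "Human review",
              "Opt-out right"], 1),
]

def remediation_queue(inventory, threshold=3):
    """High-risk systems whose defensibility score falls below threshold."""
    return [s.component for s in inventory
            if "HIGH RISK" in s.classification and s.defensibility < threshold]

print(remediation_queue(INVENTORY))  # the end-to-end chatbot scores 1/5
```

With the default threshold of 3, only the end-to-end chatbot (score 1/5) lands in the remediation queue; raising the threshold to 4 would also flag the decision model.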

AIGN OS · AI Governance Architecture

From AI Systems to Decision Defensibility

AI Governance is no longer a documentation exercise. It is a systems architecture challenge. This model shows how organisations connect AI systems, decisions, controls, ownership and audit evidence.

Source & Integration

All AI systems, APIs and tools are connected and made visible.

AI Inventory

All AI use cases are centrally registered and classified.

Decision Trace

Every AI-driven action becomes traceable and reconstructable.
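One way to make a decision trace reconstructable is an append-only log in which every record hashes its predecessor, so any after-the-fact edit is detectable. This is a minimal sketch; the record fields and function names are assumptions, not part of the AIGN OS model.

```python
import hashlib
import json
import time

def append_decision(log, system, inputs, output, model_version):
    """Append a tamper-evident decision record; each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; any edited or reordered record breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != recomputed:
            return False
        prev = rec["hash"]
    return True
```

Verification recomputes each record's hash from its body, so changing even one stored field after the fact makes `verify_chain` return False.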

Control Mapping

System behaviour is mapped to governance rules and compliance requirements.
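In code, control mapping can be as simple as a lookup from an EU AI Act classification (as used in the matrix above) to the control set that must be evidenced. The mapping below is a sketch: the control names are taken from the matrix, but the structure and the fallback behaviour are assumptions, not statutory text.

```python
# Illustrative mapping from classification label to required controls.
CONTROL_MAP = {
    "HIGH RISK": {           # Art. 6 + Annex III systems
        "Model validation", "Human review", "Bias testing",
        "Explainability (XAI)", "Conformity assessment",
    },
    "GPAI / Limited Risk": {  # Art. 52 + GPAI obligations
        "Output filter", "Logging", "Prompt control",
        "Disclosure obligation",
    },
}

def required_controls(classification):
    """Look up the control set; unknown classes trigger a manual review."""
    for key, controls in CONTROL_MAP.items():
        if key in classification:
            return controls
    return {"Manual classification review"}
```

Substring matching lets "HIGH RISK (de facto)" inherit the full high-risk control set, while anything unmapped ("Depends on use case") falls through to a manual review rather than silently passing.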

Gap Detection

Missing controls and risks are automatically identified.
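Once required controls are expressed as sets, gap detection reduces to a set difference between what is mandated and what is evidenced. A minimal sketch, using layer 4 of the matrix (the end-to-end chatbot) as an assumed example of a partially controlled system:

```python
def detect_gaps(required, implemented):
    """Controls that are mandated but not evidenced for a system."""
    return sorted(set(required) - set(implemented))

# Layer 4 from the matrix: classified HIGH RISK (de facto), but only
# two of its four mandatory controls are assumed to be in place.
required = {"Decision logging", "Audit trail", "Human review", "Opt-out right"}
implemented = {"Decision logging", "Audit trail"}
print(detect_gaps(required, implemented))  # ['Human review', 'Opt-out right']
```

Each detected gap then feeds the next step of the model: assignment to a responsible owner.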

Ownership

Each issue is assigned to a responsible owner.

Audit & Defensibility

All decisions are documented and defensible under audit.
