AI-BOK Reference Architecture
=============================

This ArchiMate model is the architecture translation of the AI Body of Knowledge (AI-BOK) v1.0, written by Jan Willem van Veen (ArchiXL). The model provides a complete reference framework for AI governance and architecture, structured according to the ArchiMate 3.2 metamodel across all layers.

Elements: 668
Relations: 1056
Views: 48

Content per layer:
- Motivation: 76 principles, 16 requirements, 14 goals, 41 standards, 7 stakeholders, 10 drivers, 52 AI concepts (SKOS)
- Strategy: 12 capabilities, 5 value stream stages, 6 maturity approaches
- Business: 12 functions, 29 roles, 17 processes, 7 quality gates, 28 business objects, 88 actors/vendors
- Application: 23 ABBs, 16 services, 10 meta-model objects
- Technology: 109 SBBs (concrete products and frameworks)
- Impl/Migr: 4 roadmap phases, 3 gaps, 5 maturity levels

Core architecture: the Cognition Plane as an independent architecture layer alongside the Control Plane and Data Plane, connected via services. Three foundations: Semantic Precision, Reliable Knowledge Sources, Explicit Governance.

Views are organised in 6 folders:
- 01 Overview — Navigator, Capability Map, Layered View
- 02 Business - Roles, Lifecycle, Value Stream, RACI, Processes
- 03 Application — Three Planes, ABBs, SBBs, Application Cooperation
- 04 Motivation - Stakeholders, Principles, Requirements, Standards
- 05 Maturity - Maturity, Quality, Roadmap
- 06 Knowledge Areas - Detail per KA (KG1-KG12)

Source references: each element has a dct:source property with a page reference to the AI-BOK v1.0 publication.

Author: Jan Willem van Veen / ArchiXL
Licence: Open source, free to use as reference
Website: https://www.ai-bok.nl
Version: 1.0 (April 2026)

Create conditions under which AI systems deliver maximum value within acceptable risk boundaries. Maximum organisational value from AI investments through systematic selection and prioritisation. A coherent, repeatable and evolvable blueprint for AI capabilities.
Predictable, repeatable and governable progress of AI initiatives. Reliable, traceable and semantically anchored data foundation for AI. Reliable, reproducible and responsible model portfolio. AI interactions that are user-centred, reliable and inclusive. Reliable, observable and cost-efficient AI operations in production. Proactively identify, assess and mitigate AI-related risks. Verifiable, risk-based compliance of AI systems. Ethical considerations structurally embedded in the design and deployment of AI. AI systems produce grounded, traceable and semantically validated output. Identify AI opportunities and assess business value. Architecture, data preparation and model selection. Training, fine-tuning, prompt engineering and evaluation. Deployment, integration, monitoring and operations. Measurable business value and continuous improvement. Establishing and enforcing AI policy, principles and mandates. Selection, prioritisation and monitoring of AI initiatives. Designing and maintaining the AI reference architecture. Guiding AI initiatives through all phases of the lifecycle. Ensuring data quality, semantic precision and knowledge sources. Registration, validation and version management of AI models. Design and optimisation of AI user interactions. Deployment, monitoring and operational reliability. Identification, assessment and mitigation of AI risks. Ensuring compliance with laws and regulations. Ensuring ethical principles in AI systems. Curation and governance of knowledge sources for AI grounding. Overview of all AI initiatives with status and priority. Justification of AI initiative with expected value and risks. Curated collection of prompt templates and strategies. Register of identified AI risks with mitigation. Public register of deployed AI systems (transparency). Register of authoritative knowledge sources with classification. Register of unapproved AI applications in the organisation. Multi-year AI strategy document, reviewed annually.
Rolling 12-18 month roadmap for AI initiatives. Plan for talent, technology, data and budget. Semi-annual overview: adopt, trial, assess, hold. Catalogue of integration patterns (MCP, A2A, governance hooks). Catalogue of AI interaction patterns and guidelines. Register with authority levels and metadata per knowledge source. Organisation-wide AI policy with principles, roles and mandates. Formal specification of autonomy boundaries and escalation rules per agent. Formal document with AI policy, principles, roles and mandates. Agreement on data quality and delivery between systems. Blueprint for AI capabilities including Cognition Plane. Authoritative concept framework (SKOS/NL-SBB) for AI systems. Standardised documentation of an AI model. Documentation of a dataset per the Datasheets for Datasets standard. Checklist for EU AI Act compliance per risk class. Results of adversarial testing of AI systems. Results of bias analysis and fairness assessment. Complete inventory of components, models, data and dependencies. Documentation of architecture decisions with rationale. Documentation of ML experiments with parameters and results. Documentation of responsible decommissioning of AI systems. Matrix documenting responsibilities per AI role and KA. Report on conformity to the AI reference architecture. Per model: frequency, validation criteria, approval process. Checklist per lifecycle phase with criteria and deliverables. Protocol for responsible decommissioning of AI systems. Documentation of data provenance and transformations. Report on data quality per dataset. Protocols and escalation specifications per risk level. Report on detected data or concept drift. Incident report with root-cause analysis. Plan for AI-specific incident response. Organisation-specific ethical framework and code of conduct for AI. Per AI system and decision type. Report on currency and refresh schedule of knowledge sources. Documentation of lessons learned during evolution or decommissioning.
Strategic decision-making. Tactical quality gate reviews. Advisory on ethical dilemmas. Risk overview and acceptance. Central knowledge centre for AI expertise, best practices and reusable components. American AI company, creator of GPT-4 and ChatGPT. AI safety company, creator of Claude. Technology company, creator of open-source LLaMA models. Technology company, creator of Gemini and TensorFlow. Cloud AI platform with Vertex AI. People + AI Research initiative. Enterprise AI platform for NLP and embeddings. Technology company, Azure AI and enterprise integration. Cloud provider with Bedrock, SageMaker and AI services. GPU and AI infrastructure vendor. Framework for LLM applications, RAG and agents. Multi-agent orchestration framework. Data framework for LLM applications and RAG. Managed vector database for AI applications. Creator of the Weaviate open-source vector database. Open-source vector search engine. Open-source embedding database. Market leader in graph databases. Semantic graph database and knowledge technology. Open-source software foundation, including the Jena triple store. Enterprise knowledge graph platform. Academic ontology editor Protege. Dutch company, linked data platform TriplyDB. Unified data analytics and AI platform. Cloud data warehouse platform. Open-source ML lifecycle management. ML experiment tracking and model management. ML pipeline orchestration on Kubernetes. Experiment tracking for ML teams. Open-source MLOps platform. Data and model version control. Open-source AI community, model hub and tooling. Output validation and filtering for LLMs. LLM vulnerability scanning toolkit. LLM evaluation and red teaming tool. RAG evaluation framework. LLM evaluation platform DeepEval. AI quality and observability platform. Center for Research on Foundation Models. Open-source AI research laboratory. Open-source monitoring and alerting. Observability platform, dashboards and logging. Cloud monitoring and analytics platform. Search and observability platform (ELK Stack).
Open-source observability framework. Distributed tracing system. ML monitoring and data drift detection. AI observability and data monitoring. Post-deployment ML performance monitoring. LLM observability and token tracking. LLM inference cost benchmarking. Prompt version management and evaluation. LLM application development platform. API gateway and service mesh. Unified LLM API proxy and load balancer. Unified API for multiple LLM providers. Open-source identity and access management. Enterprise identity platform. Unified policy engine, policy-as-code. Policy language for authorisation logic. AI Fairness 360 toolkit for bias detection. Open-source bias audit toolkit. SHapley Additive exPlanations for model interpretation. Local Interpretable Model-agnostic Explanations. Data validation and documentation framework. Data quality monitoring platform. Creator of Label Studio, open-source annotation. Creator of spaCy and Prodigy annotation tool. Open-source ML framework and serving. Open-source deep learning framework. Code hosting and CI/CD platform. DevOps platform with integrated CI/CD. Open-source automation server. Containerisation platform. Container orchestration standard. Kubernetes package manager. Infrastructure-as-code (Terraform). Infrastructure-as-code in programming languages. Collaboration platform, creator of Confluence. Open-source feature store for ML. Enterprise feature platform for ML. Open-source data lineage standard. Dutch architecture firm, specialist in enterprise and information architecture. Creator of WikiXL, ArchiMedes and BegrippenXL. Ultimately responsible for AI strategy, governance and value creation. Chair of the AI Board and guardian of AI-BOK implementation. Manages the AI portfolio: selection, prioritisation and monitoring of AI initiatives based on business value and strategic fit. Advises on ethical aspects of AI systems: fairness, bias, transparency and societal impact. Member of the AI Ethics Committee.
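The drift-detection SBBs above monitor shifts between training-time and production data distributions. A common metric for this is the Population Stability Index (PSI); the following is a minimal sketch (the threshold values and binning scheme are illustrative assumptions, not AI-BOK prescriptions):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            i = max(min(int((v - lo) / width), bins - 1), 0)
            counts[i] += 1
        total = len(values)
        # Smooth empty buckets to avoid log(0) below.
        return [(c or 0.5) / total for c in counts]
    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training-time feature values
shifted  = [0.1 * i + 3.0 for i in range(100)]  # drifted production values

assert psi(baseline, baseline) < 0.1   # identical distribution: stable
assert psi(baseline, shifted) > 0.25   # commonly used alert threshold
```

In practice such a check would run on a schedule per monitored feature and raise an alert (triggering the P7 → P4 retraining loop) when the index exceeds the configured threshold.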
Responsible for functional steering of an AI initiative: requirements, prioritisation and acceptance from the business perspective. Designs the AI reference architecture: planes, ABBs, integration patterns and technology choices. Ensures coherence and reusability. Identifies, assesses and mitigates AI-related risks. Manages the AI risk register and advises on risk classification (EU AI Act). Operationalises AI governance: policy frameworks, mandates, quality gates and compliance processes. Member of the AI Review Committee. Curates and manages knowledge sources for AI grounding: terminology frameworks, knowledge graphs and document collections. Develops, trains and optimises AI models: data engineering, model training, fine-tuning, prompt engineering and evaluation. Designs user interaction with AI systems: conversation flows, explainability, feedback mechanisms and human-in-the-loop patterns. Responsible for deployment, monitoring and operational management of AI systems in production. Monitors SLAs and performance. Evaluates and validates AI systems for quality: faithfulness, relevance, bias, robustness and conformity to quality gates. Performs independent audits on AI systems: compliance, technical conformity, data quality and governance adherence. Manages the technical AI infrastructure: platforms, API gateways, model registries, monitoring tools and access control. Prepares training data: labelling, annotation, quality control and validation of datasets for supervised learning. Provides domain knowledge for AI systems: output validation, curation of knowledge sources and assessment of domain-specific quality. Guides organisational change in AI adoption: stakeholder management, training, communication and culture change. Manages autonomous AI agents: defines mandates, escalation rules, autonomy boundaries and monitors agent behaviour in production.
Ensures compliance with AI legislation: EU AI Act, GDPR, sector-specific requirements and reporting obligations. Designs and manages knowledge graphs and ontologies for AI grounding. Designs and optimises RAG pipelines and retrieval strategies. Specialist in bias detection, fairness metrics and debiasing techniques. Designs and implements explainability mechanisms (SHAP, LIME). Security architecture for AI systems, prompt injection prevention. Builds and manages the AI platform (MLOps, serving, monitoring). Selects and qualifies knowledge sources for RAG pipelines. Expert in terminology frameworks, SKOS, linked data for AI. Designs concrete AI solutions within the reference architecture. Data Protection Officer, oversight of GDPR compliance in AI. Identification of AI opportunities, feasibility analysis and alignment with business strategy. Deliverable: validated AI business case. Architecture design, model selection, data preparation and integration patterns. Deliverable: approved architecture blueprint. Collection, cleaning, labelling and validation of training and evaluation data. Deliverable: qualified datasets. Training, fine-tuning, prompt engineering and experimentation. Deliverable: trained and validated model with model card. Quality evaluation on faithfulness, bias, robustness and conformity to quality gates. Deliverable: evaluation report and go/no-go. Deployment to production, integration with business processes and user acceptance. Deliverable: operational AI system. Continuous monitoring of performance, drift, costs and anomalies. Triggers retraining loop on degradation. Deliverable: operational logs and alerts. Evaluation of continued use, knowledge transfer and responsible decommissioning. Deliverable: lessons learned and decommissioning report. AI Architecture as 5th layer (Cognition Plane). AI Architect on Enterprise Architecture Board. KA5/KA12 extend data governance. Terminology frameworks in metadata management. AI systems in CMDB.
AI incidents via existing incident management + AI classification. AI risk register integrated in enterprise risk register. Three Lines of Defence with AI-specific controls. DPIA for AI systems. Art. 22 right to explanation operationalised. Privacy by design. Gate criteria as Definition of Done per PI. MLOps pipeline as CD Pipeline equivalent. P7 → P4: Performance degradation triggers retraining. P5 → P4: A failed evaluation returns to model development. P8 → P1: Fundamental changes restart the full cycle. Assessment per dataset on bias and representativeness. Risk assessment of external models, datasets, services. Immutable log of AI decisions and interactions. Inference endpoint for language models. Provides text, image and code generation via foundation models. Orchestration of autonomous AI agents, task decomposition, tool use and multi-agent coordination. Retrieval-Augmented Generation: retrieves relevant context from knowledge sources and feeds it to the LLM. Central management of prompt templates, system messages, version control and effectiveness metrics. Input/output filtering, content moderation, prompt injection detection and safety controls. Automated quality evaluation of models: faithfulness, relevance, hallucination rate. Identity and authorisation for AI agents, users and systems. Agent identities (NHI). Policy rules for AI authorisation, policy-as-code. Runtime governance checks for agents. Immutable logging of AI decisions, interactions and agent actions. Provenance and traceability. Central registry of models, versions, model cards, experiments and deployments. Access control, rate limiting, routing and load balancing for AI services and LLM endpoints. Storage and similarity search of embeddings for RAG and semantic search. Knowledge graph for structured knowledge: entities, relations, ontologies (RDF/OWL/SKOS). Standardised storage and serving of ML features for training and inference.
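The combination of agent mandates and runtime governance checks described above can be sketched as policy-as-code. The following is a hypothetical illustration (the class and field names are assumptions for the example, not the AI-BOK meta-model): an agent mandate expressed as evaluable data, with an autonomy boundary that triggers escalation rather than denial.

```python
from dataclasses import dataclass

@dataclass
class AgentMandate:
    agent_id: str
    allowed_actions: set[str]
    max_amount: float                 # autonomy boundary for financial actions
    escalate_to: str = "human-reviewer"

def authorise(mandate: AgentMandate, action: str, amount: float = 0.0) -> str:
    """Return 'allow', 'escalate' or 'deny' for a requested agent action."""
    if action not in mandate.allowed_actions:
        return "deny"                 # outside the mandate: never permitted
    if amount > mandate.max_amount:
        return "escalate"             # within scope but beyond the boundary
    return "allow"

mandate = AgentMandate("invoice-agent-01", {"read-invoice", "approve-invoice"}, 500.0)
assert authorise(mandate, "approve-invoice", 120.0) == "allow"
assert authorise(mandate, "approve-invoice", 9000.0) == "escalate"
assert authorise(mandate, "delete-ledger") == "deny"
```

A production policy engine (the model lists policy-as-code SBBs for this) would evaluate such rules at every agent action and log the decision to the immutable audit log.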
Central storage for training data, evaluation data and analytics. Document storage for RAG sources: policy documents, manuals, knowledge articles. Continuous monitoring of AI systems: performance, costs, drift, anomalies and SLA monitoring. Detection and mitigation of bias in models and data. Fairness metrics and debiasing techniques. Explainability mechanisms for AI decisions: feature attribution, counterfactuals, attention maps. Automated data quality validation: completeness, consistency, currency, bias detection. Tooling for data annotation, labelling and curation. Support for RLHF and domain expert validation. Continuous integration/delivery pipelines, containerisation, infrastructure-as-code for AI workloads. Production serving of ML models: batching, caching, GPU optimisation, A/B testing. Basic retrieve-then-generate pattern. Pre-retrieval optimisation, post-retrieval reranking. Modular pipeline with interchangeable components. Knowledge-graph-based retrieval with entity-relation context. Agent-driven retrieval with tool use and iterative search strategies. Vector similarity search and document retrieval for RAG pipelines. SPARQL and graph queries on knowledge graphs and ontologies. Automated data quality validation and reporting. On-demand serving of ML features for training and inference. Agent identity, authorisation and mandate verification. Runtime governance checks: may this agent perform this action? Immutable logging of decisions, interactions and agent actions. Model selection, version management and model card consultation. Controlled access to AI endpoints with rate limiting and routing. Continuous monitoring of performance, drift, costs and anomalies. Text generation, classification and embedding via foundation models. Multi-agent coordination, task decomposition and tool use. RAG: grounded answers based on authoritative knowledge sources. Input/output filtering, hallucination prevention and content moderation.
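The basic retrieve-then-generate pattern named above can be sketched in a few lines. This is a toy illustration under stated assumptions: a hand-built "vector store" with pre-computed embeddings, cosine similarity as the retrieval metric, and a stubbed generation step where a real pipeline would call an LLM.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "vector store": pre-computed embeddings per document chunk.
store = {
    "Leave policy: employees get 25 days.": [0.9, 0.1, 0.0],
    "VPN setup requires MFA enrolment.":    [0.1, 0.9, 0.2],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Rank stored chunks by similarity to the query embedding."""
    ranked = sorted(store, key=lambda doc: cosine(store[doc], query_vec), reverse=True)
    return ranked[:k]

def answer(question: str, query_vec: list[float]) -> str:
    context = " ".join(retrieve(query_vec))
    # A real implementation would call an LLM here; we only show the grounding.
    return f"Context: {context} | Question: {question}"

out = answer("How many leave days do I get?", [0.8, 0.2, 0.1])
assert "25 days" in out
```

The advanced variants listed above differ only in where they intervene: pre-retrieval query rewriting, post-retrieval reranking of `ranked`, or replacing the vector lookup with a knowledge-graph or agent-driven retrieval step.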
Automated quality evaluation: faithfulness, relevance, fairness. De-pseudonymisation only possible by authorised roles with a specific task key. Pseudonymisation layer separating personal data from content data. Task keys link pseudonymised data to tasks, not to persons. European regulation for AI with risk classes. Organisations must leverage AI to remain competitive. Public trust in AI is essential for adoption. Autonomous AI agents require new governance mechanisms. Rapid development of foundation models and agentic AI. Labour market for AI specialists is very tight. Culture determines AI adoption pace. Additional requirements per sector (healthcare, finance, government). AI energy consumption and CO2 emissions are reported. Federated data sharing requires interoperable AI. Create conditions under which AI systems deliver maximum value within acceptable risk boundaries. Maximum organisational value from AI investments through systematic selection and prioritisation. A coherent, repeatable and evolvable blueprint for AI capabilities. Predictable, repeatable and governable progress of AI initiatives. Reliable, traceable and semantically anchored data foundation for AI. Reliable, reproducible and responsible model portfolio. AI interactions that are user-centred, reliable and inclusive. Reliable, observable and cost-efficient AI operations in production. Proactively identify, assess and mitigate AI-related risks. Verifiable, risk-based compliance of AI systems. Ethical considerations structurally embedded in the design and deployment of AI. AI systems produce grounded, traceable and semantically validated output. AI is deployed in a responsible, transparent and safe manner. Full compliance with EU AI Act and relevant standards. Unambiguous definitions of the concepts AI uses. Authoritative, curated information for grounding. Clear rules, roles and mandates. Every AI system has a designated owner responsible for functioning, impact and policy compliance.
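The task-key pseudonymisation mechanism described above (keys bound to tasks, not persons) can be illustrated with a keyed hash. This is a hypothetical sketch, not the AI-BOK specification: the pseudonym is an HMAC of the person identifier under a task-specific key, so the same person is stable within a task but unlinkable across tasks, and re-identification requires the task key held by an authorised role.

```python
import hashlib
import hmac

def pseudonymise(person_id: str, task_key: bytes) -> str:
    """Task-scoped pseudonym: keyed hash of the person ID under the task key."""
    return hmac.new(task_key, person_id.encode(), hashlib.sha256).hexdigest()[:16]

key_task_a = b"task-a-secret"   # illustrative keys; real keys live in a KMS
key_task_b = b"task-b-secret"

p1 = pseudonymise("employee-42", key_task_a)
p2 = pseudonymise("employee-42", key_task_b)

assert p1 == pseudonymise("employee-42", key_task_a)  # stable within a task
assert p1 != p2   # the same person is unlinkable across tasks
```

This keeps personal data out of AI components (a principle stated later in the model) while still allowing authorised de-pseudonymisation via an escrowed mapping per task.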
Governance intensity is proportional to the risk classification of the AI system. The degree of autonomous operation is documented and visible. Governance mechanisms are part of the AI architecture, not an afterthought. Human-in-the-loop or human-on-the-loop is built in for decisions with significant impact. Governance is not static but is periodically adapted. AI initiatives are selected based on contribution to business objectives. The AI portfolio is deliberately balanced between explorative and exploitative. What counts is not the number of AI projects but the realised value. AI strategy without capacity planning is a wish list. Strategy is reviewed annually; the portfolio at least quarterly. Strategy addresses the cognition plane as a new architecture layer. AI systems must interpret concepts unambiguously via terminology frameworks. Authorities and escalation rules for AI agents are explicit in the architecture. Architecture separates data, knowledge, logic and governance as four layers. Every AI output must be traceable to sources, models and reasoning steps. AI components communicate via MCP and A2A. Vendor lock-in is minimised. Architecture is modular and loosely coupled so that components are replaceable. Personal data is never unnecessarily exposed to AI components. Each lifecycle phase has its own quality criteria, roles and deliverables. Production observations structurally flow back to earlier phases. Every experiment, training run and deployment must be repeatable. The lifecycle is driven by business value, not just technical feasibility. End of lifecycle is as important as the beginning: audit trail, data retention, knowledge transfer. For agentic AI, additional lifecycle requirements apply per phase. Data value for AI is determined by unambiguity, not by volume. AI systems anchor answers in traceable, authoritative knowledge sources. Every AI output is traceable to data sources and semantic definitions.
Data quality is an ongoing process of measurement, detection and improvement. Bias detection is built into every phase of the data pipeline. Semantic models use SKOS, RDF, OWL and SHACL. Every production model is registered in a central model registry. No model to production without a model card. Every model version must be reproducible. Models are periodically tested for undesirable biases. The choice of model is an architecture decision with consequences. Models are managed as a portfolio: overlap and risks are visible. Every AI interaction is designed from the perspective of task, context and knowledge level. Users form a realistic picture of what AI can and cannot do. The system proactively communicates about sources and uncertainties. AI interfaces are usable for users with diverse abilities. The user retains control over the decision-making process. Interaction patterns are continuously evaluated and improved. AI systems function with the same reliability as business-critical applications. Performance, costs, quality and deviations are continuously measured. Every deployment is reproducible: models, configurations and prompts are versioned. CI/CD pipelines for maximum automation from model to production. AI inference costs are actively monitored and optimised. New versions are rolled out in phases (canary, A/B, blue-green). Risk management intensity is proportional to the risk level. Risks are identified and addressed before deployment. Risk profiles change; monitoring is continuous, not one-off. AI safety is an organisational culture, not a checklist. EU AI Act risk classification is the minimum framework. Risk assessments are documented and available for audit. Compliance is only real when verifiable and reproducible. Compliance effort scales with risk level. Audits are performed by functionally independent parties. Audit trails are recorded continuously, not reconstructed afterwards.
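The principle "no model to production without a model card" can be made concrete as a registry gate. The sketch below is illustrative only: the field names are assumptions for the example, not the AI-BOK model card schema (which follows the de facto Model Cards standard listed under the standards).

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    limitations: str
    evaluation: dict[str, float]   # e.g. accuracy, bias metrics

registry: dict[str, ModelCard] = {}

def register_for_production(card: "ModelCard | None") -> bool:
    """Gate: an absent or incomplete model card blocks promotion."""
    if card is None or not card.limitations or not card.evaluation:
        return False
    registry[f"{card.name}:{card.version}"] = card
    return True

card = ModelCard("claims-classifier", "1.2.0",
                 intended_use="Routing of insurance claims",
                 limitations="Not validated for non-Dutch documents",
                 evaluation={"accuracy": 0.91, "bias_gap": 0.03})
assert register_for_production(card)
assert not register_for_production(None)
assert "claims-classifier:1.2.0" in registry
```

Keying the registry on name plus version also supports the reproducibility principle: each deployed version resolves to exactly one documented card.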
Organisations actively publish in the Algorithm Register. AI compliance integrates with enterprise governance and information security. Fairness is a testable property with standardised metrics. Stakeholders have the right to an understandable explanation of AI decisions. Human-in-the-loop, on-the-loop or in-command per AI system based on risk and impact. Organisations also assess broader societal consequences. AI systems are designed in line with organisational and societal values. Ethical considerations are structurally integrated into the design process. AI output is based on verified, authoritative sources. Knowledge sources are classified by reliability and authority. Prevention of factually incorrect output is a hard system requirement. Mechanisms for staleness detection are structurally embedded. Every knowledge source and AI output is validated against terminology frameworks. Knowledge is unlocked at the source and shared according to data spaces principles. W3C standard for terminology frameworks and thesauri. Standard for Describing Concepts (NL-SBB). W3C Resource Description Framework for linked data. W3C Web Ontology Language for ontologies. W3C Shapes Constraint Language for validation. W3C query language for RDF data. W3C Data Catalog Vocabulary for dataset metadata. Dutch profile for data catalogues. Linked data serialisation format. Provenance ontology for origin registration. Metadata standard for information sources. AI Management System standard. AI Risk Management Framework. European regulation for AI governance. European privacy regulation. Software product quality characteristics. Adaptive Autonomous Systems Architecture. Enterprise Architecture framework. Architecture modelling language. Information Modelling Metamodel (Geonovum/VNG). Pipeline metadata lineage standard. Agent-tool integration protocol. Agent-agent communication protocol. De facto standard for model documentation. Standard for dataset documentation. Agent metadata specification.
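Validating AI output against a terminology framework, as required above, amounts to resolving surface terms to concepts in a SKOS-style scheme. The following is a deliberately simplified sketch without an RDF library (the scheme content and URIs are invented for the example); a real implementation would query the knowledge graph ABB via SPARQL against `skos:prefLabel` and `skos:altLabel`.

```python
# Toy SKOS-style concept scheme: URI -> preferred and alternative labels.
scheme = {
    "concept/ai-system": {"prefLabel": "AI system", "altLabels": {"AI application"}},
    "concept/model-card": {"prefLabel": "model card", "altLabels": set()},
}

def resolve(term: str) -> "str | None":
    """Map a surface term to its concept URI, or None if unrecognised."""
    t = term.lower()
    for uri, c in scheme.items():
        labels = {c["prefLabel"].lower()} | {a.lower() for a in c["altLabels"]}
        if t in labels:
            return uri
    return None

assert resolve("AI application") == "concept/ai-system"   # altLabel match
assert resolve("model card") == "concept/model-card"      # prefLabel match
assert resolve("foo") is None                             # fails validation
```

Terms that resolve to no concept (or to a deprecated one) would be flagged, which is one concrete way to operationalise "semantically validated output".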
Open standard for AI interaction patterns. AI Lifecycle Processes. AI Concepts & Terminology. AI Risk Management. Data Quality Model. Human-Centred Design. Accessibility standard. Findable, Accessible, Interoperable, Reusable. Standard for decision models. Semantics of Business Vocabulary and Business Rules. Business Process Model and Notation. Open Neural Network Exchange format. REST API specification standard. Cloud financial management framework. Risk Management standard. Assessment of impact and risks per AI system. Ad hoc, no structured AI governance. Basic processes and roles defined. Organisation-wide standards and procedures. Quantitatively managed, continuous monitoring. Continuous improvement, innovation-driven. Policy, decision structures, responsibilities. Lifecycle processes, quality gates, feedback loops. Roles, competencies, training, culture. Tooling, platforms, infrastructure, automation. AI Literacy, innovation mindset, ethical awareness. Assessment of AI maturity across 5 dimensions and 5 levels. Assessment of impact on fundamental rights. Data Protection Impact Assessment specifically for AI systems. Quality score across 5 dimensions per AI system. Completeness, currency, representativeness, labelling, bias, semantic consistency. Accuracy, robustness, reproducibility, generalisability, fairness. Task success, user satisfaction, trust calibration, hallucination rate. Availability, reliability, scalability, drift detection speed, costs. Compliance ratio, auditability, transparency, ethical review coverage. Governance rules must be machine-readable and runtime-evaluable. All AI decisions and interactions must be immutably logged. Agents must be authorised at runtime based on explicit mandates. AI output must be filtered for safety, correctness and ethics. AI systems must be continuously monitored for performance, drift and costs. AI systems must interpret concepts via authoritative terminology frameworks.
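The requirement that all AI decisions and interactions be immutably logged is commonly met with an append-only, tamper-evident log. One way to realise this (a sketch of the general hash-chaining technique, not a prescribed AI-BOK mechanism) is to let each entry carry the hash of its predecessor, so any later edit breaks the chain:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry = {"event": event, "prev": prev,
             "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any tampered entry invalidates the log."""
    prev = GENESIS
    for e in log:
        body = json.dumps(e["event"], sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log: list[dict] = []
append(log, {"agent": "invoice-agent-01", "action": "approve", "amount": 120})
append(log, {"agent": "invoice-agent-01", "action": "escalate", "amount": 9000})
assert verify(log)
log[0]["event"]["amount"] = 1  # tampering with history...
assert not verify(log)          # ...is detected on re-verification
```

In production the chain head would be periodically anchored to external storage so that truncation of the whole log is also detectable.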
Every AI output must be traceable to sources, models and reasoning steps. All production models must be registered with model cards and versions. Models must be periodically tested for undesirable biases. Stakeholders have the right to an understandable explanation of AI decisions. Personal data is never unnecessarily exposed to AI components. AI output must be based on verified, traceable knowledge sources. AI components communicate via standardised protocols (MCP, A2A). Model-to-production must be maximally automated via CI/CD. AI interactions are designed from the perspective of task, context and knowledge level. Data quality is an ongoing process of measurement and improvement. Strategic decision-making on AI deployment and risk acceptance. Responsible for IT/data strategy and AI integration. Owner of the business process supported by AI. Employee who works with AI systems daily. Person affected by AI decisions. External party overseeing AI Compliance (e.g. DPA, ACM). Multidisciplinary team of AI specialists. A technology that, for explicit or implicit purposes, infers how to generate outputs based on received input (such as data). Outputs include predictions, content, recommendations and/or decisions. The technology can learn, reason and perform tasks in a way that normally requires human intelligence. A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. A set of rules and instructions that a computer automatically follows when making calculations to solve a problem or answer a question. A program trained to recognise patterns in data and make predictions. An AI model is the result of training an algorithm on data. 
An algorithm is a set of instructions, and the model is the specific result of following that set of instructions based on certain data. The way computers learn new things without explicit programming, by learning from labelled data to make predictions. Machine learning process where a model learns from labelled examples (input-output pairs), comparable to a teacher-student scenario. Machine learning process in which AI itself searches for hidden patterns in unlabelled data without explicit guidance. The algorithm learns through punishment and reward. The goal is to score as high as possible in as little time as possible. Step in machine learning to verify model performance using a separate dataset (validation set) not used for training. An advanced form of AI that recognises complex patterns in data via layered neural networks, useful for image recognition, speech processing and language understanding. Mainly used for image recognition. Suitable for sequence data such as text and speech. Backbone of modern NLP models such as GPT and BERT. A form of artificial intelligence that can produce (generate) content (text, images and/or varied content such as music) based on the data it is trained on. A specialised type of generative AI model trained on large volumes of text to understand existing content and generate textual content. A type of neural network, pre-trained on large text data, that generates clear and relevant text based on prompts. A technique within artificial intelligence that combines the power of generative AI with information retrieval. This means an AI model not only generates text based on pre-trained data, but can also retrieve relevant information from external knowledge sources to provide more accurate and current answers. The process of carefully choosing and adapting the input (prompt) for a machine learning model to get the best possible output. The basic unit of text (a word or part of a word) processed by LLMs.
The degree of randomness in the output of an LLM. Output from generative AI that is factually incorrect or does not match reality, despite appearing semantically correct. In the context of AI, bias refers to the assumptions an AI system makes to simplify the learning process and task execution. AI researchers try to minimise bias, as it can lead to poor results or unexpected outcomes. The way computers learn new things without explicit programming, by learning from labelled data to make predictions. Identifying names, locations and concepts in text. Interpreting whether a text is positive, negative or neutral. An AI agent is a software entity that autonomously executes tasks and makes decisions based on observations, rules and machine learning algorithms. Agentic AI gives the AI system the ability to act autonomously, for example to create files. These can independently execute complex tasks, such as self-driving cars that analyse traffic conditions and act without direct human control. These react directly to input without long-term memory. For example, a chess computer that only analyses the current move. These are multiple AI agents collaborating, for example robots in a factory jointly optimising a production process. The European AI Act is one of the world's first comprehensive laws specifically for artificial intelligence (AI). The AI Act establishes frameworks and requirements for both governments and businesses regarding the development and use of AI systems. Skills, knowledge and understanding enabling providers, deployers and affected persons, taking into account their respective rights and obligations under the AI Act, to make informed use of AI systems and to become more aware of the opportunities and risks of AI and the potential harm it can cause. An AI Detector is a tool designed to detect when a piece of text (or sometimes an image or video) has been created by AI tools (such as ChatGPT and DALL-E). 
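The temperature concept defined above can be shown numerically: temperature rescales the token logits before the softmax, so a low temperature sharpens the output distribution (near-deterministic sampling) and a high temperature flattens it (more random sampling). The logit values below are toy numbers for illustration.

```python
import math

def softmax(logits: list[float], temperature: float) -> list[float]:
    """Softmax over temperature-scaled logits (max-subtracted for stability)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # toy scores for three candidate tokens
cold = softmax(logits, temperature=0.2)
hot  = softmax(logits, temperature=2.0)

assert cold[0] > hot[0]                               # low T: top token dominates
assert max(hot) - min(hot) < max(cold) - min(cold)    # high T flattens the distribution
```

This is also why temperature 0 is conventionally treated as greedy decoding: in the limit the probability mass collapses entirely onto the highest-scoring token.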
These detectors are not 100% reliable, but they can provide an indication of the likelihood that a text was generated by AI. AI-generated or manipulated image, audio or video material that resembles existing persons, objects, places, entities or events, and would be wrongly perceived by a person as authentic or truthful. An Algorithm Impact Assessment is a tool for making trade-off decisions when deploying algorithms and artificial intelligence. A register with descriptions of algorithmic applications that directly or indirectly have a societal and/or economic effect on citizens or society as a whole. The architecture layer where AI systems reason, interpret and make decisions. Distinguished from the data and control planes by non-deterministic, context-dependent behaviour. Anchoring AI output in verified, authoritative sources so that answers are factually correct and traceable. Unambiguous, formally defined concepts that enable AI systems to interpret concepts correctly. The entirety of policy frameworks, decision structures, responsibilities and mandates with which an organisation governs AI systems. Unapproved use of AI tools and services by employees outside the view of IT and governance. Monitoring shifts in data or model behaviour that degrade the performance of an AI system in production. Programmable boundaries on the input and output of AI models that prevent undesirable behaviour. Formal specification of the autonomy boundaries, escalation rules and context restrictions within which an AI agent may operate. Formal decision point in the AI lifecycle where an AI initiative is assessed against quality criteria before proceeding to the next phase. Standardised documentation format describing what an AI model does, for whom, with what limitations and what performance. Design pattern where a human is actively involved in the decision-making process of an AI system and assesses every output before it is executed. 
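The temperature concept defined above is commonly realised by dividing the model's raw scores (logits) by the temperature before applying softmax: low temperature concentrates probability on the top token, high temperature flattens the distribution. A minimal sketch; the example logits are made up:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Turn raw scores into a sampling distribution; lower T makes it sharper."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                         # hypothetical next-token scores
cold = softmax_with_temperature(logits, 0.2)     # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 2.0)      # more random: closer to uniform
```

At temperature 0.2 the highest-scoring token takes nearly all the probability mass; at 2.0 the three options become much closer, which is exactly the "degree of randomness" the thesaurus describes.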
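The guardrails defined above, programmable boundaries on the input and output of AI models, can be as simple as rule-based checks wrapped around the model call. The patterns and function names here are illustrative assumptions; real deployments typically use a dedicated policy engine:

```python
import re

# Illustrative guardrail rules; a real deployment would load these from policy.
BLOCKED_INPUT = [r"(?i)ignore previous instructions"]   # prompt-injection pattern
PII_OUTPUT = [r"\b\d{3}-\d{2}-\d{4}\b"]                 # SSN-like number pattern

def check_input(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    return not any(re.search(p, prompt) for p in BLOCKED_INPUT)

def filter_output(text: str) -> str:
    """Redact disallowed patterns before the answer reaches the user."""
    for p in PII_OUTPUT:
        text = re.sub(p, "[REDACTED]", text)
    return text
```

check_input runs before the prompt reaches the model; filter_output runs before the answer reaches the user, so undesirable behaviour is blocked at both boundaries.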
A graph structure representing entities and their interrelationships as a knowledge base for AI systems. A numerical vector representation of text, image or other data that captures semantic meaning in a continuous vector space. Further training a foundation model on domain-specific data to improve performance for a specific task. The practice of designing, developing and deploying AI systems in a manner that is ethical, transparent, fair and responsible. Measurable business value from AI systems within acceptable risk boundaries. AI supports individual tasks, no autonomy. User initiates and controls fully. AI is integrated into business processes, automated steps with human supervision. AI conducts conversations, answers questions, assists with decisions. Limited autonomy. Multiple AI agents collaborate, coordinated by an orchestrator. High governance intensity. AI agents act independently within mandates. Very high governance requirements. Agents from different organisations collaborate. Maximum governance complexity. Inventory existing AI initiatives, assemble AI Board, classify risk, create awareness. Governance framework operational, lifecycle established, first models in registry, basic monitoring. Organisation-wide rollout, cognition plane operational, multi-agent pilots, full compliance. Continuous improvement, federated collaboration, AI-driven innovation, maturity level 4-5. Difference between current situation and first quick wins. Difference between ad-hoc and structured. Difference between basic and organisation-wide rollout. Semantic Precision + Reliable Knowledge Sources + Explicit Governance: all three are needed. Reasoning, interpretation, decision-making by AI. Identity, authorisation, logging, audit. Data, knowledge, vector stores, knowledge graphs. AI Engineers, AI Operations Engineers — daily operations and first-line quality control. AI Risk Manager, AI Compliance Officer — independent control and risk monitoring. 
AI Auditor — independent audit of governance and systems. Vendors in the Cognition Plane. Vendors in the Control Plane. Vendors in the Data Plane. Source: AI-BOK v1.0, Module 1, section 1-12, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. The 12 knowledge areas as capabilities, grouped in three layers: Strategy & Direction, Realisation & Operations, Assurance & Accountability. Per KA the associated business function, primary role and goal. Source: AI-BOK v1.0, Module 3, section 3.6, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. Layered view of all architecture layers: motivation (drivers, goals, principles), strategy (capabilities), business (functions, roles, processes), application (ABBs, services) and technology (SBBs). Source: AI-BOK v1.0, Module 1, section 1-12, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Overview of the AI-BOK: from drivers and goals via 12 knowledge areas and 3 architecture planes to value creation and governance. Source: AI-BOK v1.0, Module 2, section 2.2, p.88. Author: Jan Willem van Veen. Modified: 2026-04-02. The 19 AI-BOK roles with their reporting lines and mapping to knowledge areas. Source: AI-BOK v1.0, Module 1, section 4, p.26. Author: Jan Willem van Veen. Modified: 2026-04-02. The 8 lifecycle phases of an AI initiative in TOGAF ADM style with feedback loops (retraining, evaluation, evolution). Source: AI-BOK v1.0, Module 1, section 4, p.26. Author: Jan Willem van Veen. Modified: 2026-04-02. Detailed process model with quality gates as formal decision points between lifecycle phases. 
Source: AI-BOK v1.0, Module 2, section 2.2, p.88. Author: Jan Willem van Veen. Modified: 2026-04-02. RACI matrix documenting the responsibilities of roles per knowledge area. Source: AI-BOK v1.0, Module 1, section 1-12, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. All 12 business functions with their 52 business objects: contracts, representations, assessments and registers. Source: AI-BOK v1.0, Module 3, section 3.6.5a, p.163. Author: Jan Willem van Veen. Modified: 2026-04-02. Business Functions mapped to the application building blocks (ABBs) that support them. Source: AI-BOK v1.0, Module 3, section 3.5, p.151. Author: Jan Willem van Veen. Modified: 2026-04-02. AI value stream from Idea & Opportunity to measurable Value, showing per phase the supporting capability, process, function and ABB. Source: AI-BOK v1.0, Module 3, section 3.3.4, p.140. Author: Jan Willem van Veen. Modified: 2026-04-02. Three Lines of Defence model: operational management, risk & compliance, and internal audit for AI. Source: AI-BOK v1.0, Module 2, section 2.2, p.88. Author: Jan Willem van Veen. Modified: 2026-04-02. Elaboration of the 19 main roles into specialised sub-roles per knowledge area. Source: AI-BOK v1.0, Module 1, section 1-12, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Business objects grouped per knowledge area: per business function the associated documents, registers, assessments and protocols. 
Source: AI-BOK v1.0, Module 3, section 3.6.1, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. The three architecture planes (Cognition, Control, Data) with their application building blocks and the services they provide to each other. Source: AI-BOK v1.0, Module 3, section 3.6.1, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. Cognition Plane SBBs: LLM providers, agent orchestrators, RAG frameworks, guardrails, evaluation and explainability tools. Source: AI-BOK v1.0, Module 3, section 3.6.1, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. Control Plane SBBs: IAM, policy engines, audit trails, model registries, API gateways, monitoring and CI/CD. Source: AI-BOK v1.0, Module 3, section 3.6.1, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. Data Plane SBBs: vector databases, knowledge graphs, data platforms, feature stores, document storage and data quality. Source: AI-BOK v1.0, Module 3, section 3.2, p.190. Author: Jan Willem van Veen. Modified: 2026-04-02. Meta-model of the AI-BOK: the core entities (AI System, Model, Agent, Dataset, Knowledge Source) and their interrelationships. Source: AI-BOK v1.0, Module 3, section 3.6.5, p.162. Author: Jan Willem van Veen. Modified: 2026-04-02. Data flows and cooperation relationships between application building blocks across the three planes. Source: AI-BOK v1.0, Module 1, section 12.6, p.73. Author: Jan Willem van Veen. Modified: 2026-04-02. Five RAG architecture variants: Naive, Advanced, Modular, GraphRAG and Agentic RAG. 
Source: AI-BOK v1.0, Module 1, section 5.10, p.33. Author: Jan Willem van Veen. Modified: 2026-04-02. The Key Vault pattern for privacy: pseudonymisation separating personal data from AI processing. Source: AI-BOK v1.0, Module 3, section 3.6.1, p.159. Author: Jan Willem van Veen. Modified: 2026-04-02. Complete vendor overview: all vendors grouped by category (LLM, MLOps, monitoring, security, etc.). Source: AI-BOK v1.0, Module 1, section 1.2, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Governance structure with decision-making lines between AI Board, committees and the AI Centre of Excellence. Source: AI-BOK v1.0, Module 2, section 2.3, p.108. Author: Jan Willem van Veen. Modified: 2026-04-02. Stakeholder analysis: the governance bodies (AI Board, Review Committee, Ethics Committee, Risk Committee) and their mandates. Source: AI-BOK v1.0, Module 1, sections 1.2, 3.2, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of the principles for KA1 (Governance) and KA3 (Architecture) with their rationale. Source: AI-BOK v1.0, Module 1, section 1-12, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Overview of all 72+ principles from the AI-BOK, grouped per knowledge area. Source: AI-BOK v1.0, Module 3, section 3.6.5a, p.163. Author: Jan Willem van Veen. Modified: 2026-04-02. Traceability of principles via architecture requirements to the ABBs that realise them. Source: AI-BOK v1.0, Module 3, section 3.3, p.138. Author: Jan Willem van Veen. Modified: 2026-04-02. Standards and constraints (ISO, IEEE, NIST, EU AI Act) with their mappings to application building blocks. 
Source: AI Thesaurus (AI-BOK appendix). Author: Jan Willem van Veen. Modified: 2026-04-02. Core concepts from the AI thesaurus placed in context with associated ABBs and capabilities. Source: AI-BOK v1.0, Module 3, section 3.3, p.138. Author: Jan Willem van Veen. Modified: 2026-04-02. Additional standards and environmental factors that shape the AI-BOK context. Source: AI-BOK v1.0, Module 3, section 3.3, p.138. Author: Jan Willem van Veen. Modified: 2026-04-02. Governance integration patterns: how AI-BOK integrates with TOGAF, DAMA DMBOK, COBIT and GDPR. Source: AI Thesaurus (AI-BOK appendix). Author: Jan Willem van Veen. Modified: 2026-04-02. Complete AI thesaurus: all 52 concepts with their specialisation and association relationships. Source: AI-BOK v1.0, Module 3, section 3.6.6, p.163. Author: Jan Willem van Veen. Modified: 2026-04-02. The Agent Maturity Spectrum: 6 levels from reactive agents to fully autonomous AI systems. Source: AI-BOK v1.0, Module 3, section 3.4, p.145. Author: Jan Willem van Veen. Modified: 2026-04-02. The 5x5 maturity model: 5 dimensions at 5 levels, linked to the implementation roadmap. Source: AI-BOK v1.0, Module 3, section 3.7, p.164. Author: Jan Willem van Veen. Modified: 2026-04-02. AI Quality Framework with 5 quality dimensions for assessing AI systems. Source: AI-BOK v1.0, Module 3, section 3.5, p.151. Author: Jan Willem van Veen. Modified: 2026-04-02. Implementation roadmap in 4 phases (Foundation, Scale, Optimize, Transform) with identified gaps. 
Source: AI-BOK v1.0, Module 1, section 1, p.7. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 1 (AI Governance): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 2, p.14. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 2 (AI Strategy & Portfolio Management): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 3, p.20. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 3 (AI Architecture): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 4, p.26. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 4 (AI Lifecycle Management): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 5, p.28. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 5 (Data & Semantics for AI): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 6, p.26. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 6 (Model Management): capability, principles, business function, processes, roles, objects and supporting ABBs. 
Source: AI-BOK v1.0, Module 1, section 7, p.30. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 7 (AI Interaction & UX): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 8, p.26. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 8 (AI Operations): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 9, p.30. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 9 (AI Risk Management): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 10, p.28. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 10 (AI Compliance & Audit): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 11, p.34. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 11 (AI Ethics): capability, principles, business function, processes, roles, objects and supporting ABBs. Source: AI-BOK v1.0, Module 1, section 12, p.34. Author: Jan Willem van Veen. Modified: 2026-04-02. Detail view of knowledge area 12 (AI Knowledge & Context Management): capability, principles, business function, processes, roles, objects and supporting ABBs.