ISO/IEC 42001:2023 is the first international standard dedicated to AI management systems. Published in December 2023, it gives organizations a certifiable framework for governing the development, deployment, and use of AI systems. If your organization builds AI products, integrates AI into its services, or deploys AI internally, this standard applies to you.
The timing matters. Regulatory pressure around AI governance is accelerating globally. The EU AI Act requires risk management systems for high-risk AI. The NIST AI Risk Management Framework provides voluntary guidance in the United States. Industry-specific regulations from HIPAA to PCI DSS are being reinterpreted to cover AI systems. ISO 42001 gives you a single, auditable management system that addresses AI governance across all of these contexts.
This guide covers what the standard actually requires, how it relates to ISO 27001 (which many organizations already hold), what the certification process involves, and how to prepare without wasting months on the wrong priorities.
Key Takeaways
- ISO 42001 is the first certifiable international standard for AI management systems, published December 2023
- It follows the same Annex SL structure as ISO 27001, making integration straightforward for organizations that already hold 27001
- The standard covers the full AI lifecycle: from risk assessment and data management through deployment, monitoring, and decommissioning
- Annex A contains 39 AI-specific controls organized across governance, risk, data, and system lifecycle domains
- Certification typically takes 6 to 12 months, with organizations that have ISO 27001 able to move faster
- The biggest gaps most organizations face are in AI risk assessment methodology, data governance documentation, and AI-specific incident response
What ISO 42001 Actually Is
ISO/IEC 42001:2023 establishes requirements for an AI Management System (AIMS). The "management system" part is important. This is not a checklist of technical controls. It is a framework for how your organization governs AI as a whole: the policies, processes, roles, risk assessments, and continuous improvement mechanisms that ensure your AI systems are developed and operated responsibly.
The standard was developed by ISO/IEC JTC 1/SC 42, the subcommittee responsible for AI standards. It draws on existing management system standards (particularly ISO 27001 for information security and ISO 9001 for quality management) while adding AI-specific requirements around transparency, bias, data governance, and AI system lifecycle management.
Who Needs ISO 42001
The standard applies to organizations of any size that:
- Develop AI systems for internal use or commercial distribution
- Provide AI systems as products or services to other organizations
- Use AI systems as part of their operations, products, or decision-making processes
The Annex SL Structure
If you have worked with ISO 27001, ISO 9001, or any modern ISO management system standard, the structure of ISO 42001 will be immediately familiar. All current ISO management system standards use the Annex SL high-level structure, which defines a common set of clauses:
- Clause 4: Context of the organization
- Clause 5: Leadership
- Clause 6: Planning
- Clause 7: Support
- Clause 8: Operation
- Clause 9: Performance evaluation
- Clause 10: Improvement
How ISO 42001 Relates to ISO 27001
Most organizations exploring ISO 42001 already hold ISO 27001 certification for information security. Understanding the relationship between these two standards is critical for planning your certification project efficiently.
What They Share
Both standards follow the Annex SL structure, which means the core management system requirements are nearly identical. The clauses covering organizational context, leadership commitment, planning, support resources, performance evaluation, and improvement use the same language and expect the same types of evidence.
In practice, this means your existing ISO 27001 infrastructure carries over directly:
- Document control procedures apply to both standards
- Internal audit programs can be extended to cover AIMS requirements
- Management review meetings can address both information security and AI governance
- Corrective action processes work identically
- Competence and awareness programs need AI-specific additions but use the same training framework
Where They Diverge
The differences are in the domain-specific content. ISO 27001 focuses on information security risks and uses Annex A controls organized around information assets, access controls, cryptography, physical security, and so on. ISO 42001 focuses on AI-specific risks and uses its own Annex A controls organized around AI governance, risk management, data, and AI system lifecycle.
Key differences include:
- Risk assessment scope. ISO 27001 assesses risks to information confidentiality, integrity, and availability. ISO 42001 assesses AI-specific risks including bias, transparency, safety, accountability, and societal impact. The risk assessment methodologies differ because the risk categories differ.
- Annex A controls. ISO 27001 has 93 controls across 4 themes. ISO 42001 has 39 controls across AI-specific domains. There is minimal overlap because they address fundamentally different risk areas.
- Stakeholder considerations. ISO 42001 requires you to consider AI-specific interested parties, including individuals affected by AI decisions, regulators with AI-specific mandates, and society at large. ISO 27001 stakeholder analysis is narrower.
- Operational controls. ISO 42001 includes requirements for AI impact assessments, responsible AI practices, and AI system lifecycle management that have no equivalent in ISO 27001.
Running an Integrated Management System
The most efficient approach is to run an Integrated Management System (IMS) that covers both standards. This means a single set of policies, a unified internal audit program, combined management reviews, and shared support processes. The domain-specific controls from each standard's Annex A are implemented separately but managed within the same system.
Many certification bodies offer combined audits, where a single audit team assesses both ISO 27001 and ISO 42001 in the same engagement. This reduces audit fatigue, lowers costs, and ensures consistency between your information security and AI governance programs.
Key Requirements Breakdown
Clause 4: Context of the Organization
This clause requires you to define the scope of your AIMS and understand the internal and external factors that affect it. For AI, this means:
Understanding your AI landscape. You need a complete inventory of AI systems your organization develops, provides, or uses. For each system, document its purpose, the type of AI technology involved, the data it processes, the decisions it influences, and the stakeholders it affects.
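The inventory described above can be sketched as a simple structured register. This is a minimal illustration, not a prescribed format; the field names and the example system are assumptions, and most organizations would keep this in a GRC tool or asset database rather than code.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AIMS inventory (Clause 4). Field names are illustrative."""
    name: str
    purpose: str
    technology: str            # e.g. "gradient-boosted classifier", "LLM via API"
    data_processed: list[str]
    decisions_influenced: str
    stakeholders: list[str]

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-screener-v2",                      # hypothetical system
        purpose="Rank inbound job applications",
        technology="fine-tuned transformer classifier",
        data_processed=["resumes", "job descriptions"],
        decisions_influenced="which candidates advance to human review",
        stakeholders=["applicants", "recruiters", "hiring managers"],
    ),
]

# Completeness check: every record documents every required attribute,
# which is the kind of evidence an auditor will sample.
assert all(r.purpose and r.data_processed and r.stakeholders for r in inventory)
```

The point of the structure is that "purpose, data, decisions, stakeholders" are mandatory fields, not free-text afterthoughts, so gaps are detectable before an audit finds them.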
Identifying interested parties. AI systems have a broader set of stakeholders than traditional IT systems. Beyond customers, employees, and regulators, you need to consider individuals whose data is used for training, people subject to AI-driven decisions, and communities affected by AI system outputs. Document their needs and expectations regarding your AI systems.
Defining the AIMS scope. Your scope statement must specify which AI systems, processes, and organizational units are covered. Be precise. A vague scope like "all AI activities" creates problems during the audit. Specify the AI systems by name, the business processes they support, and the organizational boundaries.
Clause 5: Leadership and Commitment
Top management must demonstrate leadership in AI governance. This is not a formality. Auditors will interview senior leaders and verify that they understand the AIMS, have allocated adequate resources, and are actively involved in governance decisions.
Specific requirements include:
- AI policy. A documented policy that establishes the organization's commitment to responsible AI, sets the direction for the AIMS, and is communicated throughout the organization.
- Roles and responsibilities. Clear assignment of AI governance responsibilities, including who is accountable for AI risk management, who oversees AI system compliance, and who handles AI-related incidents.
- Resource allocation. Demonstrated commitment of resources, both financial and human, to implement and maintain the AIMS.
Clause 6: Planning
Planning under ISO 42001 requires AI-specific risk assessment that goes beyond traditional information security risks.
AI risk assessment. You must implement a risk assessment methodology that addresses AI-specific risk categories: bias and fairness, transparency and explainability, safety and reliability, privacy, accountability, and societal impact. The methodology must identify risks, analyze their likelihood and consequences, evaluate them against acceptance criteria, and produce a risk treatment plan.
This is where many organizations struggle. Traditional risk assessment frameworks are designed around confidentiality, integrity, and availability. AI risk assessment requires evaluating dimensions like "how biased is this model's output across demographic groups" or "what happens if this model fails silently and produces plausible but incorrect results." These are fundamentally different risk categories that require different assessment techniques.
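One common approach is a likelihood-by-severity register extended with the AI-specific categories above. The sketch below assumes 1-5 scales and an acceptance threshold of 6; both are illustrative policy choices, not values prescribed by the standard.

```python
# AI-specific risk scoring sketch. Categories follow Clause 6; the
# scales and acceptance threshold are example policy choices.
CATEGORIES = {"bias", "transparency", "safety", "privacy",
              "accountability", "societal_impact"}
ACCEPTANCE_THRESHOLD = 6  # scores above this require a treatment plan

def score_risk(category: str, likelihood: int, severity: int) -> dict:
    """Score one risk on 1-5 scales and flag whether treatment is required."""
    assert category in CATEGORIES
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    score = likelihood * severity
    return {
        "category": category,
        "score": score,
        "needs_treatment": score > ACCEPTANCE_THRESHOLD,
    }

# A silent-failure bias risk: moderately likely, serious consequences.
risk = score_risk("bias", likelihood=3, severity=4)
# score 12 exceeds the threshold, so this risk enters the treatment plan
```

The mechanics are deliberately simple; what makes the assessment AI-specific is that "bias" or "societal_impact" are first-class categories with their own evaluation criteria, rather than being squeezed into confidentiality, integrity, and availability.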
AI impact assessment. The standard requires you to assess the potential impacts of your AI systems on individuals and society. This includes evaluating effects on human rights, environmental impact, and potential for discrimination. The depth of the impact assessment should be proportional to the risk level of the AI system.
Objectives and planning. You must set measurable AI governance objectives (for example, "reduce model bias incidents by 50% year over year" or "achieve 100% documentation coverage for all production AI systems") and plan how to achieve them.
Clause 7: Support
Support covers the resources, competence, awareness, communication, and documented information needed to operate the AIMS.
Competence. People involved in AI governance must be competent. This means documented evidence of skills in AI risk assessment, AI ethics, data governance, model evaluation, and relevant regulatory requirements. Training records, certifications, and professional development logs are typical evidence.
Awareness. Everyone in the organization who interacts with AI systems, not just the AI team, must be aware of the AI policy, their role in the AIMS, and the implications of non-conformity. For a software company, this might include developers, product managers, customer support staff, and sales teams.
Documented information. ISO 42001 requires extensive documentation, including policies, procedures, risk assessments, impact assessments, model documentation, training records, audit reports, and management review minutes. If you already maintain ISO 27001 documentation, you know the drill. Apply the same document control procedures to AI-specific documents.
Clause 8: Operation
This is the largest and most technically detailed clause. It covers the actual implementation of AI risk treatment plans and the controls for AI system lifecycle management.
AI system lifecycle. The standard requires documented processes for each phase of the AI system lifecycle:
- Design and development. Requirements specification, data selection and preparation, model selection and training, testing and validation, bias evaluation, and security assessment.
- Verification and validation. Testing that the AI system meets its specified requirements, including accuracy thresholds, fairness metrics, and safety criteria.
- Deployment. Controlled release processes, including staged rollouts, monitoring during initial deployment, and rollback procedures.
- Operation and monitoring. Ongoing monitoring for model drift, performance degradation, bias emergence, and security incidents.
- Retirement and decommissioning. Processes for safely decommissioning AI systems, including data retention and disposal.
Data management. Alongside the lifecycle phases, Clause 8 requires documented processes for the data your AI systems depend on:
- Data quality. Processes to ensure training and operational data meets quality standards, including accuracy, completeness, representativeness, and timeliness.
- Data provenance. Documentation of data sources, collection methods, preprocessing steps, and chain of custody.
- Data bias assessment. Evaluation of training data for biases that could affect AI system outputs, with documented mitigation strategies.
- Data protection. Alignment with applicable data protection regulations, including consent management, data minimization, and purpose limitation.
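The "operation and monitoring" phase above is one place a concrete check is useful. A common drift signal is the Population Stability Index (PSI) between the score distribution at deployment and the one observed in production. The thresholds below are industry conventions, not part of ISO 42001 itself, and the example distributions are invented.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each should sum to ~1).

    Common heuristic: < 0.1 stable, 0.1-0.25 investigate, > 0.25
    significant drift. These cutoffs are conventions, not requirements
    of the standard.
    """
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at deployment
today = [0.45, 0.30, 0.15, 0.10]      # distribution observed in production

psi = population_stability_index(baseline, today)
if psi > 0.25:
    print("significant drift: trigger the documented review procedure")
```

What matters for the audit is less the metric itself than the documented loop around it: a defined threshold, a defined response when it is crossed, and records showing the loop actually ran.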
Clause 9: Performance Evaluation
You must monitor, measure, analyze, and evaluate the performance of your AIMS and the AI systems within its scope.
Monitoring and measurement. Define what you will monitor (model accuracy, fairness metrics, incident rates, compliance status), how you will monitor it, when measurements will be taken, and who is responsible for analysis.
Internal audit. Conduct regular internal audits of the AIMS. The audit program must cover all requirements of the standard, including AI-specific controls from Annex A. Auditors must be competent in AI governance, not just management system auditing.
Management review. Top management must review the AIMS at planned intervals. Review inputs must include audit results, AI system performance data, risk assessment updates, stakeholder feedback, and opportunities for improvement. Review outputs must include decisions on improvement actions and resource needs.
Clause 10: Improvement
The continuous improvement clause requires you to address nonconformities through corrective actions and to identify opportunities for improving the AIMS.
When an AI system produces biased outputs, experiences a security incident, or fails to meet performance thresholds, the corrective action process must:
- Identify the root cause. Not just the immediate symptom, but the underlying management system failure that allowed it to happen.
- Implement corrections. Fix the immediate issue (retrain the model, patch the vulnerability, adjust the threshold).
- Implement corrective actions. Change the process, control, or procedure that allowed the nonconformity to occur.
- Verify effectiveness. Confirm that the corrective action prevents recurrence.
Annex A: AI-Specific Controls
Annex A of ISO 42001 contains 39 controls organized across several domains. These are the AI-specific controls that distinguish ISO 42001 from other management system standards. Your Statement of Applicability (SoA) must address each control, either implementing it or justifying its exclusion.
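The SoA requirement above lends itself to a simple completeness check: every control is either implemented (with evidence) or excluded (with justification). The control IDs and titles below are placeholders, not the actual Annex A numbering.

```python
# Minimal Statement of Applicability sketch. "A.X.n" IDs are
# placeholders; consult the standard for the real Annex A numbering.
soa = {
    "A.X.1": {"title": "AI policy", "applicable": True,
              "implementation": "AI Policy v1.2, approved by the board"},
    "A.X.2": {"title": "AI system inventory", "applicable": True,
              "implementation": "Register in GRC tool, reviewed quarterly"},
    "A.X.3": {"title": "Third-party AI supplier controls", "applicable": False,
              "justification": "No third-party AI components in scope"},
}

def soa_gaps(soa: dict) -> list[str]:
    """Controls that are neither implemented nor justified for exclusion."""
    return [
        cid for cid, c in soa.items()
        if (c["applicable"] and not c.get("implementation"))
        or (not c["applicable"] and not c.get("justification"))
    ]

assert soa_gaps(soa) == []  # audit-ready: every control is addressed
```

An SoA entry that is silent on a control is the finding an auditor looks for first, which is why the gap check treats "no statement" the same as "not addressed".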
AI Policy and Governance Controls
These controls establish the organizational framework for AI governance:
- AI policy. A formal, documented AI policy approved by top management.
- Roles and responsibilities. Clear assignment of accountability for AI governance at every level.
- Internal AI governance structure. Committees, boards, or designated functions responsible for overseeing AI.
- AI system inventory. A maintained register of all AI systems, their risk classifications, and their governance status.
AI Risk Management Controls
Controls for identifying, assessing, and treating AI-specific risks:
- AI risk assessment process. Methodology for evaluating AI risks across bias, safety, transparency, accountability, and societal impact.
- AI impact assessment. Structured evaluation of potential consequences of AI system deployment.
- Risk treatment. Selection and implementation of controls to mitigate identified AI risks.
- Ongoing risk monitoring. Continuous assessment of AI risks as systems evolve and external conditions change.
Data Controls
Data governance is treated as a first-class concern:
- Data quality management. Processes and metrics for ensuring data meets quality requirements.
- Data provenance and lineage. Documentation of where data comes from and how it is transformed.
- Data bias management. Systematic identification and mitigation of biases in training and operational data.
- Data privacy and protection. Controls aligned with applicable privacy regulations and ethical data use principles.
- Data retention and disposal. Policies for how long AI-related data is retained and how it is securely destroyed.
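The data bias management control above usually starts with a representativeness check: comparing group proportions in the training data against a reference population. The 20% relative-deviation tolerance below is an example policy choice, and the groups and counts are invented.

```python
# Illustrative training-data representativeness check. The tolerance is
# an example policy choice, not a requirement of the standard.
def underrepresented_groups(
    training_counts: dict[str, int],
    reference_share: dict[str, float],
    tolerance: float = 0.2,
) -> list[str]:
    """Flag groups whose training share falls too far below reference."""
    total = sum(training_counts.values())
    flagged = []
    for group, expected in reference_share.items():
        observed = training_counts.get(group, 0) / total
        if observed < expected * (1 - tolerance):
            flagged.append(group)
    return flagged

counts = {"group_a": 700, "group_b": 250, "group_c": 50}
reference = {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1}

flagged = underrepresented_groups(counts, reference)
# group_c is 5% of training data against a 10% reference share
```

The control then requires a documented mitigation decision for each flagged group (resampling, reweighting, collecting more data, or an accepted and justified limitation), not just the detection step.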
AI System Lifecycle Controls
Controls governing how AI systems are built, deployed, and operated:
- Requirements specification. Documenting what the AI system must do, including functional requirements, performance thresholds, fairness criteria, and safety constraints.
- Design and development. Secure development practices for AI, including model selection, training procedures, and validation methodology.
- Testing and validation. Comprehensive testing including accuracy, resilience, fairness, and security evaluations.
- Deployment. Controlled deployment processes with monitoring and rollback capabilities.
- Monitoring. Ongoing monitoring for drift, degradation, emerging biases, and security vulnerabilities.
- Change management. Controlled processes for updating models, retraining, and modifying AI system configurations.
Transparency and Explainability Controls
- AI system transparency. Ensuring stakeholders understand when they are interacting with an AI system and what the system does.
- Explainability. Providing explanations of AI system decisions appropriate to the context and audience.
- Record-keeping. Maintaining records of AI system decisions and their basis for accountability and audit purposes.
The Certification Process
Stage 1 Audit: Documentation Review
The Stage 1 audit is a readiness assessment. The certification body reviews your AIMS documentation to determine whether you are ready for the full Stage 2 audit. They will examine:
- AIMS scope and policy. Is the scope clearly defined? Does the policy cover all required elements?
- Risk assessment and treatment. Have you completed an AI risk assessment? Is the Statement of Applicability complete?
- Documented procedures. Are processes documented for all required activities?
- Internal audit results. Have you conducted at least one internal audit?
- Management review records. Has top management reviewed the AIMS?
Stage 2 Audit: Implementation Assessment
The Stage 2 audit is the full certification audit. Auditors verify that your documented AIMS is actually implemented and operating effectively. This involves:
- Interviews. Conversations with top management, AI practitioners, data scientists, developers, and operational staff to verify awareness and competence.
- Evidence review. Examination of records, logs, reports, and artifacts that demonstrate the AIMS is functioning. This includes model documentation, risk assessment records, monitoring dashboards, incident reports, and training records.
- Process observation. Watching processes in action, such as model deployment procedures, change management workflows, or incident response execution.
- Control testing. Verifying that Annex A controls are implemented and effective.
Post-Certification: Surveillance and Recertification
After certification, the cycle continues:
- Surveillance audits. Annual audits (typically one to two days) to verify continued compliance. These are smaller in scope than the initial certification audit but cover a rotating sample of requirements.
- Recertification audit. Every three years, a full recertification audit is conducted. This is similar in scope to the original Stage 2 audit.
Timeline
A realistic timeline from decision to certification:
- Months 1 to 2: Gap assessment. Evaluate your current state against ISO 42001 requirements. Identify what exists, what needs modification, and what needs to be built from scratch.
- Months 2 to 4: Design and documentation. Write policies, define procedures, build risk assessment methodology, create templates and forms.
- Months 4 to 8: Implementation. Deploy the AIMS, conduct risk assessments, implement controls, train staff, begin monitoring.
- Months 8 to 9: Internal audit. Conduct a thorough internal audit covering all requirements.
- Month 9: Management review. Senior leadership reviews audit results and AIMS performance.
- Month 10: Stage 1 audit. Certification body reviews documentation readiness.
- Months 10 to 11: Address Stage 1 findings. Close any gaps identified.
- Months 11 to 12: Stage 2 audit. Full certification assessment.

Cost Considerations
ISO 42001 certification costs vary significantly based on several factors. Rather than quoting specific numbers that become outdated, here are the factors that drive cost:
Organization size. Larger organizations require more audit days, more documentation, and more training. Certification body fees are directly linked to audit duration, which is based on organization size and complexity.
Number of AI systems in scope. More AI systems mean more risk assessments, more documentation, more controls to implement, and more audit evidence to prepare. Scoping carefully to include only relevant AI systems can reduce costs.
Existing management system maturity. Organizations with ISO 27001 in place save significantly. The management system infrastructure, document control, internal audit program, and governance structures already exist. You are adding AI-specific content, not building from scratch.
Consulting support. Many organizations engage consultants for gap assessment, documentation development, and implementation support. Consultant rates vary widely based on expertise and geography. Some organizations handle everything internally with existing staff.
Certification body selection. Audit fees vary between certification bodies. Get quotes from multiple accredited bodies. Ensure they have auditors with AI competence, not just management system auditing experience.
Internal resource allocation. The largest cost is often the time your own staff spend on implementation. Budget for dedicated time from AI practitioners, compliance staff, and management. Half-hearted allocation of "spare time" is the most common reason certification projects stall.
Common Gaps Organizations Find During Preparation
Based on patterns across organizations preparing for ISO 42001, these are the most frequent gaps:
AI System Inventory Gaps
Many organizations do not have a complete inventory of their AI systems. Shadow AI, where teams deploy AI tools or models without central visibility, is common. Before you can govern AI systems, you need to know they exist. This includes third-party AI services, embedded AI features in SaaS tools, and experimental models running in development environments.
Risk Assessment Methodology
Organizations experienced with ISO 27001 risk assessment often struggle to adapt their methodology for AI risks. Information security risks (confidentiality, integrity, availability) are well-understood categories with established assessment techniques. AI risks (bias, fairness, transparency, safety, societal impact) require different evaluation criteria, different expertise, and different measurement approaches.
Data Governance Documentation
Many organizations have informal data practices that work in practice but are not documented to the level ISO 42001 requires. Data provenance, quality metrics, bias assessments, and lineage documentation are often incomplete or nonexistent, especially for training data that was assembled opportunistically.
Model Documentation
ISO 42001 expects comprehensive documentation for AI models in scope. This includes model cards or equivalent documentation covering intended use, limitations, training data characteristics, performance metrics, bias evaluations, and known failure modes. Many production models lack this level of documentation.
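A lightweight way to close this gap is to treat the model card as structured data with required fields, so incompleteness is machine-checkable. The structure and values below are illustrative; the field list mirrors the items named above, and the example model is invented.

```python
# Minimal model card sketch covering the documentation items listed above.
# Structure and values are illustrative, not a prescribed format.
model_card = {
    "model": "churn-predictor-v3",  # hypothetical model
    "intended_use": "Flag accounts at risk of cancellation for outreach",
    "out_of_scope": ["credit decisions", "pricing decisions"],
    "training_data": {
        "source": "CRM events, 2021-2024",
        "known_gaps": "sparse coverage of accounts under 6 months old",
    },
    "performance": {"auc": 0.87, "evaluated_on": "2024-Q1 holdout"},
    "bias_evaluation": {"metric": "equal opportunity difference",
                        "result": 0.03, "threshold": 0.05},
    "known_failure_modes": ["degrades after pricing changes"],
}

REQUIRED = {"intended_use", "training_data", "performance",
            "bias_evaluation", "known_failure_modes"}
missing = REQUIRED - model_card.keys()
assert not missing, f"model card incomplete: {missing}"
```

Enforcing the required-field check in CI for every production model is a cheap way to generate exactly the documentation evidence a Stage 2 auditor will sample.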
AI-Specific Incident Response
Organizations typically have incident response procedures for security incidents, but AI-specific incidents (biased outputs, hallucinations, unexpected behavior, model drift beyond acceptable thresholds) often fall outside existing incident classification and response procedures. You need documented processes for detecting, classifying, responding to, and learning from AI-specific incidents.
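A sketch of what AI-specific triage can look like: classify the incident type, route it to the AI response path, and escalate when individuals are affected. The incident types, routing names, and severity rule are assumptions standing in for whatever your own procedure defines.

```python
# Sketch of AI-specific incident triage. Incident types, routes, and the
# severity rule are illustrative placeholders for your own procedure.
AI_INCIDENT_TYPES = {"biased_output", "hallucination", "model_drift",
                     "unexpected_behavior", "data_leakage"}

def triage(incident_type: str, affects_individuals: bool) -> dict:
    """Classify an AI incident and route it to the right response path."""
    if incident_type not in AI_INCIDENT_TYPES:
        # Unknown types fall back to the existing security IR process
        return {"route": "security_ir", "severity": "unclassified"}
    severity = "high" if affects_individuals else "medium"
    return {
        "route": "ai_ir",
        "severity": severity,
        "requires_impact_review": affects_individuals,
    }

ticket = triage("biased_output", affects_individuals=True)
```

The design choice worth copying is the explicit fallback: anything your AI taxonomy does not recognize still lands in an existing process instead of going unhandled, and incidents affecting individuals automatically trigger the impact review the standard expects.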
Transparency and Explainability
Documenting how you provide transparency to users interacting with AI systems, and how you explain AI-driven decisions when required, is a gap for many organizations. This is not just about technical explainability (SHAP values, attention maps); it includes organizational processes for communicating AI use to stakeholders.
Building on Existing ISO 27001 Certification
If your organization already holds ISO 27001, here is a practical approach to adding ISO 42001:
Step 1: Conduct a mapping exercise. Go through ISO 42001 clause by clause and map each requirement to your existing ISO 27001 documentation. Identify where existing documents satisfy the requirement, where they need extension, and where entirely new documents are needed.
Step 2: Extend your risk assessment. Add AI-specific risk categories to your existing risk assessment methodology. You do not need to start over. Add new risk criteria (bias, fairness, transparency, safety) alongside your existing CIA criteria. Conduct an AI-focused risk assessment covering all in-scope AI systems.
Step 3: Build your AI system inventory. Create a register of all AI systems, linked to your existing asset inventory. For each system, document the AI type, data sources, decision scope, stakeholders affected, and risk classification.
Step 4: Develop AI-specific policies and procedures. Write the AI policy as an extension of your information security policy. Develop procedures for AI system lifecycle management, data governance, impact assessment, and AI-specific incident response.
Step 5: Implement Annex A controls. Work through the ISO 42001 Statement of Applicability. Many controls will be new since they cover AI-specific domains. Implement each applicable control and document the implementation.
Step 6: Train your team. Extend your existing awareness and competence training to cover AI governance. This includes AI practitioners who need to understand the AIMS requirements, and non-AI staff who need to understand the AI policy and their role in governance.
Step 7: Conduct an integrated internal audit. Audit both ISO 27001 and ISO 42001 in a single program. This ensures consistency and identifies integration issues.
Step 8: Management review. Include ISO 42001 in your regular management review agenda. Review AI risk assessment results, AI system performance data, and AI-specific incidents alongside information security topics.
This approach typically compresses the certification timeline to 4 to 6 months and significantly reduces costs compared to a standalone ISO 42001 implementation.
For organizations looking to strengthen their AI governance beyond certification, our AI security audit services provide hands-on assessment of AI system vulnerabilities and compliance gaps. If you are also working toward SOC 2 compliance for your AI systems, see our guide on SOC 2 for AI Systems for practical auditor-facing preparation advice. Organizations dealing with EU regulatory requirements should also review our EU AI Act compliance guide to understand how ISO 42001 supports regulatory compliance across jurisdictions.
BeyondScale Security Team
AI Security Engineers
AI Security Engineers at BeyondScale Technologies, an ISO 27001 certified AI consulting firm and AWS Partner. Specializing in enterprise AI agents, multi-agent systems, and cloud architecture.
