ISO 42001 AI governance readiness checklist (free)

Use this free ISO 42001 readiness checklist to assess how mature your AI Management System (AIMS) is across governance, risk, design, monitoring and supplier controls. Quickly see how ready your organisation is for ISO 42001 certification and where to focus next.

Objective

Gauge ISO 42001 AIMS maturity and identify high-value improvements in your AI governance.

Scoring

Score each item Yes / Partial / No; progress and readiness update automatically.

Output

Download a branded PDF with domain breakdown and next-step guidance.


AIMS Governance

Scope, roles/committees, policy set, objectives.

Clearly define where AI is used, what is covered by governance, and what is explicitly out of scope.
Examples:
  • A documented scope listing AI use cases (e.g. clinical decision support, chatbots, analytics).
  • Systems and models in scope (vendor AI, internally developed models, GenAI tools).
  • Geographic and regulatory boundaries (e.g. Australia-only deployment; GDPR-relevant use cases).
Demonstrate clear accountability for AI decisions, oversight, and escalation.
Examples:
  • Named AIMS owner, AI risk owner, and model/use-case owners.
  • Defined review bodies (e.g. AI governance committee, ethics review panel).
  • RACI covering design, approval, monitoring, incident handling and retirement.
Policies should guide safe, ethical and compliant AI use across the organisation.
Examples:
  • AI Acceptable Use Policy covering staff and contractor use of AI tools.
  • AI Risk Management or AI Governance Policy aligned to ISO 42001.
  • Transparency and disclosure policy for customers or end users.
Track whether AI is delivering value safely and within defined risk tolerance; a threshold-check sketch follows the examples.
Examples:
  • Defined KPIs such as model accuracy, false positives, bias indicators or incident rates.
  • Targets and thresholds approved by leadership.
  • Periodic review of AI performance and risk metrics.
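
To make thresholds concrete, here is a minimal sketch in Python of checking reported KPIs against leadership-approved limits. The metric names and numeric limits (e.g. an accuracy floor of 0.92) are illustrative assumptions, not values prescribed by ISO 42001.

```python
# Minimal sketch: compare reported AI KPIs against approved thresholds.
# Metric names and limits are illustrative assumptions.
THRESHOLDS = {
    "accuracy": ("min", 0.92),            # model accuracy floor
    "false_positive_rate": ("max", 0.05),
    "bias_gap": ("max", 0.03),            # max allowed gap between cohorts
    "incidents_per_month": ("max", 2),
}

def evaluate_kpis(metrics: dict) -> list[str]:
    """Return human-readable threshold breaches for leadership review."""
    breaches = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            breaches.append(f"{name}: no data reported")
        elif kind == "min" and value < limit:
            breaches.append(f"{name}: {value} below floor {limit}")
        elif kind == "max" and value > limit:
            breaches.append(f"{name}: {value} above ceiling {limit}")
    return breaches

print(evaluate_kpis({"accuracy": 0.90, "false_positive_rate": 0.02,
                     "bias_gap": 0.04, "incidents_per_month": 1}))
```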

Risk & Impact Assessment

Inventory, AIRA method, data ethics & provenance.

Maintain a single source of truth for all AI systems and use cases; an inventory-record sketch follows the examples.
Examples:
  • Inventory of AI use cases with purpose, business owner and technical owner.
  • Associated datasets, model types, and deployment environments.
  • Initial risk classification (low/medium/high impact).
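
As one way to keep that single source of truth machine-readable, here is a sketch of an inventory record in Python. The field names and the AIUseCase/RiskTier types are assumptions for illustration, not a schema mandated by ISO 42001.

```python
# Illustrative sketch of an AI inventory record.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str                 # e.g. "Support chatbot"
    purpose: str
    business_owner: str
    technical_owner: str
    risk_tier: RiskTier       # initial risk classification
    datasets: list[str] = field(default_factory=list)
    model_types: list[str] = field(default_factory=list)
    environments: list[str] = field(default_factory=list)

inventory = [
    AIUseCase(
        name="Support chatbot",
        purpose="Answer routine customer queries",
        business_owner="Head of Customer Service",
        technical_owner="Platform Engineering",
        risk_tier=RiskTier.MEDIUM,
        datasets=["support-tickets-2024"],
        model_types=["hosted LLM"],
        environments=["production"],
    ),
]
print(len(inventory), "AI use case(s) registered")
```
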
Use a consistent method to assess AI risks and impacts before deployment and when changes occur; a scoring sketch follows the examples.
Examples:
  • Documented AI Impact/Risk Assessment (AIRA) template.
  • Assessment criteria covering safety, bias, legal, ethical and reputational risks.
  • Completed AIRA records for each material AI use case.
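
One lightweight way to apply consistent criteria is a likelihood-by-impact score per criterion. The sketch below is an assumption about how such a method could look; the 1-5 scale and the "high" cut-off of 15 are illustrative choices, not the ISO 42001 method.

```python
# Sketch of AIRA scoring: each criterion rated (likelihood, impact) on 1-5.
CRITERIA = ["safety", "bias", "legal", "ethical", "reputational"]

def aira_score(ratings: dict[str, tuple[int, int]]) -> dict:
    missing = set(CRITERIA) - ratings.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    scores = {c: l * i for c, (l, i) in ratings.items()}
    worst = max(scores.values())
    tier = "high" if worst >= 15 else "medium" if worst >= 8 else "low"
    return {"scores": scores, "overall_tier": tier}

print(aira_score({
    "safety": (2, 4), "bias": (3, 3), "legal": (1, 5),
    "ethical": (2, 2), "reputational": (3, 4),
}))  # worst score is reputational (12) -> overall "medium"
```
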
Ensure training and input data is appropriate, lawful and ethically sourced.
Examples:
  • Bias and representativeness assessment for training datasets.
  • Documented data provenance and sourcing decisions.
  • Confirmation of consent, licensing or lawful basis for data use.

Design, Controls & Oversight

Guardrails, human-in-the-loop, evaluation/testing.

Build preventative controls into AI systems to reduce harm and misuse; a PII-masking sketch follows the examples.
Examples:
  • Prompt filtering, output moderation, or content safety controls.
  • PII detection, masking or minimisation in prompts and outputs.
  • Security controls protecting models, APIs and inference endpoints.
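
To illustrate PII minimisation in prompts, here is a deliberately simple regex-based sketch. The patterns are naive assumptions; production systems typically rely on dedicated PII-detection tooling rather than hand-rolled regexes.

```python
# Sketch of pre-prompt PII masking; patterns are simplistic illustrations.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before sending a prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or +61 2 9999 9999."))
```
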
Humans must remain accountable for high-risk or impactful AI outcomes; a threshold-routing sketch follows the examples.
Examples:
  • Defined thresholds requiring human review before action.
  • Clear escalation paths for unsafe or unexpected outputs.
  • Ability to disable, pause or override AI systems when required.
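
A minimal sketch of threshold-based routing: outputs in sensitive categories, or below a confidence floor, go to a human queue rather than being actioned automatically. The 0.80 threshold and the category names are illustrative assumptions.

```python
# Sketch of human-in-the-loop routing by category and confidence.
AUTO_APPROVE_CONFIDENCE = 0.80
ALWAYS_REVIEW = {"credit_decision", "medical_advice"}

def route(output: dict) -> str:
    """Return 'human_review' or 'auto_action' for a model output."""
    if output["category"] in ALWAYS_REVIEW:
        return "human_review"
    if output["confidence"] < AUTO_APPROVE_CONFIDENCE:
        return "human_review"
    return "auto_action"

print(route({"category": "faq_answer", "confidence": 0.95}))       # auto_action
print(route({"category": "credit_decision", "confidence": 0.99}))  # human_review
```
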
Test AI systems before release and after significant changes; a release-gate sketch follows the examples.
Examples:
  • Pre-deployment testing for accuracy, bias and robustness.
  • Adversarial or red-teaming exercises for misuse scenarios.
  • Documented test results and go/no-go decisions.
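
Go/no-go decisions can be encoded as executable release gates. In this sketch, evaluate_candidate() is a hypothetical stand-in for your evaluation harness, and every threshold is an illustrative assumption.

```python
# Sketch of pre-deployment release gates; thresholds are illustrative.
def evaluate_candidate() -> dict:
    # Placeholder: in practice, run the candidate model over a held-out
    # evaluation set (and red-team scenarios) and return measured metrics.
    return {"accuracy": 0.94, "bias_gap": 0.02, "jailbreak_rate": 0.01}

def test_accuracy_floor():
    assert evaluate_candidate()["accuracy"] >= 0.92

def test_bias_gap_ceiling():
    assert evaluate_candidate()["bias_gap"] <= 0.03

def test_red_team_jailbreak_rate():
    assert evaluate_candidate()["jailbreak_rate"] <= 0.02

if __name__ == "__main__":
    test_accuracy_floor()
    test_bias_gap_ceiling()
    test_red_team_jailbreak_rate()
    print("Go: all release gates passed.")
```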

Operations & Monitoring

Monitoring, incident handling, records & logs.

Monitor AI behaviour in production to detect degradation or misuse; a drift-check sketch follows the examples.
Examples:
  • Monitoring for model drift, performance drops or abnormal outputs.
  • Logging of prompts, outputs and key decisions (within privacy limits).
  • Alerts for misuse, abuse patterns or policy violations.
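
One common drift signal is the Population Stability Index (PSI) between training-time and production score distributions. This sketch is self-contained; the 10-bin layout and the 0.2 alert threshold are widely used rules of thumb, applied here as assumptions.

```python
# Sketch of a PSI drift check between baseline and production scores.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two score samples."""
    lo, hi = min(expected), max(expected)

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    return sum((a - e) * math.log(a / e)
               for e, a in zip(shares(expected), shares(actual)))

baseline = [i / 100 for i in range(100)]                     # training scores
production = [min(i / 100 + 0.15, 1.0) for i in range(100)]  # shifted scores
value = psi(baseline, production)
print(f"PSI={value:.3f}:", "ALERT - investigate drift" if value > 0.2 else "OK")
```
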
Treat AI-related issues as formal incidents with learning outcomes; an incident-category sketch follows the examples.
Examples:
  • AI incident categories included in the incident response plan.
  • Documented handling of bias, hallucinations or harmful outputs.
  • Post-incident reviews with corrective actions tracked.
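
As a sketch of how AI incident categories might be encoded in an incident response plan, the categories and default severities below are illustrative assumptions, not a standard taxonomy.

```python
# Sketch of AI incident categories with assumed default severities.
from enum import Enum

class AIIncident(Enum):
    HARMFUL_OUTPUT = "harmful or unsafe output reached a user"
    BIAS = "systematically skewed outcomes for a cohort"
    HALLUCINATION = "confidently wrong output relied upon"
    DATA_LEAK = "personal or confidential data exposed via the model"
    MISUSE = "system used outside its approved purpose"

DEFAULT_SEVERITY = {
    AIIncident.HARMFUL_OUTPUT: "P1",
    AIIncident.DATA_LEAK: "P1",
    AIIncident.BIAS: "P2",
    AIIncident.MISUSE: "P2",
    AIIncident.HALLUCINATION: "P3",
}

incident = AIIncident.HALLUCINATION
print(f"{incident.name}: {incident.value} -> default {DEFAULT_SEVERITY[incident]}")
```
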
Maintain traceability for how AI systems were designed and operated; a decision-log sketch follows the examples.
Examples:
  • Version history of models, prompts, configurations and datasets.
  • Decision logs for approvals, changes and risk acceptances.
  • Retention aligned to regulatory and audit requirements.
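
Decision logs are straightforward to keep append-only. This sketch writes JSON Lines records; the file path and field names are illustrative assumptions, not a mandated record format.

```python
# Sketch of an append-only AIMS decision log in JSON Lines format.
import datetime
import json

LOG_PATH = "aims_decision_log.jsonl"  # illustrative path

def log_decision(system: str, decision: str, approver: str, rationale: str) -> None:
    """Append one timestamped governance decision to the log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision": decision,   # e.g. "approve", "risk-accept", "retire"
        "approver": approver,
        "rationale": rationale,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("support-chatbot", "approve", "AI Governance Committee",
             "AIRA completed; residual risk within tolerance.")
```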

Suppliers & Transparency

Third-party AI due diligence and user transparency.

Manage risk from external AI providers and embedded models.
Examples:
  • Vendor due diligence covering security, privacy, bias and transparency.
  • Review of model cards, SOC/ISO reports or assurance statements.
  • Contractual controls: usage limits, data handling, breach notification.
Users should understand when AI is used and how to seek help or opt out.
Examples:
  • User-facing disclosures explaining AI purpose and limitations.
  • Clear contact points for human review or complaints.
  • Documented opt-out or alternative pathways where feasible.

Evidence & Improvement

Evidence management, audits and reviews.

Evidence should be centralised, controlled and audit-ready.
Examples:
  • AIMS evidence library in SharePoint with version control.
  • Logs and monitoring evidence retained in Sentinel or equivalent.
  • Purview labels or retention policies applied to AI records.
Regular assurance ensures AI governance controls are operating as intended.
Examples:
  • Internal AI governance or AIMS audits aligned to ISO 42001.
  • Findings logged with corrective actions, owners and due dates.
  • Evidence of verification and closure.
Leadership should regularly review AI risks, outcomes and improvement opportunities.
Examples:
  • Management review minutes covering AI risks, incidents and KPIs.
  • Decisions on risk acceptance, resourcing or policy changes.
  • Tracked actions arising from AIMS management reviews.
