ISO 42001 (AI Governance): A Simple, Business-Ready Playbook

10/17/2025 · Compliance365

ISO 42001 is the world’s first AI management standard. Think of it as the AI version of ISO 27001 — focused not on technology, but on trust, control, and responsible use.

It helps organisations answer the questions that boards, customers, and regulators now ask:

“What AI tools do we use?” “Who is responsible?” “Where does our data go?” “What could go wrong?” “How do we stay in control?”

This article explains how any organisation can become ISO 42001-ready in weeks using Microsoft 365, SharePoint, and simple workflows — without new tools or extra platforms.


1️⃣ Build a simple, living register of all AI tools

Most organisations don’t actually know:

  • What AI is being used
  • Who approved it
  • What data it touches
  • How risky it is

ISO 42001 starts by fixing that with an AI Register: your single source of truth.

| Field | What this means |
| --- | --- |
| AI tool / model name | The name of the AI tool, model, or feature in use. |
| Business purpose | What it's used for in the business (e.g. support, summarising, triage). |
| Type of data involved | Whether it touches customer data, internal docs, personal data, etc. |
| Business owner / accountable person | Who is responsible for how this AI is used and kept under control. |
| Key risks | Main things that could go wrong (e.g. data leakage, bias, wrong advice). |
| Controls / guardrails | How those risks are managed (approvals, access limits, training, policies). |
| Date of last review | When this AI use case was last checked and signed off. |

In practice: A basic SharePoint list or Power Apps form that supports versioning, is easy for teams to update, and can be exported monthly for audit evidence.
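
If you'd rather script the setup than click through SharePoint, the register maps directly onto a list created via the Microsoft Graph API. A minimal TypeScript sketch, assuming you already have a site ID and an access token with Sites.ReadWrite.All (both placeholders below); the built-in Title column holds the tool name:

```typescript
// Sketch: create an "AI Register" SharePoint list via Microsoft Graph.
// SITE_ID and TOKEN are placeholders you supply; the token needs
// Sites.ReadWrite.All. The default Title column holds the tool name.

const SITE_ID = "<your-site-id>";
const TOKEN = "<your-access-token>";

async function createAiRegister(): Promise<void> {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/sites/${SITE_ID}/lists`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        displayName: "AI Register",
        columns: [
          { name: "BusinessPurpose", text: {} },
          { name: "DataInvolved", text: {} },
          { name: "BusinessOwner", text: {} },
          { name: "KeyRisks", text: {} },
          { name: "Controls", text: {} },
          { name: "LastReview", dateTime: {} },
        ],
        list: { template: "genericList" },
      }),
    }
  );
  if (!res.ok) throw new Error(`Graph call failed: ${res.status}`);
  console.log("AI Register created:", (await res.json()).id);
}

createAiRegister().catch(console.error);
```

Run it once per environment; after that, teams maintain the list through the normal SharePoint UI, and version history accumulates automatically.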

💡 Tip: Keep the register simple. It’s a governance tool, not a technical inventory.

2️⃣ Use a clear, business-friendly AI risk process

ISO 42001 does not require complex risk mathematics or data science. It expects you to:

  • Understand where AI could cause harm or confusion
  • Decide what safeguards you will put in place
  • Review those decisions regularly

A simple approach that works across most organisations uses four risk areas:

| Risk Area | What This Means | Example Controls |
| --- | --- | --- |
| Data Exposure | Could information be leaked, misused, or sent somewhere unintended? | Approved data sources, access controls, data-loss prevention rules |
| Fairness & Bias | Could the AI treat people unfairly or reinforce bias? | Human review, diverse test scenarios, clear escalation rules |
| Accuracy & Reliability | Could the AI produce misleading, incomplete, or confusing results? | Validation checks, human approval, fallback to manual processes |
| Legal & Ethical | Is this use of AI consistent with law, policy, and company values? | Privacy review, acceptable use policy, clear limits on use |

Outcome: A practical risk record for every AI tool, understandable by any manager and easy to review quarterly.
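
If you export those risk records for review or reporting, a small, uniform shape keeps them comparable across tools. One possible TypeScript shape; the field names and levels are our own illustration, not prescribed by ISO 42001:

```typescript
// Illustrative shape for a per-tool risk record covering the four
// risk areas above. Field names are a convention of this sketch.

type RiskLevel = "low" | "medium" | "high";

interface RiskEntry {
  level: RiskLevel;   // current assessed level
  controls: string[]; // safeguards in place for this area
  notes?: string;     // optional context for reviewers
}

interface AiRiskRecord {
  toolName: string;
  dataExposure: RiskEntry;
  fairnessAndBias: RiskEntry;
  accuracyAndReliability: RiskEntry;
  legalAndEthical: RiskEntry;
  lastReviewed: string; // ISO 8601 date of the last quarterly review
}

// Example record for a hypothetical customer-support summariser.
const example: AiRiskRecord = {
  toolName: "Support Summariser",
  dataExposure: {
    level: "medium",
    controls: ["Approved data sources only", "DLP rules on exports"],
  },
  fairnessAndBias: { level: "low", controls: ["Human review of outputs"] },
  accuracyAndReliability: {
    level: "medium",
    controls: ["Spot-check 10% of summaries", "Manual fallback process"],
  },
  legalAndEthical: { level: "low", controls: ["Privacy review completed"] },
  lastReviewed: "2025-10-01",
};
```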

3️⃣ Make sure humans stay in control

ISO 42001 expects you to show where AI is allowed to act on its own and where humans must approve decisions. The principle is simple: no unsupervised AI for high-impact decisions.

For each AI use case, decide:

  • Can this run fully automated?
  • Does a human need to review and approve the output?
  • How is that approval recorded?

| Scenario | Human Oversight? | Reason |
| --- | --- | --- |
| AI drafts internal marketing copy | Optional | Low impact and easily corrected |
| AI responds directly to customers | Yes | Reputation, accuracy, and tone risk |
| AI recommends financial or safety-related decisions | Yes | High business, legal, and human impact |

How to track approvals: Use Microsoft Teams Approvals or Power Automate approval flows and store results in SharePoint. This automatically creates audit-ready evidence.
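
As a rough sketch of what those decisions and records might look like in code, here is one way to express the "no unsupervised AI for high-impact decisions" rule, plus the approval record an auditor would expect to see. The impact scale and field names are illustrative assumptions, not part of the standard:

```typescript
// Sketch: apply the "no unsupervised AI for high-impact decisions"
// rule and capture each approval as an audit record. The impact
// scale and record fields are illustrative assumptions.

type Impact = "low" | "medium" | "high";

interface AiUseCase {
  name: string;
  impact: Impact;          // business/legal/human impact if it goes wrong
  customerFacing: boolean; // does the output reach customers directly?
}

// High-impact or customer-facing use cases always need a human approver.
function requiresHumanApproval(useCase: AiUseCase): boolean {
  return useCase.impact !== "low" || useCase.customerFacing;
}

interface ApprovalRecord {
  useCase: string;
  approvedBy: string; // who approved
  decision: "approved" | "rejected";
  reason: string;     // why; this is the part auditors care about
  timestamp: string;  // ISO 8601
}

const draftCopy: AiUseCase = {
  name: "AI drafts internal marketing copy",
  impact: "low",
  customerFacing: false,
};

console.log(requiresHumanApproval(draftCopy)); // false: review is optional

const record: ApprovalRecord = {
  useCase: "AI responds directly to customers",
  approvedBy: "j.smith@contoso.com",
  decision: "approved",
  reason: "Tone and accuracy guardrails verified; escalation path tested",
  timestamp: new Date().toISOString(),
};
```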

🧭 Goal: Clear records showing who approved what — and why.

4️⃣ Monitor AI behaviour and issues

AI governance is ongoing, not a one-off exercise. ISO 42001 expects you to monitor how AI is used and how it behaves over time.

A simple monitoring plan might include:

  • Reviewing usage logs for unusual patterns
  • Collecting user feedback on AI behaviour
  • Investigating unexpected or harmful outputs
  • Routing serious issues through your existing incident process
  • Maintaining a “kill switch” so risky tools can be switched off quickly

Useful KPIs to track:
  • Number of approved AI tools in use
  • Percentage of tools with up-to-date risk assessments
  • Number of AI-related incidents logged
  • Average time to resolve AI-related issues
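
All four KPIs fall straight out of the register and incident log if each row carries its status. A TypeScript sketch of the calculation, assuming row shapes that mirror the register fields from step 1:

```typescript
// Sketch: derive the four monitoring KPIs from register rows and an
// incident log. The row and incident shapes are assumptions that
// mirror the AI Register fields described in step 1.

interface RegisterRow {
  toolName: string;
  approved: boolean;
  lastRiskReview: string; // ISO 8601 date
}

interface Incident {
  toolName: string;
  openedAt: string;  // ISO 8601
  closedAt?: string; // undefined while still open
}

const REVIEW_MAX_AGE_DAYS = 90; // "up to date" = reviewed this quarter

function daysSince(iso: string): number {
  return (Date.now() - new Date(iso).getTime()) / 86_400_000;
}

function kpis(rows: RegisterRow[], incidents: Incident[]) {
  const approvedTools = rows.filter((r) => r.approved).length;
  const upToDate = rows.filter(
    (r) => daysSince(r.lastRiskReview) <= REVIEW_MAX_AGE_DAYS
  ).length;
  const resolved = incidents.filter((i) => i.closedAt);
  const avgResolutionDays =
    resolved.reduce(
      (sum, i) =>
        sum +
        (new Date(i.closedAt!).getTime() - new Date(i.openedAt).getTime()),
      0
    ) / (resolved.length || 1) / 86_400_000;

  return {
    approvedTools,
    pctWithCurrentRisk: rows.length ? (100 * upToDate) / rows.length : 0,
    incidentsLogged: incidents.length,
    avgResolutionDays,
  };
}
```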

5️⃣ Automate evidence using Microsoft 365

ISO 42001 does not require you to create endless manual documents. Most evidence can be generated automatically from tools you already use.

  • SharePoint: Version history shows when registers, policies, and assessments were updated.
  • Teams Approvals: Proves human oversight for key decisions.
  • Power Automate: Schedules monthly exports of the AI Register, approvals, and logs into evidence libraries.
  • Entra ID: Provides access logs and role reviews to support “least privilege”.
  • Defender for Cloud Apps: Can alert on unapproved AI tools or risky SaaS usage.

💡 Result: Over 80% of required audit evidence can be automated using your existing Microsoft 365 environment.
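
As an illustration of the Power Automate bullet above, the same monthly snapshot can also be scripted directly against Microsoft Graph: read the register rows, then write them into an evidence library. A sketch only; the site, list, and token values are placeholders, and the Evidence folder path is an assumption:

```typescript
// Sketch: snapshot the AI Register into a SharePoint document library
// as monthly audit evidence. IDs and token are placeholders; run this
// from a scheduled job.

const SITE_ID = "<site-id>";
const LIST_ID = "<ai-register-list-id>";
const TOKEN = "<access-token>";
const headers = { Authorization: `Bearer ${TOKEN}` };

async function exportRegisterSnapshot(): Promise<void> {
  // 1. Read all register rows, including their field values.
  const items = await fetch(
    `https://graph.microsoft.com/v1.0/sites/${SITE_ID}/lists/${LIST_ID}/items?expand=fields`,
    { headers }
  ).then((r) => r.json());

  // 2. Upload the snapshot into an "Evidence" folder in the site's
  //    default document library, named by month.
  const month = new Date().toISOString().slice(0, 7); // e.g. "2025-10"
  const upload = await fetch(
    `https://graph.microsoft.com/v1.0/sites/${SITE_ID}/drive/root:/Evidence/ai-register-${month}.json:/content`,
    {
      method: "PUT",
      headers: { ...headers, "Content-Type": "application/json" },
      body: JSON.stringify(items.value, null, 2),
    }
  );
  if (!upload.ok) throw new Error(`Upload failed: ${upload.status}`);
}

exportRegisterSnapshot().catch(console.error);
```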

6️⃣ Be transparent — internally and externally

Transparency builds trust and reduces confusion for staff, customers, and regulators.

Examples of good transparency:

  • Plain-language “AI Use Statements” describing what each tool does and where it is used
  • Intranet pages that explain how AI fits into business processes
  • Short training modules on safe and responsible AI use
  • Public contact details or feedback channels for high-impact AI systems

💬 Reminder: Transparency doesn't require exposing the code. It's about clarity of purpose, control, and accountability.

7️⃣ A realistic 12-week roadmap to ISO 42001 readiness

You don’t need a 12-month program to get started. Many organisations can become ISO 42001-ready in around three months.

Example roadmap:
  • Weeks 1–2: Define AI policy, roles, and an AI governance group.
  • Weeks 3–6: Build the AI Register and introduce a simple AI risk assessment process.
  • Weeks 7–10: Implement human oversight, monitoring, and automated evidence capture.
  • Weeks 11–12: Conduct a readiness review, close gaps, and finalise key AIMS documentation.

Most of the work involves organising information, clarifying responsibilities, and using existing tools more effectively — not buying new platforms.


Why this matters

  • Board confidence: Demonstrates that AI is being managed, not left to chance.
  • Customer trust: Helps you answer AI-related questions in RFPs and due diligence.
  • Regulatory readiness: Positions you for future AI regulation and government expectations.
  • Operational clarity: Reduces confusion, duplication, and AI “shadow IT”.

🎯 Bottom line: ISO 42001 isn't a technical audit; it's a trust framework. If your organisation uses AI today, this is the simplest way to stay in control and show responsible governance to customers, regulators, and your own team.

Next steps

If you want help standing up an AI governance framework or becoming ISO 42001-ready, get in touch.

Found this useful? Get the ISO/Privacy/AI readiness checklists.
