News, Tech corner - 23 March 2026

Choosing a development partner in the AI era: Why ISO 42001 matters


Most organisations buying AI development treat certifications as hygiene factors. ISO 27001 for security. ISO 9001 for process maturity. Check the boxes, move on to the technical evaluation.

AI changes what the boxes need to contain.

Traditional software is deterministic. Given the same input, it produces the same output. You can test it, version it, audit it against a known specification. AI systems are not like this. They respond non-deterministically, and their outputs shift with data, context, and user behaviour. The risks they introduce (bias, hallucination, opaque decision-making) do not map cleanly onto existing quality or security frameworks.

ISO/IEC 42001 was written for this gap. It is the first international standard for AI Management Systems, and at Hotovo, we pursued certification because we build AI-powered products for clients who operate in regulated environments. For those clients, "we follow best practices" is not a governance position. They need to see structured controls, documented risk treatment, and traceable decision-making across the full AI lifecycle.

What the certification covers

We already held ISO 9001 and ISO 27001. Adding 42001 extended our management system into four areas specific to AI:

Lifecycle governance. Design, validation, deployment, and monitoring of AI components follow defined processes with clear approval gates. This matters because AI systems do not stay static after release. Models drift. Usage patterns shift. Controls need to account for the full operational life of the system, not just the build phase.

AI-specific risk management. We identify and treat risks that do not appear in traditional security or quality assessments: training data bias, output reliability under edge cases, unintended behavioural patterns, and the compounding unpredictability of agentic workflows.

Traceability. Every AI-related design decision, risk assessment, and validation outcome is documented and attributable. When a client asks why the system behaves a certain way, there is an evidence trail.

Post-deployment evaluation. AI products are reviewed on a defined cadence after go-live. Performance, risk posture, and alignment with original design intent are reassessed against operational reality.

A bigger model is not the better solution.
Choosing the right model means balancing performance, cost, security, and compliance.
The same goes for providers.

Where this shows up: co-development partnerships

The clearest test of any governance framework is a complex engagement. We recently co-developed an internal AI platform with the technology leadership of a major international professional services firm. The platform supports several business functions and operates across multiple jurisdictions.

What made this engagement different: the client’s CTO held an ISO 42001 Lead Implementer certification. The governance conversation was not one-directional. Both sides understood the standard at an operational level. Risk treatment decisions, data handling protocols, and monitoring requirements were negotiated between peers (not imposed by one party on the other).

The outcome was an AI system where governance was embedded in the architecture from sprint one. Not a compliance layer added before go-live. That distinction is important. When governance is structural, it survives contact with production. When it is cosmetic, it erodes the moment operational pressure builds.

What to look for in a development partner

Certifications are a signal, not a guarantee. When evaluating an AI development partner, the certification tells you the management system exists. What you need to assess is whether it operates.

Three ISO certifications together (9001, 27001, and 42001) create a foundation that covers process quality, information security, and AI-specific governance. For clients in regulated sectors, this combination provides compliance readiness for frameworks like the EU AI Act, structured risk treatment for AI components, and documented evidence that responsible development practices are in place and auditable.

But the real test is the working relationship. Ask how they handle a risk finding at sprint review. Ask where AI governance decisions are documented and who owns them. Ask what happens when a model behaves unexpectedly in production. The answers will tell you more than the certificate.

“Is your data protected? Can you explain and review your AI decisions? Do you know your monthly AI costs?”
“Do your models and providers fit your business? Do you know who owns the outcome?”
“AI without losing control. That’s how we build at Hotovo.”

Reach out

If you are scoping an AI-powered product or evaluating development partners for a regulated environment, we are happy to talk through how our approach works in practice. Contact our team at Hotovo.

Author
Viktor Hanko

As Co-CEO, I bring together deep technical expertise and strategic vision to drive business growth. I enjoy solving problems through smart architecture, data, and a bit of math. Outside of work, you’ll probably find me on a bike, at the gym, or just tackling something new — because I don’t sit still for long.
