Responsible AI: 2025 from principle to practice


There has been much conversation about Responsible AI: what it is, how responsibility is determined and, most importantly, why responsibility even matters.

Answers to these questions have been swirling around the globe, and in 2025 they are about to coalesce.

At its core, responsibility means being accountable for one’s actions, fulfilling obligations, and reliably meeting expectations: an unspoken contract that protects the trust others place in us.

In short, words change meaning when the systems around them change. "Responsibility" is one of them.

2025 marks the year that three regulatory instruments come into effect, each set to shift not only how organisations and agencies understand the word “responsibility” but also how they translate the conversations they’re having around Responsible AI into practice.

These frameworks, while developed separately, closely align in their values. They reshape how organisations govern technology and define how AI should be designed, deployed, and maintained.

At DPTR, we see this moment as a shift in what it means to deliver. Responsible AI is a design challenge: one that shows up in how systems behave, how decisions are made, how accountability is embedded, and ultimately how that interconnectedness shapes the clarity of decisions made across the lifecycle of an organisation’s data and technology.

In practical terms, three frameworks are shaping this shift:

A Structural Shift

  • EU AI Act
    Enacted in May 2024, with key obligations from February 2025. This Act enforces a tiered approach to risk and sets out specific obligations by category.
  • ISO/IEC 42001
    A formal standard for AI governance published in late 2023. Certification schemes are now available through national accreditation bodies.
  • VAISS (Australia)
    Australia’s Voluntary AI Safety Standard, now surfacing in tenders and flagged for mandatory implementation in high-impact areas such as health, education, and finance.

Each framework asks the same core question: not whether AI is in use, but whether its use can be trusted.

Together, they create a single shared landscape that directly addresses the theme of ‘responsibility’.

But for those of us who aren’t compliance experts, what do these three frameworks actually mean in practice, and how do we make sure we not only understand them but also know what obligations our organisations need to meet?

Understanding the Frameworks

(1) EU AI Act

The EU AI Act classifies AI systems by risk into four tiers: unacceptable, high, limited, and minimal. Obligations increase in line with potential harm; enforcement has already begun, with the more complex requirements landing in 2026.

Tier          | Examples                                               | Requirements
Unacceptable  | Social scoring, emotion detection in employment        | Prohibited from February 2025
High          | Credit scoring, medical triage, public infrastructure  | Requires rights assessments, documentation, and human oversight
Limited       | Chatbots, AI-generated media                           | Requires user disclosure
Minimal       | Filters, standard automation tools                     | No specific obligations

In practice this could look like:

Unacceptable Risk:

Emotion Recognition in Hiring: Creating or leveraging AI tools that claim to assess a candidate's emotional state or personality during video interviews to support hiring decisions.

High Risk:

AI Credit Scoring Systems: Creating or leveraging AI systems used by banks or fintech companies to assess loan eligibility, particularly where decisions lack transparency.

Limited Risk:

AI Chatbots: Using AI customer service bots that simulate human interaction but don’t make consequential decisions.

Minimal Risk:

Automation Tools in Word Processing: Using AI tools for autocomplete, grammar correction, or formatting suggestions.

And from August 2025, general-purpose models must include training transparency and user documentation. Full high-risk requirements follow in 2026, but product and procurement timelines require action well in advance.

The design implication is clear. Products that integrate general-purpose models, even those accessed through vendor APIs, will need to disclose how those systems were trained, what data was used, and what limitations exist. If you wait until 2026 it will be too late: you’ll be redesigning what’s already in use.
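To make this concrete, here is a minimal sketch of how an internal AI inventory might record each system against the Act’s four risk tiers and the headline obligations summarised in the table above. The tier names and obligation text mirror that table; the class and field names are our own illustrative choices, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited from February 2025
    HIGH = "high"                   # rights assessments, documentation, human oversight
    LIMITED = "limited"             # user disclosure required
    MINIMAL = "minimal"             # no specific obligations


# Headline obligations per tier, summarised from the table above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Prohibited from February 2025"],
    RiskTier.HIGH: ["Rights assessment", "Technical documentation", "Human oversight"],
    RiskTier.LIMITED: ["Disclose AI use to users"],
    RiskTier.MINIMAL: [],
}


@dataclass
class AISystemRecord:
    """One entry in an organisation's AI inventory (illustrative structure only)."""
    name: str
    purpose: str
    tier: RiskTier

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.tier]


# Example usage: a customer-service chatbot lands in the limited tier.
chatbot = AISystemRecord(
    name="Support chatbot",
    purpose="Simulates human interaction, no consequential decisions",
    tier=RiskTier.LIMITED,
)
print(chatbot.obligations())  # ['Disclose AI use to users']
```

An inventory held in a structure like this makes the later readiness steps (classification, traceability, localisation) far easier to automate.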

The second framework in this coalescence is ISO/IEC 42001. For those already across compliance frameworks like ISO 27001 or ISO 9001, it will feel familiar, but it is focused on the unique challenges of AI.

It introduces the AI Management System (AIMS), a governance structure for organisations that use, build, or integrate AI, and it works alongside existing ISO frameworks.

It offers a clear and auditable path for demonstrating responsible AI governance.

(2) ISO/IEC 42001

This standard sets out a framework for managing AI systems within existing governance structures and includes:

  • Strategic alignment of AI use with organisational objectives
  • Defined responsibilities across the AI lifecycle
  • Governance over data sourcing, model development, validation and retirement
  • Incident management and ongoing improvement cycles

In practice, what this actually means is having clearly documented roles for who signs off on training data, how model risks are evaluated before deployment, and how incidents are logged, escalated, and addressed. In short, ISO/IEC 42001 complements existing compliance certifications, with a direct focus on AI.
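As a rough illustration of what “logged, escalated, and addressed” might look like in code, here is a minimal sketch of an incident record with a simple escalation rule. The fields, severity scale, and threshold are assumptions made for illustration; ISO/IEC 42001 does not prescribe a specific schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIIncident:
    """Illustrative incident record for an AI management system (AIMS)."""
    system_name: str
    description: str
    severity: int                      # 1 (minor) to 5 (critical); assumed scale
    reported_by: str
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    escalated: bool = False
    actions_taken: list[str] = field(default_factory=list)

    def escalate_if_needed(self, threshold: int = 3) -> None:
        """Escalate any incident at or above the assumed severity threshold."""
        if self.severity >= threshold:
            self.escalated = True
            self.actions_taken.append("Escalated to AI governance lead")


# Example: a model producing biased outputs is logged and escalated.
incident = AIIncident(
    system_name="credit-scoring-v2",
    description="Disparate approval rates observed across age groups",
    severity=4,
    reported_by="model-validation-team",
)
incident.escalate_if_needed()
print(incident.escalated)  # True
```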

The third framework exists to operationalise Australia’s national AI ethics principles. VAISS outlines ten practical guardrails that help organisations embed safety, transparency, and accountability into AI deployment, especially in high-risk domains.

Having been designed for implementation, it includes clear expectations such as documenting the full AI supply chain and model lineage, ensuring mechanisms for human oversight in automated decision-making, and treating system faults and near-misses as critical learning events logged through structured improvement loops.

(3) VAISS (Australia)

The Voluntary AI Safety Standard translates national ethical principles into ten defined guard-rails. These include items like:

  • Documenting AI supply chains and model lineage
  • Ensuring paths for human oversight in automated decision-making
  • Managing faults and near-misses through logged improvement loops

As it currently stands, VAISS is already in use across several sectors and is under review for mandatory application in high-impact environments; it is also starting to appear in procurement language.

In practice, this means organisations need to build in traceability across the AI lifecycle, assign clear accountability for oversight, and maintain auditable records of incidents, even if no immediate harm occurs. VAISS expects that risks are actively managed.
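To show what “traceability across the AI lifecycle” could mean as a record rather than a policy statement, here is a minimal sketch of a model lineage entry covering supply chain, training data sources, the designated human oversight point, and near-miss logging. The structure, field names, and example values are assumptions for illustration; VAISS does not mandate a particular format.

```python
from dataclasses import dataclass, field


@dataclass
class ModelLineageRecord:
    """Illustrative lineage entry supporting supply-chain traceability."""
    model_name: str
    version: str
    base_model: str                       # upstream model or vendor API, if any
    training_data_sources: list[str]
    human_oversight_contact: str          # who can intervene or override
    known_limitations: list[str] = field(default_factory=list)
    near_misses: list[str] = field(default_factory=list)   # logged even when no harm occurred

    def log_near_miss(self, note: str) -> None:
        """Record a fault or near-miss as a learning event."""
        self.near_misses.append(note)


# Example: a triage model built on a (hypothetical) vendor foundation model.
record = ModelLineageRecord(
    model_name="service-triage",
    version="1.3.0",
    base_model="vendor-llm-api (hypothetical)",
    training_data_sources=["2023 service-request archive", "synthetic edge cases"],
    human_oversight_contact="Service Operations Lead",
    known_limitations=["Low confidence on non-English requests"],
)
record.log_near_miss("Confidence dipped below threshold; request routed to human reviewer")
```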

A shared understanding of responsibility

Fully coalesced, these frameworks form a coordinated foundation. As VAISS continues to mature alongside ISO/IEC 42001 and global regulation, a pattern is emerging: shared requirements and aligned expectations.

These frameworks overlap by design. Organisations working across jurisdictions can respond once, with targeted localisation (Download the full mapping document).

Governance Element | EU AI Act                        | ISO/IEC 42001 | VAISS
Risk Management    | Required for high-risk models    | Clause 6.1    | Guard-rail 1
Documentation      | Technical files and transparency | Clause 7      | Guard-rail 3
Human Oversight    | Mandated for certain use cases   | Clause 8.2    | Guard-rail 4
Incident Response  | 15-day reporting requirement     | Clause 10     | Guard-rail 9
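One way to keep that overlap actionable is to hold the crosswalk as data rather than as a static document, so evidence gathered once can be tagged against all three frameworks. The sketch below simply re-encodes the table above as a lookup; the dictionary shape and helper function are our own illustrative choices, not part of any of the frameworks.

```python
# Crosswalk from governance element to the three frameworks, taken from the table above.
CROSSWALK = {
    "Risk Management":   {"EU AI Act": "Required for high-risk models",    "ISO/IEC 42001": "Clause 6.1", "VAISS": "Guard-rail 1"},
    "Documentation":     {"EU AI Act": "Technical files and transparency", "ISO/IEC 42001": "Clause 7",   "VAISS": "Guard-rail 3"},
    "Human Oversight":   {"EU AI Act": "Mandated for certain use cases",   "ISO/IEC 42001": "Clause 8.2", "VAISS": "Guard-rail 4"},
    "Incident Response": {"EU AI Act": "15-day reporting requirement",     "ISO/IEC 42001": "Clause 10",  "VAISS": "Guard-rail 9"},
}


def requirements_for(element: str) -> dict[str, str]:
    """Return what each framework asks for a given governance element."""
    return CROSSWALK[element]


# Example: one piece of oversight evidence can be tagged against all three frameworks at once.
print(requirements_for("Human Oversight"))
```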

Here are some hypothetical scenarios to help you visualise how the three frameworks interact and complement each other, as well as the complexity of ensuring organisational adherence.

Risk Management

Use Case 1: A multinational health-tech company deploying an AI diagnostic tool across the EU and Australia.
By implementing a unified risk assessment process aligned to ISO/IEC 42001 Clause 6.1, the company can identify, score, and mitigate model-related risks early in the lifecycle. This satisfies the EU AI Act's requirements for high-risk systems and meets VAISS Guard-rail 1, which expects proactive risk identification and mitigation tailored to Australian use contexts.

Documentation

Use Case 2: A financial services provider rolling out an AI-based credit scoring engine.
The organisation maintains comprehensive technical documentation, including model objectives, training data summaries, testing protocols, and limitations aligned with ISO/IEC 42001 Clause 7. This documentation supports both EU AI Act transparency obligations and VAISS Guard-rail 3, which requires documented model lineage and supply chain traceability, enabling reuse across jurisdictions with minimal adaptation.

Human Oversight

Use Case 3: A government agency using AI for triaging service requests.
To align with ISO/IEC 42001 Clause 8.2, a clear escalation path is established for when AI decisions reach confidence thresholds or predefined risk triggers. This satisfies the EU AI Act’s requirement for human-in-the-loop oversight in high-impact use cases and VAISS Guard-rail 4, which mandates visible, accessible human override mechanisms for automated systems.

These use cases show how a shared framework approach streamlines compliance across jurisdictions whilst reducing duplication and meeting distinct obligations.

For a deeper dive into how these governance elements map in practice, and why expert engagement is essential for navigating this complexity, download the full resource below and understand how to build alignment from day one.

Organisational Readiness

Being ready from day one and shifting to Responsible AI requires structural clarity. DPTR supports institutions through this change by helping define the systems, roles, and artefacts needed for compliance and confidence. There are some clear steps that you need to consider taking right now.

5 essentials for implementation now:

  1. Inventory and Classification
    Map existing and planned AI systems to the relevant risk tiers.
  2. Build or Extend Governance
    Integrate AI into existing ISO-aligned structures with defined roles and artefacts.
  3. Automate Traceability
    Enable systems to generate logs and artefacts as part of normal development (see the sketch after this list).
  4. Localise, Don’t Duplicate
    Create adaptable documentation that serves both EU and Australian requirements.
  5. Uplift Capability
    Appoint internal AI governance leads. Train product and delivery teams on the roles they play.
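As a rough sketch of what “automating traceability” (step 3 above) could look like, the decorator below writes a small JSON artefact every time a tagged pipeline step runs, so evidence accumulates as a by-product of normal development. The directory name and artefact fields are assumptions for illustration, not a prescribed format.

```python
import json
import functools
from datetime import datetime, timezone
from pathlib import Path

ARTEFACT_DIR = Path("governance_artefacts")   # assumed location for generated evidence


def traceable(step_name: str):
    """Decorator that records a timestamped artefact each time a pipeline step runs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            ARTEFACT_DIR.mkdir(exist_ok=True)
            artefact = {
                "step": step_name,
                "function": func.__name__,
                "run_at": datetime.now(timezone.utc).isoformat(),
            }
            (ARTEFACT_DIR / f"{step_name}.json").write_text(json.dumps(artefact, indent=2))
            return result
        return wrapper
    return decorator


@traceable("model_validation")
def validate_model():
    # ... normal validation logic; the artefact is produced as a side effect ...
    return {"passed": True}


validate_model()  # leaves governance_artefacts/model_validation.json behind as evidence
```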

Fail to prepare, then prepare to fail

This isn’t a countdown; rather, it’s a design opportunity.

The organisations that will navigate this shift well are not those who rush to check a box. They are the ones who understand that trust is something you focus on from the very start.

Responsible AI is now a must, and our role is to help organisations meet that expectation in ways that reflect the realities of how they work.

Together we’ll redefine ‘responsibility’

THE HUMAN WHY:

Trust must be designed because every system is experienced by someone and every experience either builds or breaks belief in that system’s fairness, reliability, and accountability.

Belief is the emotional infrastructure of trust. Without it there are no systems.

You believe in insurance because everyone else believes in insurance; if no one did, the very idea of insurance would cease to exist.

When you’re designing systems that serve humans, belief isn’t a side effect; it’s the condition for function.

Download our Governance Alignment Matrix
