
Voluntary AI Safety Standard (10 Guardrails)

Australia’s Voluntary AI Safety Standard provides ten practical guardrails organisations can adopt now to deploy and use AI safely and responsibly.

The standard was developed by the Department of Industry, Science and Resources (DISR) and published in September 2024, following public consultation on safe and responsible AI in Australia. Its ten guardrails are designed to align closely with the Government’s proposed mandatory guardrails for high-risk AI applications.

The standard is voluntary and complements existing Australian law. It provides a practical framework for organisations to manage AI safely while future regulation is considered.

Importantly, the 10 guardrails are consistent with leading international standards and frameworks, including:
- ISO/IEC 42001:2023 – AI Management System Standard
- NIST AI Risk Management Framework 1.0

Why this matters

Adopting the guardrails early helps organisations build trust, resilience, and regulatory readiness. By embedding these practices now, businesses can:
- Reduce risks from bias, errors, and misuse of AI
- Strengthen transparency and customer confidence
- Position themselves ahead of future mandatory compliance requirements
- Demonstrate leadership in responsible AI adoption

The 10 Guardrails

  1. Establish accountability
  2. Implement risk management
  3. Protect data
  4. Ensure transparency
  5. Enable human control
  6. Test reliability
  7. Monitor impacts
  8. Ensure accountability in the supply chain
  9. Maintain records
  10. Support human autonomy

What the guardrails do

  • Encourage transparency and accountability for AI systems.
  • Require risk assessment, testing, and human oversight before and after deployment.
  • Promote record-keeping and supplier due-diligence across the AI supply chain.
  • Emphasise stakeholder engagement and ongoing monitoring as systems evolve.

How to use this in your business

  1. Adopt the 10 guardrails as acceptance criteria for any AI initiative.
  2. Update policies and procurement to reflect supplier alignment with the guardrails.
  3. Integrate testing, documentation, and oversight into your normal change-management.
  4. Review systems at least annually or on material change.
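
As a sketch of steps 1 and 3 above (illustrative only, and assuming a Python-based workflow), the ten guardrails can be treated as sign-off items in your existing change-management process: an AI initiative does not go live until every item is acknowledged. The function and field names below are placeholders, not part of the standard.

```python
# Illustrative only: the 10 guardrails as a go-live acceptance checklist.
# The function and variable names are assumptions, not part of the
# Voluntary AI Safety Standard itself.

GUARDRAILS = [
    "Establish accountability",
    "Implement risk management",
    "Protect data",
    "Ensure transparency",
    "Enable human control",
    "Test reliability",
    "Monitor impacts",
    "Ensure accountability in the supply chain",
    "Maintain records",
    "Support human autonomy",
]

def review_initiative(signoffs: dict[str, bool]) -> list[str]:
    """Return the guardrails that have not yet been signed off."""
    return [g for g in GUARDRAILS if not signoffs.get(g, False)]

if __name__ == "__main__":
    signoffs = {g: False for g in GUARDRAILS}
    signoffs["Protect data"] = True       # e.g. existing cybersecurity controls reviewed
    signoffs["Maintain records"] = True   # e.g. decision logging switched on
    outstanding = review_initiative(signoffs)
    print(f"{len(outstanding)} guardrails still to address before go-live:")
    for g in outstanding:
        print(" -", g)
```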

SME-Scaled Implementation Approach

While the 10 guardrails apply to all organisations, SMEs can adopt them at different maturity levels:

Guardrail 1: Establish accountability
- Minimum: Designate an AI responsible person
- Better: Create simple AI governance policy
- Best: Regular board/leadership AI updates

Guardrail 2: Implement risk management
- Minimum: Use SAAM risk assessment tool
- Better: Quarterly risk reviews
- Best: Integrated risk management system

Guardrail 3: Protect data
- Minimum: Follow existing cybersecurity practices
- Better: AI-specific data controls
- Best: Enhanced encryption and access controls

Guardrail 4: Ensure transparency
- Minimum: "Powered by AI" labels
- Better: Explain AI role in decisions
- Best: Full algorithmic transparency

Guardrail 5: Enable human control
- Minimum: Override capability for all AI decisions
- Better: Human review of significant decisions
- Best: Human-in-the-loop for all critical processes

Guardrail 6: Test reliability
- Minimum: Pre-deployment testing
- Better: Monthly performance monitoring
- Best: Continuous testing and validation
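
As an illustration of the "Minimum" level for this guardrail, a pre-deployment check might compare model predictions with a held-out labelled sample and block release below an agreed accuracy threshold. The 90% threshold and the toy data below are placeholders, not figures from the standard.

```python
# Illustrative pre-deployment reliability gate. The 0.90 threshold and the
# sample data are placeholders; set them to your own acceptance criteria.

def accuracy(predictions: list[int], labels: list[int]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def pre_deployment_check(predictions, labels, threshold=0.90) -> bool:
    score = accuracy(predictions, labels)
    print(f"Held-out accuracy: {score:.2%} (threshold {threshold:.0%})")
    return score >= threshold

if __name__ == "__main__":
    labels      = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    predictions = [1, 0, 1, 1, 0, 0, 0, 0, 1, 1]   # one error -> 90% accuracy
    assert pre_deployment_check(predictions, labels), "Model failed the release gate"
```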

Guardrail 7: Monitor impacts
- Minimum: Track errors and complaints
- Better: Proactive impact assessment
- Best: Real-time monitoring dashboard

Guardrail 8: Ensure accountability in supply chain
- Minimum: Vendor compliance check
- Better: Contractual AI requirements
- Best: Regular vendor audits

Guardrail 9: Maintain records
- Minimum: Keep AI decision logs
- Better: Comprehensive documentation
- Best: Automated compliance reporting
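
As an illustration of the "Minimum" level for this guardrail, keeping AI decision logs can be as simple as appending one structured record per AI-assisted decision to a file. The field names below are assumptions about what a useful record might capture, not a schema prescribed by the standard.

```python
# Illustrative decision log: one JSON line per AI-assisted decision.
# The field names are assumptions, not a schema mandated by the standard.
import json
from datetime import datetime, timezone

def log_decision(path, system, model_version, input_summary, output, human_reviewer=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "input_summary": input_summary,
        "output": output,
        "human_reviewer": human_reviewer,  # None if the decision was fully automated
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_decision("ai_decisions.jsonl",
                 system="loan-triage-assistant",
                 model_version="demo-2024-09",
                 input_summary="application #1234",
                 output="refer to human underwriter",
                 human_reviewer="j.smith")
```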

Guardrail 10: Support human autonomy
- Minimum: Opt-out options
- Better: User control preferences
- Best: Full user agency over AI interactions


Summary Table

| Guardrail | Minimum Requirement | Better Practice | Best Practice |
| --- | --- | --- | --- |
| 1. Establish accountability | Designate responsible person | Simple governance policy | Regular board/leadership AI updates |
| 2. Risk management | Use SAAM risk tool | Quarterly reviews | Integrated risk management system |
| 3. Protect data | Follow cybersecurity basics | AI-specific controls | Enhanced encryption & access controls |
| 4. Transparency | “Powered by AI” labels | Explain role in decisions | Full algorithmic transparency |
| 5. Human control | Override capability | Human review of major decisions | Human-in-the-loop for critical processes |
| 6. Reliability testing | Pre-deployment testing | Monthly monitoring | Continuous testing & validation |
| 7. Monitor impacts | Track errors & complaints | Proactive assessments | Real-time monitoring dashboards |
| 8. Supply chain accountability | Vendor compliance check | Contractual AI requirements | Regular vendor audits |
| 9. Maintain records | Keep decision logs | Comprehensive documentation | Automated compliance reporting |
| 10. Human autonomy | Opt-out options | User control preferences | Full user agency over interactions |

When guardrails could become mandatory

Disclaimer: As of August 2025, it remains uncertain if, when, and how the Voluntary AI Safety Standard will become mandatory. This is subject to an ongoing political and legislative process in Australia, and requirements may change.

Government's principles-based definition of "high-risk AI"

In its consultation on mandatory guardrails for AI in high-risk settings, the Australian Government proposes a principles-based approach.
An AI application may be considered high-risk if it has a high likelihood of causing material harm in one or more of the following areas:
- Human rights or freedoms
- Health and safety
- Legal rights or obligations
- Democratic processes
- Environmental outcomes
- Broader societal impacts

Risk is assessed based on context, severity, and scale, rather than a predetermined list of application types.
Source: Australian Human Rights Commission – Mandatory Guardrails for AI in High-Risk Settings (PDF)
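
To make the principles-based approach concrete, here is a purely illustrative triage sketch; it is not the Government's methodology and not part of the proposed definition. An organisation might rate each harm area for likelihood, severity, and scale, and escalate any area that scores high on both likelihood and impact to a fuller risk review. The 1–5 scale and thresholds are assumptions.

```python
# Purely illustrative triage sketch, not the Government's methodology.
# The harm areas come from the list above; the 1-5 scale and thresholds
# are assumptions an organisation would need to set for itself.

HARM_AREAS = [
    "human rights or freedoms",
    "health and safety",
    "legal rights or obligations",
    "democratic processes",
    "environmental outcomes",
    "broader societal impacts",
]

def screen_use_case(ratings, min_likelihood=3, min_impact=3):
    """ratings maps a harm area to 1-5 scores for likelihood, severity and scale.
    Flags an area when harm is both reasonably likely and material in severity or scale."""
    flagged = []
    for area in HARM_AREAS:
        r = ratings.get(area, {})
        likely = r.get("likelihood", 1) >= min_likelihood
        material = max(r.get("severity", 1), r.get("scale", 1)) >= min_impact
        if likely and material:
            flagged.append(area)
    return flagged

if __name__ == "__main__":
    ratings = {"health and safety": {"likelihood": 4, "severity": 4, "scale": 2}}
    print(screen_use_case(ratings))   # ['health and safety'] -> needs a fuller risk review
```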

Illustrative examples from public commentary

The following examples are not part of the official government definition. They are drawn from media, legal analyses, and the 2024 Senate inquiry into AI, which recommended that certain AI systems and general-purpose models be treated as high risk.
Senate Inquiry report: “Australian Government should regulate generative AI” – The Guardian, 27 Nov 2024

Examples often cited as potentially high-risk in public discussions:
- Healthcare diagnosis or treatment
- Employment decisions (hiring, firing, promotion)
- Financial services (loans, insurance)
- Government service delivery
- Critical infrastructure
- Legal decisions

General-purpose applications (likely to remain voluntary under current proposals):
- Marketing automation
- Customer service chatbots
- Internal productivity tools
- Content generation
- Basic data analysis


Further Reading & Official Resources