Skip to content

Tools & Frameworks

A curated list of practical tools, frameworks, and resources to help Australian businesses implement AI safely and responsibly.

Scope: Non-commercial resources only (government, standards bodies, nonprofits, and open-source). How to use: Start with frameworks, set governance, then implement technical controls and monitoring.

Last updated: 2025-08-16 Β· Owner: SafeAI Aus


1) AI Risk & Ethics Frameworks

  • Australian Government AI Ethics Principles – 8 principles guiding ethical AI use. (industry.gov.au)
  • Voluntary AI Safety Standard (10 Guardrails) – published 2024, aligns with ISO/IEC 42001 and NIST AI RMF. (industry.gov.au)
  • Note: The Guardrails explicitly align with ISO/IEC 42001:2023 and the NIST AI Risk Management Framework 1.0.
  • National framework for the assurance of AI in government (DTA) – how agencies assure AI systems. (dta.gov.au)
  • NIST AI Risk Management Framework (AI RMF 1.0) – comprehensive, sector-agnostic guidance. (nist.gov)
  • NIST Generative AI Risk Management Profile – profile for GenAI use cases. (nist.gov)
  • ISO/IEC 23894 – AI risk management guidance. (iso.org)
  • ISO/IEC 42001 – AI management system (AIMS) requirements. (iso.org)
  • OECD AI Principles – intergovernmental principles for trustworthy AI. (oecd.ai)
  • Singapore Model AI Governance Framework – practical implementation guidance. (pdpc.gov.sg)

2) Governance & Policy Tools

  • Privacy Impact Assessments (PIAs) – OAIC guidance on conducting PIAs. (oaic.gov.au)
  • NSW Artificial Intelligence Assessment Framework – NSW's method for assessing AI; assurance is now embedded in the Digital Assurance Framework. (digital.nsw.gov.au)
  • ASD Essential Eight – baseline mitigation strategies. (cyber.gov.au)
  • Notifiable Data Breaches (NDB) Scheme – reporting obligations. (oaic.gov.au)
  • Australian Privacy Principles (APPs) – core privacy obligations. (oaic.gov.au)

3) Technical Testing & Monitoring

  • Model Cards – documentation standard for AI models. (arXiv)
  • Datasheets for Datasets – dataset transparency and quality control. (arXiv)
  • Aequitas – open-source bias/fairness audit toolkit. (github.com)
  • Fairlearn – open-source fairness assessment and mitigation. (fairlearn.org)
  • NIST AI RMF Playbook (TEVV) – testing, evaluation, verification, and validation resources. (airc.nist.gov)

4) Privacy & Security

5) Explainability & Transparency

  • LIME – local interpretable model-agnostic explanations. (github.com)
  • SHAP – Shapley value–based feature importance. (github.com)
  • DALEX – model exploration and explanations. (github.com)

6) Continuous Monitoring & Ops

  • MLflow – experiment tracking and model registry. (mlflow.org)
  • Prometheus – metrics collection for model/service health. (prometheus.io)
  • Kubeflow – open-source MLOps on Kubernetes. (kubeflow.org)

7) LLM Application Safety & Secure Development

  • OWASP Top 10 for LLM Applications – common risks and mitigations. (owasp.org)
  • OWASP AI Security & Privacy Guide – secure AI development guidance. (owasp.org)
  • MITRE ATLAS – adversary tactics/techniques/mitigations for ML systems. (atlas.mitre.org)
  • Guidelines for Secure AI System Development – joint guidance (UK NCSC, CISA and partners). (ncsc.gov.uk)

8) RAG Evaluation & QA (Open-source)

  • ragas – evaluation for retrieval-augmented generation. (github.com)

9) Serving & Data Infrastructure (Open-source)

10) Procurement & Vendor Risk (Checklist)

  • Data: classification, residency, retention, cross-border flows, deletion lifecycle.
  • Privacy: APPs alignment, DPIAs/PIAs, purpose limitation, de-identification controls.
  • Security: ASD Essential Eight maturity, vulnerability management, incident response, logging.
  • Governance: model documentation (Model Card), dataset documentation (Datasheet), access controls.
  • Evaluation: fairness, robustness, performance evidence; TEVV plan and metrics.
  • Compliance: ISO/IEC 42001 readiness; ISO/IEC 27001 controls. (iso.org)
  • Contracts: DPAs, IP/licensing, data ownership, third-party subprocessor transparency, exit plan.

πŸ“š Further Reading