ClearLaw

The proposed mandatory guardrails on AI: Australia's shifting approach to regulating AI

The Australian Government has made clear its view that AI has immense potential to improve social and economic wellbeing, while at the same time recognising, and taking steps to address, the risks AI poses across a number of settings.

In September 2024, the Australian Government released its proposal paper on safe and responsible AI (AI Proposal Paper), which sets out the potential risks and harms of AI, how the Government intends to build public trust, and its proposed mandatory guardrails for ‘high-risk’ AI to better regulate AI systems. Consultation on the proposal paper closed on 4 October 2024 and feedback has already been published.

This article provides an overview of what the AI Proposal Paper regards as ‘high-risk’ settings for AI and why they are considered high-risk, the proposed mandatory guardrails for high-risk AI, and the Voluntary AI Safety Standards (Voluntary Standards) accompanying the AI Proposal Paper.

Nick Worth and Tarah Bower, Maddocks

AI and the significance of the AI Proposal Paper

The Australian Government has observed a number of issues in the AI market. In some settings, AI technology can pose a significant risk to people and the community if it is used incorrectly in a high-stakes context (for example, in healthcare or environmental settings). There is also generally low public trust in AI, likely a consequence of society’s perceived risks of AI and of a regulatory system seen as currently incapable of countering its shortcomings. Both factors slow the adoption of AI and limit its effective use and potentially positive impact.

The Australian Government’s consultations on safe and responsible AI in 2023 acknowledged that the regulatory system is not fit for purpose to address the risks of AI in Australia. Ultimately, the purpose of the AI Proposal Paper is to create a more robust system that prevents the misuse of AI and, with it, the potentially harmful consequences for both consumers and businesses.

Until the mandatory requirements proposed in the AI Proposal Paper come into effect, the Voluntary Standards offer immediate guidance to organisations on the responsible development and application of AI systems.

Will the guardrails be relevant to you?

The AI Proposal Paper considers that multiple types of organisations may be responsible for applying different aspects of the guardrails (described below) to particular AI systems, with the roles and obligations of organisations to be refined as the regulatory measures evolve.

It is proposed that responsibility for implementing the guardrails will be borne by ‘developers’ and ‘deployers’ of AI, with those roles defined as follows:

  • Developer: Organisations or individuals who design, build, train, adapt or combine AI models and applications.
  • Deployer: Any individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be for internal purposes, or external, affecting others (such as customers or individuals) who are not themselves deployers of the system.

The guardrails do not specifically apply to the intended consumer of an AI product or service (or those who interact with, or are impacted by, AI), although those individuals and organisations would need to be conscious of whether their use complies with existing legal obligations. Both developers and deployers would need to consider who their audience is (the ‘end-user’), including how those end-users may use or misuse the AI system.

A risk-based approach to implementing the mandatory guardrails

The AI Proposal Paper and the proposed mandatory guardrails focus on addressing ‘high-risk’ AI. Whether the use of an AI system is ‘high-risk’ will depend on a number of factors and principles, and the adequacy of those principles in capturing high-risk uses of AI will continue to be refined based on feedback received on the AI Proposal Paper.

What is considered ‘high-risk’ AI?

High-risk AI is broadly categorised for the purposes of the mandatory guardrails as follows:

  • AI systems where risks are known or can be foreseen; and
  • highly advanced General Purpose AI Models (GPAI), which have the ability to develop and grow into unintended areas, meaning their risks cannot be foreseen.

To determine whether the use of AI in a business would be considered high-risk, and therefore subject to the mandatory guardrails, the following risks should be considered as a whole:

  1. The risk of unjustified adverse impacts on an individual’s rights recognised in Australian human rights law and under Australia’s international human rights law obligations.

  2. The risk of adverse impacts on an individual’s physical or mental health or safety.

  3. The risk of adverse legal effects, defamation or similarly significant effects on an individual.

  4. The risk of adverse impacts on groups of individuals or the collective rights of cultural groups.

  5. The risk of adverse impacts on the broader Australian economy, society, environment and rule of law.

  6. The severity and extent of the adverse impacts outlined in principles 1 to 5 above.

These principles require developers and deployers to be conscious of the likely flow-on effects of the systems they are creating and using in order to evaluate their risk. The AI Proposal Paper includes a particularly sinister example of the application of principle 5 (systemic impacts), in which an AI system undermines the integrity of an election, including by creating synthetic content to misinform and manipulate public opinion. Capabilities of this kind contribute to the observed lack of public trust in AI, which the Australian Government has specifically identified as a key concern.

For the second category (highly advanced GPAI), the AI Proposal Paper does not provide a separate set of criteria. Instead, it proposes that organisations developing GPAI models - being those with the potential to extend beyond their original design and capabilities - will automatically be subject to the guardrails. The paper also stresses the importance of streamlining regulation across international jurisdictions, as this would make compliance easier for organisations operating across borders.

The proposed mandatory guardrails

The following are the 10 proposed mandatory guardrails to regulate organisations developing or deploying high-risk AI systems:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance

  2. Establish and implement a risk management process to identify and mitigate risks

  3. Protect AI systems, and implement data governance measures to manage data quality and provenance

  4. Test AI models and systems to evaluate model performance and monitor the system once deployed

  5. Enable human control or intervention in an AI system to achieve meaningful human oversight

  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content

  7. Establish processes for people impacted by AI systems to challenge use or outcomes

  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks

  9. Keep and maintain records to allow third parties to assess compliance with guardrails

  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails

More detailed explanations of each of the mandatory guardrails are provided in the AI Proposal Paper. However, the overarching aim is that implementing the guardrails in high-risk settings reduces, and where possible entirely prevents, the likelihood of harm occurring.

How will Australia introduce the mandatory guardrails and regulate AI?

The AI Proposal Paper considers that the ‘preventative’ nature of the guardrails is best placed to address the consequences of high-risk AI. A preventative approach seeks to halt catastrophic harm before it occurs, and is preferred over remedial measures, which instead focus on post-market liability.

The three regulatory options the Government has proposed are:

  • Domain-Specific Approach: Integrate AI guardrails into existing sector-specific regulatory frameworks. This would involve adapting current laws based on the specific needs of different industries.

  • Framework Approach: Develop new legislation that provides a general framework for AI, which would be integrated into existing regulatory structures. This approach focuses on economy-wide definitions and thresholds but relies on current regulatory bodies for enforcement.

  • Whole-Economy Approach: Create a comprehensive AI-specific Act that includes all necessary guardrails, definitions, and enforcement mechanisms in one piece of legislation, specifically designed for AI oversight across all sectors.

The AI Proposal Paper delves into detail on each of these options and outlines some of the actions taken by other countries. The intent is for the measures to balance flexibility, consistency and enforceability, addressing both the unique risks of AI and the regulatory readiness of diverse industries. The proposed options also aim to align with international practice, drawing particularly on the European Union and Canadian models.

Voluntary AI Safety Standards

Accompanying the AI Proposal Paper are the Voluntary Standards which provide immediate guidance on how best to navigate the development and use of AI in organisations.

Largely replicating the proposed mandatory guardrails, the 10 Voluntary Standards seek to provide organisations currently supplying AI-related services with certainty before the mandatory standards roll out nationwide. Until any mandatory guardrails are implemented, an organisation deploying or developing AI should closely consider and follow the Voluntary Standards so it can ensure a smooth transition however and whenever regulatory measures are implemented.

Together, the Voluntary Standards and the proposed mandatory guardrails work to refine best-practice approaches to the development and deployment of safe AI systems and to prompt early engagement.

What the outcome of the AI Proposal Paper means for the AI landscape in Australia

As the capabilities of AI systems grow, so does the urgency for comprehensive governmental regulations. The largely unchecked AI industry poses significant risks, including issues related to privacy, bias, security, and the potential for misuse. For instance, AI systems trained on biased data may reinforce or even worsen societal inequalities, while autonomous systems could lead to unintended consequences in critical areas such as healthcare or transportation.

Despite the identifiable (and not yet identifiable) risks, it is clear that the Australian Government, like many other governments around the world, would like to maximise the potential of AI and see an even broader application of AI across the economy as a whole, but only to the extent that AI is used and managed safely and responsibly.

The AI Proposal Paper can be viewed as a blueprint for the way regulators will approach AI going forward, providing both clarity in an arguably messy AI market and comfort to those who use, or are affected by the use of, AI. It also signals much closer scrutiny of how AI systems will be developed and deployed in various settings.

As we edge towards an AI-powered future, it is important to ensure that AI systems are clever enough to serve the community, but not quite clever enough to lobby for their own loopholes.

More information from Maddocks

For more information, contact Maddocks on (03) 9258 3555 and ask to speak to a member of the Commercial team.


Lawyer in Profile

Jack Coventry
Senior Associate
+61 3 9258 3819
jack.coventry@maddocks.com.au

Qualifications: BA (Philosophy), Monash University, JD (Juris Doctor), University of Melbourne

Jack is a member of the Maddocks Commercial team. He advises a range of corporate and private clients on:

  • M&A transactions,
  • corporate reorganisations, and
  • legal and tax structuring.

Jack acts for clients on both buy-side and sell-side and specialises in founder-owned businesses and Australian subsidiaries of multi-national companies. He works across a number of sectors including information technology, professional services, and property development and management including land lease.

Jack's structuring work includes assisting multinationals to structure Australian operations, listed companies to achieve regulatory compliance / optimisation and providing general tax structuring. He has also represented clients in tax controversies including before the General Anti-Avoidance Review Panel (GAAR Panel) and the Federal Court of Australia.
