The New Zealand Government has introduced a new framework to guide the responsible use of Artificial Intelligence (AI) technologies across the public sector (Framework). While not legally binding, the Framework sets out best practice principles for AI adoption. Its vision is the responsible adoption of AI 'to modernise public services and deliver better outcomes for all New Zealanders.'
In June 2024, Cabinet adopted the OECD's AI Principles to guide the development of trustworthy, innovative, and democratic AI in New Zealand, aligning with other OECD member states. The OECD AI Principles were used to inform the principles of this Framework.
How does this align with local and global developments on AI regulation?
The principles-based, non-binding Framework is consistent with the Government's desire to avoid AI-specific legislative reform. Of particular relevance is the “light-touch, proportionate and risk-based approach to AI regulation” proposed by the Minister of Science, Innovation and Technology – Hon Judith Collins KC in a June 2024 Cabinet paper (we cover this in more detail in our previous article).
The approach proposed in the paper is to leverage existing laws as guardrails and only introduce new regulation to “unlock innovation or address acute risks”. The introduction of non-binding guidance, such as the new Framework, aligns with this approach – and the Minister, in a recent speech to the public sector, reaffirmed the Government’s commitment to a pragmatic AI regulatory approach.
This is in stark contrast to developments in the EU, where the groundbreaking AI Act imposes obligations (with the potential for eye-watering penalties for non-compliance) focussed on preventing harm to the health, safety, and fundamental rights of individuals. To accomplish this goal, the EU AI Act creates a risk-based framework that imposes obligations on individuals and organisations depending on their role in the AI value chain, the risk of the technology involved, and the risk arising from the context in which the AI system is used. This regulatory framework is complex, and we have unpacked and analysed its implications in a series of articles.
Notwithstanding this divergence in philosophical approach to AI regulation (or non-regulation), the aims and principles of the Framework do broadly align with developments in AI regulation (whether binding or 'soft law') around the world.
Principles of the Framework
The Framework is guided by five AI principles:
- Inclusive, sustainable development – Public Service AI systems should contribute to inclusive growth, sustainable development and the reduction of economic, social, gender and other inequalities, including in access to technology.
- Human-centred values – Public Service AI should respect the rule of law, democratic values, and human and labour rights, including personal data protection and privacy, ensuring ethical and appropriate use.
- Transparency and explainability – Those using, or interacting with, Public Service AI should be aware of, and understand, how the Public Service is using that AI. Public Service agencies should therefore disclose when AI is used, how those systems were developed and how they affect outcomes.
- Security and safety – The security of customers and staff is a core business requirement. Public Service AI should apply a robust risk management approach and ensure the traceability of data.
- Accountability – Public Service AI should be subject to oversight. Oversight capability should therefore keep pace with technological change, including changes to relevant regulatory and governance frameworks.
Pillars of the Framework
The Government Chief Digital Officer (GCDO) is leading a Public Service AI work programme to support the implementation of the Framework’s vision while working closely with MBIE to compile a cross-portfolio policy work programme and national AI strategy. The programme is guided by six pillars:
- Governance – supporting transparency and human accountability in Public Service AI use.
- Guardrails – enabling safe and responsible Public Service AI use.
- Capability – building internal and external AI knowledge and skills.
- Innovation – providing pathways that enable safe AI testing and innovation.
- Social licence – ensuring New Zealanders have trust and confidence in Public Service AI use.
- Global voice – ensuring international counterparts see New Zealand as a trusted AI partner.
The Framework aims to facilitate the responsible adoption of AI within public service agencies as a foundation for driving broader productivity and economic growth across New Zealand. Its human-centric approach is intended to keep citizens, taxpayers and Public Service workers at the forefront of the design and implementation of AI, while emphasising equitable outcomes.
Guidance on the use of Generative AI by the Public Service is expected in due course and the GCDO will continue to develop the Public Service AI programme. DLA Piper will continue to monitor any further developments in this space.
DLA Piper is committed to providing world-leading insights and advice on Artificial Intelligence. To stay informed, visit our international AI in Focus page.