The legal specialist for the Employers and Manufacturers Association says the change of government will bring calls for New Zealand to quickly adopt regulation around the use of artificial intelligence by appointing an AI commissioner and putting an AI Act in place.

World leaders, tech company executives and academics have gathered in London for a summit on the safety of frontier AI. Although NZ is not represented, the legal head of the Employers and Manufacturers Association (EMA), Paul O'Neil, said it's more important that the bigger players, such as the United States, the United Kingdom, the European Union and China, talk about how to deal with it.

“We're not going to lead the world in AI regulation,” he said.

Positive conversations

O'Neil said the larger players are best positioned to influence the direction of regulation and development, and “the fact that they're even having a conversation about how we should approach AI is really positive”.

All NZ can do as a small country is to be innovative and try new things, he said.

“We're a small, agile country; there are many opportunities where we can take a small area and be world-leading on it.”

O'Neil noted other commentators have warned that the UK risks becoming the world leader in AI regulation rather than staying at the cutting edge of innovation, a risk he said applies to NZ too.

“If you're going to be good at AI, you want to be good at maximising the opportunities you want.

“You don't want to be the person who's really good at writing all the rules. That's not the bit of AI that we should be trying to lead.”

However, regulation is still necessary, he said, and the summit would cover the safety perspective at a head-of-state level. 

The EU already has AI regulations, but the US tends to take a more high-level, free-market approach, and O'Neil predicted the US would be the world leader.

“My guess would be that they would take a slightly less restrictive approach than the UK and the EU.” 

He said NZ will have the benefit of being guided by the larger countries and can apply what works.

Tailor the technology to the business

According to O'Neil, AI is a huge opportunity for business, but it must be tailored.

“There's a whole bunch of opportunities and efficiencies and real gains that a business can make through using AI. AI is something to embrace and not be scared of.”

There are also risks, he said. AI is not a substitute for human judgement.

“Legally, you'll still be held accountable for the decisions you make as a business,” O'Neil said.

“It's not going to be a response to your legal obligations in relation to your business or your staff or your co-workers to say, 'I used AI, it made decisions on my behalf'. It helps you make decisions, but it doesn't make them for you.”

He said the nature of the information businesses put into the AI algorithm can also create risk.

“If you're putting personal information into AI to get a response, then you still have to think about your Privacy Act obligations. 

“Under the Privacy Act, when you use someone's information, you can only use it for an appropriate purpose, must store it appropriately and only hold it for as long as you need to. 

“You need to be careful that when you get a response from an AI tool, you're actually able to use that information in the way you want to use it.”

AI saves resources

AI is an opportunity to save resources on time-consuming tasks, freeing staff for more valuable work, he said, but responsible employers should train staff properly.

Intellectual property rights and copyright protections still exist. AI might change existing work enough to call it original work – but it might not.

“I don't think you can just assume that the AI tool is changing [the material] enough that you're not effectively stealing someone else's work. 

“That'd be a dangerous assumption to make, and you could find yourself in court.”

He recommends businesses start thinking about their internal AI policies, so staff know when to use it and when not to, and how the business communicates its use of AI to customers and clients.

“If you've got those sorts of guardrails in place, it's nice and safely protected.

“Where you run afoul of that is where you don't have a policy or a framework, and people are just using it on a pretty ad hoc basis.”

Sensitive processes such as recruitment and restructuring, where bias or unfairness is statutorily forbidden, are danger zones for AI use.

“But if you don't have a policy and expect your staff to know, that's dangerous. That's where I've seen it come unstuck,” O'Neil said.

“If you're not transparent, if you don't understand how it's been used within your business, that's when it gets dangerous.”