Employers are faced with making hundreds of decisions a day – some more complex than others. In recent times, some employers have turned to Artificial Intelligence (AI) technology to help them with these decisions, particularly where they involve the consideration of large amounts of information. Other employers have used AI to monitor whether employees are meeting their obligations. Others have not yet realised that their team may be using AI without their knowledge.
AI tools have the potential to change the workplace in a positive way. However, there are a number of legal obligations employers should consider before deciding to use AI tools and when managing employees' use of those tools.
What is AI technology?
AI can be defined as “algorithm-based technology that solves complex tasks by carrying out a function that previously required human thinking”[1]. It is designed to mimic the human thought and reasoning process.
At present, the use of AI technology in the workplace is not regulated in New Zealand. Overseas, the European Union is considering an Artificial Intelligence Act, and the United Kingdom and the United States are considering regulating AI through policies and frameworks.
What should employers know about AI and its use in the workplace?
Some factors all employers should consider regarding AI include:
- AI is probably being used in your workplace whether you know about it or not.
- Many employers assume that because they are not providing AI technology in the workplace, it is not being used. This is simply not the case. A recent example that attracted worldwide attention involved a lawyer in the United States who submitted an AI-generated legal brief to a court.
- It is essential that, as an employer, you stay ahead of the game. Banning AI in the workplace is unlikely to keep you competitive in the medium to long term, so you should at least set clear expectations about how AI is used. At an absolute minimum, you need protections in place to ensure the legal risks are mitigated.
- Protecting the confidentiality of commercially sensitive information and intellectual property.
- Employers and employees need to be acutely aware that data submitted to AI tools is sent to, and stored by, those tools. If a workplace is using a particular AI tool, there needs to be careful consideration of what information is being submitted to it. For example, ChatGPT is a commonly known AI tool; however, it is also an open platform, so private, confidential or sensitive information should never be input into it. In addition, before submitting information to an AI tool, employers should consider relevant laws such as the Privacy Act 2020 and any applicable policies (including insurance policies).
- Protecting the privacy of your employees, customers and clients.
- Employers’ obligations under the Privacy Act will apply when using AI tools. Employers should ensure that the manner in which information is collected and used is justified (i.e. it fits the purpose for which it was gathered). For example, an employer may look to justify the use of AI monitoring tools on health and safety grounds if they alert the employer to employees not wearing appropriate PPE or entering a restricted area. However, the privacy implications, and how that information will be used, must be considered first.
- Ensuring the AI work product is free from bias and discrimination.
- An understanding of how an AI tool works is essential if you are using it. The information produced by an AI tool could be biased or incomplete. Therefore, if an employer is, for example, using AI to assist with hiring processes, relying on AI-generated information could expose the employer to legal claims that an unlawful decision has been made. While AI tools may be an aid, employers should be prepared to consider additional information when making decisions.
- Maintaining trust between employer and employee.
- The employment relationship is built on mutual trust and confidence. A lack of transparency around the use of AI, or an over-reliance on it by either party, could undermine this mutual trust.
Looking forward
While there is currently no prescribed way to use AI technology at work, we urge employers to use it consistently with their employment law obligations (in legislation and in existing employment agreements and workplace policies), and not as a way to avoid these.
It has recently been found that only 12% of Kiwi organisations have AI policies in place.[2] We consider this percentage needs to increase rapidly to keep up with the pace at which AI will alter the workplace. Creating and implementing an AI policy promotes transparency around the use of AI at work. It also sets expected standards of conduct and accountability for both employers and employees when using AI in the current environment.
Get in touch with the team at Black Door Law for tailored advice around AI policies and the use of AI in the workplace.
Disclaimer: This information is intended as general legal information and does not constitute legal advice. If you have a specific issue and wish to discuss it, get in contact with the Black Door Law team.
[1] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/part-1-the-basics-of-explaining-ai/definitions/.
[2] According to a study undertaken by Perceptive, as reported in HCA Magazine https://www.hcamag.com/nz/news/general/new-zealand-researchers-lead-framework-for-ai-use/464737.