Howard Levitt: Keeping the human in human resources as AI takes hold

By Howard Levitt and Jeff Buchan

As artificial-intelligence technology continues to develop at an accelerating pace, so too will its use by employers. After all, the allure of cost savings, increased efficiency and staying on the cutting edge has many looking for new ways to harness it.

But when it comes to the application of AI in human resources, employers must be aware of the risks.

Many AI models used in HR rely on machine-learning algorithms, which, put simply, are “trained” on the information their users provide. If an employer feeds its AI model a skewed or small sample of data, or only historical company data, it creates the potential for, among other things, unlawful discrimination in both the hiring and firing processes.

As an example, if the AI model is trained using historical company data and that company has a male-dominated workforce, the AI model may begin to view male job applicants more favourably.
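
To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python, using fabricated data and the scikit-learn library) of how a model trained on historically male-favoured hiring records reproduces that preference for otherwise identical candidates:

```python
# Hypothetical illustration only: fabricated data, not a real HR system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated historical records: [years_experience, is_male] per applicant.
n = 1000
experience = rng.uniform(0, 10, n)
is_male = rng.integers(0, 2, n)

# In this toy history, men were hired with far less experience than women.
hired = np.where(is_male == 1, experience > 3, experience > 7).astype(int)

X = np.column_stack([experience, is_male])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two candidates identical except for gender:
candidates = np.array([[5.0, 1.0], [5.0, 0.0]])  # male, female
p_male, p_female = model.predict_proba(candidates)[:, 1]
print(f"P(hire | male, 5 yrs)   = {p_male:.2f}")    # high
print(f"P(hire | female, 5 yrs) = {p_female:.2f}")  # low
# The model has "learned" the historical preference, not merit.
```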

These issues apply equally when AI is used to determine whom to dismiss.

In Ontario, if it can be established that any part (that is, even one per cent) of an employer’s decision to dismiss an employee was based on a protected ground under the Human Rights Code (for example, gender), that employee will have grounds to bring a claim for discrimination.

You can imagine the degree of discrimination that could occur if an employer’s AI model begins considering characteristics in decision-making processes that are otherwise protected under human rights or employment standards statutes.

In the face of such a claim for discrimination, an employer will have a hard time convincing anyone, let alone a judge or adjudicator, that it should not be responsible because the outcome was “AI’s fault” rather than deliberate. Even if proven true, that will be no defence.

A very recent case involving Air Canada and its online AI chatbot, which provided false information to a customer, serves as a stark reminder that employers must be cautious about over-relying on AI.

Air Canada argued before the Civil Resolution Tribunal in British Columbia that the chatbot on its website was a separate legal entity, responsible for its own actions and the information it provided to customers, such that Air Canada could not be liable for the false information given to the customer. Unsurprisingly, the tribunal disagreed, and Air Canada was held liable for its chatbot’s actions.

While the Air Canada case did not involve discrimination, it demonstrated adjudicators’ propensity to hold employers liable for the actions of their AI technologies, something employers should not take lightly given the rapid expansion of AI workplace applications.

Aside from AI’s application in the hiring and firing process, it is also increasingly being used to monitor employee productivity, particularly in office settings and among those who work from home. It can track everything from the number of keystrokes to an employee’s eye movements, to ensure they are staring at their monitors.

In turn, this information can be used to make decisions regarding promotions and salary increases. An inevitable consequence is that employees feel they are not trusted to carry out their duties without constant monitoring, or that their unique circumstances are not being considered when their productivity is assessed. Those fears are generally misplaced.

It is therefore important for employers to strike a balance between implementing AI and maintaining the human aspect of human resources. Those looking to implement or expand their use of AI technologies in their workplaces should consider the following:

  • Establish an AI policy that outlines the ways in which AI is being used, and make the policy available to job applicants and employees for the sake of transparency.

  • Regularly audit your AI tools to ensure that inadvertent biases are not being created or perpetuated in HR-related processes (a simple audit sketch follows this list).

  • Wherever possible, ensure that AI models are trained on data that is representative, diverse and unbiased.

  • Be cautious of becoming overly reliant on AI, as employers will be liable for the actions of the software they use.
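
On the auditing point, the sketch below shows one simple check an employer might run: comparing selection rates across groups and flagging any group whose rate falls below four-fifths of the highest, a rule of thumb drawn from U.S. employment-selection guidelines. The data, group labels and 0.8 threshold are illustrative assumptions, not a Canadian legal standard.

```python
# Hypothetical disparate-impact audit of an AI screening tool's output.
from collections import Counter

# (group, selected) pairs -- e.g. decisions from an AI resume screener.
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

totals = Counter(group for group, _ in decisions)
selected = Counter(group for group, ok in decisions if ok)

# Selection rate per group, compared against the highest-rated group.
rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```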

Given the complexities involved in managing people, we are a long way from being able to rely solely on AI to make workplace decisions that require individual discretion and the consideration of factors beyond what AI is capable of today.

Howard Levitt is senior partner of Levitt Sheikh, employment and labour lawyers with offices in Toronto and Hamilton. He practices employment law in eight provinces. He is the author of six books, including The Law of Dismissal in Canada. Jeff Buchan is a lawyer with Levitt Sheikh.
