
ICO launches investigation into discriminatory AI recruitment


The UK’s data watchdog is launching an investigation into potential bias when using artificial intelligence (AI) during the recruitment process. The probe will focus on alleged racial discrimination from automated recruitment systems.

The Information Commissioner’s Office (ICO) investigation is in response to accusations that automated recruitment software has been unfairly ruling out candidates who are members of minority groups.

AI is often used in recruitment, both to avoid overloading managers and to remove potential human bias in those hiring. However, there have long been concerns that it may be doing the exact opposite.

In 2018, e-commerce giant Amazon scrapped its AI recruitment tool after it was found to discriminate against female candidates.

“We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds,” a spokesperson from the ICO said.

“We will also set out our expectations through refreshed guidance for AI developers on ensuring that algorithms treat people and their information fairly.”

AI recruitment bias: Algorithms ‘left to own devices’

Natalie Cramp, CEO of data science firm Profusion, said the investigation is “very welcome and overdue”.

Cramp said there have been “a number of recent incidents where organisations have employed algorithms for functions such as recruitment, and the result has been racial or sexist discrimination”.

According to Cramp, the issue could be bias that is either programmed into these algorithms or embedded in the data sets they are trained on.

“These algorithms have essentially been left to their own devices, leading to thousands of people having negative impacts on their opportunities,” Cramp added.
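The mechanism Cramp describes can be sketched in a few lines: a system trained on historically skewed hiring records simply learns and reproduces that skew. The data and the frequency-based "model" below are invented for illustration only; a real recruitment system would be far more complex, but the failure mode is the same.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# Group "A" was hired far more often in the past -- these numbers
# are invented purely for illustration.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# A naive scorer that rates candidates by their group's historical
# hire rate: it learns the past bias, not candidate merit.
hires = Counter(group for group, hired in history if hired)
totals = Counter(group for group, _ in history)
score = {group: hires[group] / totals[group] for group in totals}

print(score)  # group A scores far higher than group B
```

Left "to its own devices", such a system would keep ranking group B candidates lower indefinitely, because nothing in the pipeline questions the historical data it was fed.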

Joe Aiston, senior counsel at the law firm Taylor Wessing, said: “If recruitment software analyses writing or speech patterns to determine who weaker candidates might be, this could have a disproportionately negative impact on individuals who do not have English as a first language or who are neurodiverse.

“A decision made by AI to reject such a candidate for a role purely on this basis could result in a discrimination claim against the employer despite that decision not having been made by a human.”

Peter van der Putten, director of the AI Lab at US software firm Pegasystems, said that errors can creep into a machine learning model “even if its designers have the best intentions”.

Van der Putten added: “Therefore, organisations need to make sure that the data, models and logic being used to create their algorithms are as free of prejudice as possible, that AI-powered decisions are continuously monitored for bias, and that material automated decisions come with complementary automated explanation facilities.”
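One common heuristic for the continuous bias monitoring van der Putten mentions is to compare selection rates across groups and flag large gaps for human review. The sketch below uses the "four-fifths" rule of thumb (a ratio below 0.8 warrants scrutiny); the group labels and counts are invented, and this is a first-pass red flag, not a legal test of discrimination.

```python
from collections import Counter

# Hypothetical screening outcomes: (group, passed_screen) pairs.
# Numbers are invented for illustration only.
decisions = [("A", True)] * 50 + [("A", False)] * 50 + \
            [("B", True)] * 20 + [("B", False)] * 80

def selection_rates(decisions):
    """Fraction of each group that passed the automated screen."""
    passed = Counter(g for g, ok in decisions if ok)
    totals = Counter(g for g, _ in decisions)
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Protected group's selection rate divided by the reference
    group's. Values below 0.8 fail the common 'four-fifths'
    rule of thumb and should trigger a human review."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 -> flag for review
```

Running a check like this on every batch of automated decisions, alongside per-decision explanations, is one practical way to keep an algorithm from being "left to its own devices".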