A lead analyst at the Bank of England has warned financial institutions of the danger of bias when implementing AI models.
In a recent blog post, Kathleen Blake said that firms expect their use of AI and machine learning to more than triple over the next three years, heightening the need to understand the ethical considerations that come with the technology.
Blake warned that AI models may be biased by design.
“This is often the result of biases embedded in training data but can also be a result of the structure of the underlying model,” said Blake.
The analyst said these underlying biases can lead to discriminatory algorithmic decisions in sectors such as insurance and banking.
“Through biased data, AI models can embed societal biases and deploy them at scale,” Blake warned, suggesting that relying on historical data to train AI models in areas such as mortgage lending could see the return of unlawful redlining, a discriminatory practice in which mortgage providers denied loans, or offered them on exploitative terms, to residents of predominantly minority neighbourhoods identified through geographic data.
“There is a risk of such algorithms learning to copy patterns of discriminatory decision-making.”
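To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from Blake's post, and the data, feature names and coefficients are entirely synthetic assumptions. It trains a lending model on historical approvals that carry a discriminatory penalty, withholds the protected attribute from the model's inputs, and shows that the model reproduces the bias anyway through a correlated geographic feature.

```python
# Hypothetical sketch: a model trained on biased historical lending decisions
# reproduces redlining via a geographic proxy, even though the protected
# attribute itself is never given to the model. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g. minority-group membership) -- withheld from the model.
protected = rng.random(n) < 0.3

# District correlates strongly with the protected attribute (residential
# segregation), so it acts as a geographic proxy for group membership.
district = np.where(protected, rng.random(n) < 0.8, rng.random(n) < 0.1).astype(float)

# A legitimate creditworthiness signal, independent of group membership.
income = rng.normal(50, 12, n)

# Historical approvals: driven by income, but with a discriminatory penalty
# applied to the protected group -- the bias embedded in the training data.
logit = 0.15 * (income - 50) - 2.0 * protected
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on apparently "neutral" features: income and district.
X = np.column_stack([income, district])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The model learns to penalise the district, copying the historical bias.
preds = model.predict(X)
print(f"approval rate, protected group:   {preds[protected].mean():.2%}")
print(f"approval rate, unprotected group: {preds[~protected].mean():.2%}")
print(f"learned coefficient on district:  {model.coef_[0][1]:+.2f}")
```

Running the sketch shows a markedly lower approval rate for the protected group and a negative learned weight on the district feature: the model never sees group membership, yet copies the historical pattern of discrimination through geography alone.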
The UK is set to host an international summit on AI safety in November, at which governments, industry representatives and academics will discuss how to mitigate the greatest risks posed by frontier AI.
The government-hosted summit has already drawn criticism from opposition politicians, who argue that its focus on the most severe hypothetical risks of AI will overlook the technology's most immediate dangers, including algorithmic bias.