Legal liability forces the technology to colour within the lines.
Ethical artificial intelligence and machine learning may sound like an undergraduate elective, but it is a topic that financial institutions need to address urgently.
Firms are exposing themselves to a new type of risk as they either develop AI and machine-learning models or rely on the growing number of third-party model providers.
Do these new models harm a specific subset of the population or unintentionally use practices that market regulators have deemed illegal?
It can be hard to tell, since AI and machine-learning engines are good at dealing with black and white but horrible when it comes to shades of grey.
These engines are only as good as the data that feeds them.
Most of the data sets used to train AI and machine-learning models are so large that no individual can comprehend everything they might contain. If some or all of the training data is the product of previously biased behaviour, it should not be surprising that the resulting models reproduce a portion of that bias.
Making sure that AI and machine-learning engines colour within the ethical lines is exceedingly tricky, however, when developers have to hard-code the abstract concept of “fairness” in precise mathematical terms.
While researching a paper on the topic, Natalia Bailey, associate policy advisor for digital finance at the Institute of International Finance, found approximately 50 definitions of fairness, she said during a recent AI summit in Midtown Manhattan.
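To see why those definitions collide, consider two of the more common formalisations side by side. The sketch below is a minimal, hypothetical illustration, not any firm's actual compliance check: the loan decisions, group labels and repayment outcomes are invented, and the two functions simply encode “demographic parity” (equal approval rates across groups) and “equal opportunity” (equal approval rates among applicants who would have repaid). The same set of decisions can pass one test and fail the other.

```python
# A minimal sketch with hypothetical data, illustrating two of the
# many competing mathematical definitions of "fairness".

def demographic_parity_gap(approved, group):
    """Difference in approval rates between groups A and B.

    Demographic parity asks that each group be approved at the
    same overall rate, regardless of outcomes.
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [a for a, grp in zip(approved, group) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])


def equal_opportunity_gap(approved, group, repaid):
    """Difference in approval rates among creditworthy applicants only.

    Equal opportunity compares approval rates solely for the members
    of each group who would in fact have repaid the loan.
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [a for a, grp, ok in zip(approved, group, repaid)
                     if grp == g and ok]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])


# Hypothetical loan decisions: 1 = approved, 0 = denied.
approved = [1, 1, 0, 0, 1, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]
repaid   = [True, True, False, False, True, False, True, True]

print(demographic_parity_gap(approved, group))         # 0.0    -> "fair"
print(equal_opportunity_gap(approved, group, repaid))  # ~0.33  -> "unfair"
```

Both groups are approved at the same 50 percent rate, so demographic parity is satisfied, yet every creditworthy applicant in group A is approved while a third of those in group B are denied. Whichever definition a developer hard-codes, the model can look unfair by another.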
Firms may think they have some time to sort this out, as they did with data privacy before various states enacted their data-privacy regimes and the EU rolled out its General Data Protection Regulation. They do not.
As Emma Maconick, a partner at the law firm Shearman & Sterling who spoke on the same panel, noted, the law is already ahead of the game regarding the liability a firm faces from a misbehaving AI. The well-trodden laws that address misbehaving children or employees, known as vicarious liability, also cover supervised and unsupervised AI engines.
If financial institutions have not already incorporated ethical analysis into their AI development process, there is no time to waste.