When the Deep Thought supercomputer in The Hitchhiker’s Guide to the Galaxy revealed that the answer to life, the Universe and everything was 42, it raised what are becoming ever more interesting questions: how do computer algorithms determine answers, and how do we know those answers are correct?

These questions are pressing because algorithms drive modern commerce and everyday life. They can determine anything from investment advice to potential matches on a dating app; from which products shoppers are recommended to buy to whether finance on a new car or house is approved.

The trouble is that it’s far from clear that decisions delivered in this manner are balanced, reliable and correct. Indeed, given that algorithms are fundamentally the product of human choices made to achieve specific ends, it’s doubtful that, left unchecked, they can provide objectivity.

Call for algorithm auditing

Hence many businesses are now being advised of the need to audit their algorithms, just as they would their accounts. By bringing in an expert third party, organisations, their customers and regulators can be reassured that decision making is in safe hands.

It is becoming an even more pressing question for businesses now that artificial intelligence is enabling computers to learn in a way similar to how humans develop insights. As they do, however, the fear is that they can pick up the biases people hold. Scientists at Bath University and Stanford have already shown AI-powered computer systems exhibiting gender and racial biases around everyday names and words.
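The bias findings mentioned above rest on association tests over word embeddings. The sketch below illustrates the general idea with invented toy vectors (not real embeddings, and not the researchers' actual method): it measures whether one name is more strongly associated with a "pleasant" attribute word than another name is.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 3-d "embeddings" for illustration only.
embeddings = {
    "emily":      (0.9, 0.1, 0.2),
    "jamal":      (0.1, 0.9, 0.2),
    "pleasant":   (0.8, 0.2, 0.3),
    "unpleasant": (0.2, 0.8, 0.3),
}

def association(name):
    # How much more strongly is `name` associated with "pleasant"
    # than with "unpleasant"?
    return (cosine(embeddings[name], embeddings["pleasant"])
            - cosine(embeddings[name], embeddings["unpleasant"]))

# A non-zero gap between names flags a bias for an auditor to investigate.
bias_gap = association("emily") - association("jamal")
print(f"bias gap: {bias_gap:.3f}")
```

In a real audit the vectors would come from a trained model and the word sets would be much larger, with significance testing over the differences; the toy gap here simply shows the kind of measurable signal an auditor can report.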

Calling in the auditors

Some businesses are acting already. The founder of Rentlogic, a site that rates New York landlords, was recently featured in Wired magazine for commissioning an audit to provide reassurance that his algorithm was operating fairly. The Mayor of New York, Bill de Blasio, has similarly announced a task force to ensure that the algorithms driving decision making in the city are audited and do not exhibit bias.

It is likely to develop into a wider trend because, as a previous report we published outlined, there are responsibilities companies need to be aware of when rolling out AI systems. There are potential issues around the proper performance of systems, as well as risks around corporate governance, accountability, and legal and regulatory compliance.

For example, since May 2018, individuals have had the right, under GDPR, to receive certain explanations regarding any decision made about them by automated processing alone. Auditing of algorithms will assist companies in explaining such decisions and, importantly for the GDPR requirement of accountability, will provide evidence that their systems are properly governed and operate without discrimination.

Without auditing algorithms as routinely as one would audit accounts, businesses are going to find it challenging to demonstrate with confidence that their computer systems are acting accurately, lawfully and without bias.
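One concrete check an audit of this kind might run is a disparate-impact test on decision outcomes, such as the widely used "four-fifths" rule of thumb. The sketch below uses invented loan-approval data (the groups, outcomes and 0.8 threshold are illustrative assumptions, not a complete audit methodology):

```python
# Invented decision log: (group, approved?) pairs for illustration only.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Fraction of decisions for `group` that were approvals."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("A")
rate_b = approval_rate("B")

# Impact ratio: the lower group's approval rate relative to the higher.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Under the four-fifths rule of thumb, a ratio below 0.8 flags
# potential disparate impact for a human auditor to review.
print(f"impact ratio: {ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```

A real audit would go well beyond this single ratio (sample sizes, confounders, intersectional groups), but even a simple, repeatable check like this gives a business evidence it can show to customers and regulators.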