AI in auditing: Embracing a new age for the profession

08 July 2024. Published by Ash Daniells, Senior Associate

Artificial Intelligence (AI) is a relatively new concept for many (ignoring those versed in ’80s sci-fi films); it’s something many don’t know much about and certainly don’t knowingly use in their day-to-day lives. However, that’s not the case for everyone. Auditors have long been reaping the benefits of AI, but are they just scratching the surface of what AI can offer, and what impact will increased use have on their insurance requirements and the claims they face?


The origins of auditing can be traced back to the 18th century. I know, a blog on auditing and we’re starting with a history lesson? But don’t stop reading, trust us. For a process that’s been in place for centuries, you might expect auditors to be slow to embrace the opportunities offered by AI. However, that’s not the case. In fact, auditors (or certainly the larger auditing firms) have been making use of AI for some time, typically to streamline audits and make the entire process more efficient. Of course, the arrival of AI products such as ChatGPT means that AI is now much more widely available. We anticipate that AI’s use in audits will increase significantly over time, and that most auditors will eventually make use of AI to improve their processes. In what ways, we hear you ask? Read on to find out.

Use of AI in auditing

The exact ways in which auditors are currently using AI remain largely unknown (until such time as firms start to reveal their uses). It’s anticipated, though, that AI will be used in the following ways:

  • real-time auditing undertaken 24/7
  • analysis of large volumes of data for patterns and anomalies
  • identifying ‘unusual’ transactions
  • stress testing to assess a firm’s resilience and predict future outcomes, including whether it can remain a going concern.

In short, AI could take on some of the more labour-intensive tasks, freeing auditors for more complex work (a simple illustration of the anomaly detection point follows below).
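To make the pattern and anomaly bullets above more concrete, here is a minimal, purely illustrative sketch of how flagging ‘unusual’ transactions might look. It uses scikit-learn’s IsolationForest; the column names, figures and parameters are all hypothetical, and real audit tooling will be considerably more sophisticated.

```python
# Purely illustrative: flagging 'unusual' ledger entries with an
# off-the-shelf anomaly detector. All data and parameters are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical ledger extract: amount, posting hour and days to period end.
ledger = pd.DataFrame({
    "amount": [120.0, 95.5, 101.2, 98.7, 250_000.0, 110.3],
    "posting_hour": [10, 11, 14, 9, 3, 15],         # a 3 a.m. posting is odd
    "days_to_period_end": [40, 35, 20, 18, 0, 25],  # posted on the last day
})

# IsolationForest isolates points that differ from the bulk of the data;
# 'contamination' is a rough prior on the share of anomalies expected.
model = IsolationForest(contamination=0.1, random_state=0)
labels = model.fit_predict(ledger)  # -1 = flagged as anomalous, 1 = normal

flagged = ledger[labels == -1]
print(flagged)  # a shortlist for a human auditor to investigate
```

The point is not the particular model but the workflow: software narrows thousands of entries down to a shortlist, and professional judgement still decides what the flags actually mean.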

Advantages of using AI in auditing

Many of the recent claims against auditors (or certainly the most high-profile) have centred on mistakes made by auditors in respect of accounting treatment. AI might have caught, or at least limited, the errors in those cases by checking data against accounting standards. Equally, AI can provide more accurate risk assessments, which could in turn give better insight into a company’s financial health and viability – meaning auditors will be able to establish earlier, and more easily, whether a company is in financial difficulty. The use of AI is also likely to reduce the risk of failing to spot issues such as fraud. For example, AI algorithms can review large volumes of data, identifying patterns and anomalies – and, as a result, potentially spot fraudulent activity more promptly.

Some claims arise because auditors fail to ask the right questions and/or to operate with sufficient professional scepticism – perhaps because the auditor has worked with the client for several years, or simply believes what they are told without challenge. AI may be able to analyse data more dispassionately and could be useful in identifying gaps and testing the answers provided.

Without wishing to oversimplify, AI has the potential to improve audit procedures – providing better insights and uncovering issues that may otherwise go unnoticed. Whilst eliminating error entirely is unlikely, the damage that does occur can be limited with appropriate use of AI.

Risks of using AI in auditing

The use of AI (unsurprisingly) is not without risk. There are various examples of problems that may arise, and the International Monetary Fund (IMF) published a report in August 2023 which considered the risks involved when AI is used in financial services. We consider some of these risks below.

Whilst AI can remove some of the risk of human error, it still relies on humans inputting the correct data. A formula/test will need to be created, and any errors in it are unlikely to be picked up immediately – meaning a flawed formula/test could be applied to a number of audits for different clients, creating a systemic risk. Put simply, there is still a reliance on the correct data being input at the outset, and that is arguably even more important where AI is being used. In the same vein, AI cannot identify whether an answer is actually right or wrong – it can only confirm that the data/test has been applied correctly. So, whilst reliance can be placed on AI and the work it produces, there is no guarantee it will be correct.

Whilst one of the benefits of AI is that it can do things humans simply cannot (or perform a task far quicker than a human), the corresponding risk is a lack of transparency about how outcomes are reached. When you cannot see how a decision has been made, the opportunity for oversight is potentially lost. Auditors will not be immune from claims if they are found to have placed too much reliance on AI.

It’s possible that human bias may also filter through into a system’s algorithms – a scary thought, and one that does sound like a bad Terminator sequel. There’s also an undeniable risk of a data breach: AI models require vast amounts of data to run efficiently, so auditors will need to ensure that confidential data is ring-fenced and secured. A breach involving the volume of data held in AI systems could be catastrophic for a firm.

Insurance implications

The risks associated with AI could affect a variety of insurance policies; however, in the context of auditing, professional indemnity policies are likely to be the most affected, given the risk of claims in professional negligence and other torts.

A key question for claims arising out of the use of AI is what caused the loss – a data breach or an AI-generated result – and who should be held responsible. Should it be the manufacturer, the developer, the user, a mixture of all parties, or even someone (or something) else entirely?

In order to ascertain where the responsibility lies, insurers may request information including:

  • whether the loss was caused as a direct or indirect result of the use of the AI system
  • how the loss came about, for example as a result of user error or system malfunction and/or
  • whether the loss could have been foreseen.

Given the requirement for professionals providing specialist services to implement recognised practices and procedures, the Master of the Rolls, Sir Geoffrey Vos, recently raised the question of whether a business or individual could in future be negligent for failing to introduce AI into their practices. It’s certainly food for thought and, in our litigious culture, it is possible that a claim may eventually arise for this very reason.

Missed information is often a cause of professional indemnity claims within accounting and auditing processes – for example, information provided to the adviser being overlooked because of its sheer volume, or a misunderstanding of how principles are applied. Whilst AI will look to minimise the number of errors that arise, it’s possible that human error will remain a primary cause of loss, so it’s critical that businesses have adequate training and controls in place for users of AI.

As AI continues to develop, we are likely to see further guidance and judgments from courts and regulators, providing greater clarity on responsibility and potential redress when things go wrong. Further regulatory guidance is expected in early 2025 and is keenly awaited.

With thanks to Amy Corke (Claims Handler at Arch Insurance) for her contribution.
