Threesixty: Due diligence tips for the AI revolution

How AI can be used compliantly to the advantage of advice firms and their clients is not widely understood.

However, with new and exciting AI-based products entering the market, firms are beginning to worry that not integrating it into their processes now could leave them at a disadvantage.

Running a typical advice firm involves a myriad of processes. Most will already be supported by some form of technology but, in many cases, AI has the potential to improve efficiency even further.

Systems can already be used to record, listen to and interpret client meetings, write summaries of those meetings and draft suitability reports. If all that's needed is a sense check and final approval by an adviser, what impact could this have on your business?

It won't be long before affordable, simple-to-use AI solutions are available for other core business processes. Integrations with other systems, such as back-office and electronic identity verification systems, are also on the horizon.

Even firms that don't directly purchase an AI solution for their business will be exposed to AI-based developments such as provider chatbots and the inclusion of AI in mobile phone operating systems (for example, Apple Intelligence, Samsung Galaxy AI, and Google Gemini AI).

AI is already being used by large financial services companies to tackle financial crime, such as in transaction monitoring. Unfortunately, however, the power of AI is also being harnessed by cybercriminals to improve the targeting of their ransomware and phishing scams.

Regulatory focus

Reflecting the government’s ‘pro-innovation’ approach to AI regulation, financial services regulators have a close interest in the safe and responsible adoption of AI within the industry.

There have been several policy papers on this topic over the past few years. Most recently, in April, the Financial Conduct Authority published an AI update, explaining that:

  • The FCA’s role is as a technology-agnostic financial services regulator; it is not a regulator of technology.
  • The FCA actively supports beneficial innovation. It believes AI can bring efficiencies and benefits to firms and their clients; however, there are potential risks which need to be addressed.
  • The FCA’s focus is on how firms can safely and responsibly adopt AI technology and on understanding the impact AI innovations are having on consumers and markets.
  • Many risks related to AI are not necessarily unique to AI and can therefore be mitigated within existing frameworks.
  • The Senior Managers and Certification Regime emphasises senior management accountability and is relevant to the safe and responsible use of AI.
  • Where a firm’s use of AI results in a breach of FCA rules (for example, because an AI system produces decisions or outcomes which cause consumer harm), there is already a range of regulatory mechanisms through which firms can be held accountable.
  • AI needs to be considered in the context of existing regulatory obligations, including the Consumer Duty. The Consumer Duty cross-cutting obligations apply when using AI solutions, and firms using AI need to consider any differing needs and characteristics of vulnerable customers.
  • Data security, cyber and fraud risks need to be assessed. Where AI is used to process personal data, data protection obligations need to be met.

Over the next 12 months, the FCA will continue to improve its understanding of AI deployment in UK financial markets and collaborate with other regulators through the Digital Regulation Co-operation Forum.

Due diligence

Senior managers are ultimately accountable for the activities of their firm, including the deployment of any AI solutions, so it’s critical these solutions are rigorously assessed before implementation.

Conducting due diligence on third-party AI solutions is acknowledged as a key challenge, because firms and their staff may not have the technical expertise to understand how an AI system generates its outputs.

There is a risk of over-reliance on information provided by unregulated third-party providers. Any due diligence should include thorough testing of the outputs to make sure these are accurate and appropriate.

Where AI systems access and use personal data, it’s essential to consider your data protection obligations.

The Information Commissioner's Office provides guidance and resources on applying data protection principles to the use of AI systems. To minimise data protection risks, you should complete a Data Protection Impact Assessment.

After implementation, you need to monitor the solution to ensure it is delivering as expected and in line with Consumer Duty requirements.

Firms should already have due diligence procedures for assessing third parties and outsourcers. However, with AI systems and tools, it’s worth building in the following additional questions:

  • What functionality does the tool provide?
  • How is client data used, and who owns it? (Is client data locked within the tool and owned by the firm, or can it be processed externally by the AI model in ways the client would not expect?)
  • What data analysis does the tool perform?
  • What methods does the tool employ to improve its output?
  • Where is the data stored? Data cannot be stored outside the UK unless there are adequacy decisions or safeguards in place.
  • Where applicable, how does the system integrate with other systems, such as back-office systems, company databases, video-conferencing systems and platforms/trading systems?
  • How customisable is the system?
  • What service standards are in place, and are there quality assurance processes to ensure the tool continues to meet them?
  • How willing is the provider to deliver staff training?

AI isn’t going away and technology providers will continue to innovate.

AI tools can't replace a human paraplanner, adviser or investment manager, but they can help them perform their roles better and give them more time to use their soft skills with clients.

AI tools should not be avoided. They could bring benefits to your business. However, it is important to bear in mind that many AI systems are productivity tools and have no ‘emotional’ intelligence.

Tony Lewis is policy consultant at Threesixty Services
