AI for UK Financial Services Firms: a Practical Guide

Most AI guidance is written for tech companies. This is written for the kind of regulated UK firm that has compliance to think about, an FCA register entry to protect, and senior managers personally accountable for what the business does with technology.

Written by Dom Leigh · Former mortgage adviser (7 years) and project manager at Bath Building Society · PRINCE2®-certified · Last updated May 2026 · 14-minute read

AI is being adopted across UK financial services faster than most regulators publicly acknowledge, and slower than most technology vendors privately admit. Between the hype and the hesitation, there is a practical question that most firms in this sector actually need answered: where does AI genuinely fit in a regulated UK financial services business, and where doesn't it?

This guide answers that question from a position grounded in the sector. I spent seven years as a regulated mortgage adviser at firms including Bank of Ireland and London & Country, three years as Project Manager at Bath Building Society delivering digital transformation in a regulated environment, and now help UK SMEs implement AI in their operations.

The short answer: AI fits well in the high-volume, low-judgment parts of a financial services firm's workflow. Document handling, client communication, knowledge retrieval, pre-screening of cases, internal administration. It does not fit (yet, and possibly never) in the parts where solely-automated decisions would create significant effects on individuals. The line between these two categories is not a vague boundary. It is defined in regulation. The rest of this guide walks through the practical map.

The state of AI in UK financial services

The adoption picture is uneven. Large institutions have moved fastest, particularly in fraud detection, customer service triage, and document processing. Mid-sized firms are experimenting. Smaller firms, particularly mortgage brokers, IFAs, and small consultancies, are mostly still at the cautious-curiosity stage. The pace is not driven by technology readiness; the technology has been ready for two years. The pace is driven by regulatory uncertainty and the question of senior manager accountability.

35–39%

of UK SMEs across all sectors are actively using AI tools as of mid-2025, but adoption in regulated financial services lags this baseline materially. (Source: UK government AI Activity in UK Business report.)

The FCA has been explicit that it is not introducing AI-specific regulation. Its approach is outcomes-based, technology-neutral, and rests on the principle that existing frameworks (SMCR, Consumer Duty, conduct rules, operational resilience) already cover the safe use of AI. This sounds permissive but is actually more demanding in practice. AI is regulated through the same rules that already govern your business, which means the accountability falls on existing senior managers, and the question is whether they can demonstrate that oversight.

What the FCA actually expects

Most AI consulting content avoids the regulatory question. This guide will not. If you run a regulated UK financial services SME, you have legitimate compliance concerns about AI, and those concerns deserve a direct answer.

Here is the headline. In January 2026, FCA Executive Director David Geale told the Treasury Committee that individuals within financial services firms are "on the hook" for harm caused to consumers through AI. Not firms. Individuals. Under existing rules. This is the framing that should anchor every AI decision a regulated UK firm makes.

Individuals within financial services firms are on the hook for harm caused to consumers through AI.

David Geale, FCA Executive Director, evidence to House of Commons Treasury Committee, January 2026

The senior manager whose Statement of Responsibilities covers a given function is personally accountable for AI used in that function, whether or not the firm has explicitly assigned that accountability. The FCA expressly confirms this applies to smaller FCA-regulated firms classified as Core Firms or Limited Scope Firms under SMCR, not just to large institutions.

If you are an SMF holder in a small firm and your business is using AI in any meaningful way (including off-the-shelf tools like Microsoft Copilot or ChatGPT in compliance-adjacent workflows), you are accountable for the outcomes. The fact that the FCA hasn't yet published practical guidance does not change the position. The Treasury Committee has demanded the FCA publish that guidance by the end of 2026, and the FCA's Mills Review (launched 27 January 2026) is the vehicle expected to deliver it. Until then, the rules that already apply are the rules.

Where AI genuinely fits in a UK FS firm

The opportunities are real. The categories where AI delivers reliable value in regulated firms are well-mapped:

Document handling

Identity documents, bank statements, payslips, accountant references, tax returns, valuation reports, KYC packs. AI now reliably extracts structured data from these formats, flags inconsistencies between documents, and surfaces missing items. The compliance work itself still requires human sign-off (and should), but the extraction and matching work can be automated. Typical time recovered: 60 to 90 minutes per case.
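The extraction-and-matching step described above can be sketched in a few lines. This is a minimal illustration, not a product: the document names, field names, and 5% tolerance are my own assumptions, and in practice the extracted values would come from an OCR or LLM extraction layer.

```python
# Illustrative sketch: cross-checking an extracted document pack before human review.
# Document names, field names, and the 5% tolerance are hypothetical assumptions.

REQUIRED_DOCS = {"id_document", "payslip", "bank_statement"}

def screen_document_pack(extracted: dict) -> list[str]:
    """Return flags for the adviser to chase; an empty list means nothing outstanding."""
    flags = []
    # Surface missing items
    for doc in sorted(REQUIRED_DOCS - extracted.keys()):
        flags.append(f"missing: {doc}")
    # Flag inconsistencies between documents (declared vs evidenced income)
    declared = extracted.get("payslip", {}).get("net_monthly_pay")
    credited = extracted.get("bank_statement", {}).get("salary_credit")
    if declared and credited and abs(declared - credited) / declared > 0.05:
        flags.append(f"income mismatch: payslip {declared} vs statement {credited}")
    return flags
```

The point of the sketch is the shape of the workflow: the function only ever produces flags for a human, never a decision.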

Client communication

Drafting responses to client enquiries, summarising case status, generating update emails, preparing meeting notes from CRM data. AI drafts, humans review and approve before sending. This pattern keeps the firm safely on the human-in-the-loop side of every regulatory line. Typical time recovered: 1 to 3 hours per adviser per week.

Knowledge retrieval

The firm's accumulated knowledge (policy documents, lender criteria, suitability frameworks, internal procedures) becomes searchable and queryable in natural language. Advisers stop hunting through SharePoint or shared drives for the document they vaguely remember. Particularly valuable for firms with high case complexity and broad product knowledge requirements.

Internal administration

Diary management, expense categorisation, meeting transcription and action capture, internal reporting. These workflows are low-stakes from a regulatory perspective and offer immediate productivity gains.

Case pre-screening

AI runs initial verification work (data completeness checks, basic affordability flags, document inconsistencies) before a case reaches the adviser or underwriter. The adviser receives an already-screened case. Crucially, the decision still sits with the human. The AI is a preparation step, not a decision step. This keeps the workflow on the correct side of Article 22 UK GDPR.

Where AI doesn't fit (the line you don't cross)

Equally important is what AI should not be used for in a UK financial services firm, particularly an SME without the governance infrastructure of a large institution:

Solely-automated lending or underwriting decisions

A decision to approve or decline a mortgage, loan, or credit facility, made solely by an automated system without meaningful human involvement, falls squarely within Article 22 UK GDPR's restrictions. Even if you wanted to deploy this (and most SMEs don't have the technical capacity to do so responsibly), the regulatory governance overhead is significant and is not where SMEs should be playing in 2026.

Solely-automated suitability assessments

The Consumer Duty places clear expectations on firms to ensure good outcomes for retail customers. A suitability assessment made solely by an automated system, without meaningful adviser involvement, runs directly against this duty. The adviser stays in the loop. Always.

Solely-automated KYC or AML decisions

Flagging, screening, and pre-checking with AI is fine and increasingly standard. Final KYC or AML decisions need human sign-off. The audit trail must demonstrate this.

Customer-facing chatbots making product recommendations

The FCA's perimeter report (March 2026) explicitly flagged the risk of general-purpose AI tools offering financial guidance or recommendations. If your customer-facing chatbot can be construed as giving regulated advice, you have a perimeter problem. Keep customer-facing AI to administrative tasks (appointment booking, document collection, status updates) unless you have specifically designed and governed it to operate within your regulatory permissions.

A practical implementation pattern for SME firms

Based on what fits and what doesn't, the implementation pattern that works for UK financial services SMEs follows three principles.

One: AI prepares, humans decide. Every AI-touched workflow should have a clear human decision point before any output reaches the customer or affects a customer outcome.

Two: Document the governance from day one. Every AI use case needs a written record of: what the AI does, who is accountable, what the human review step is, how outcomes are monitored, and what happens if something goes wrong. This is the audit trail that demonstrates SMCR oversight.

Three: Start with the lowest-stakes use case that delivers the highest time return. Document handling and client communication are almost always the best starting points. They deliver immediate productivity gains, sit safely away from Article 22 territory, and build the team's confidence with AI tools before any more complex use case is attempted.
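The governance record in principle two can be kept as a simple structured register rather than a policy binder. A minimal sketch follows; the field names are my own, not an FCA template, and the point is only that every field is filled in before a use case goes live.

```python
# Sketch of a per-use-case AI governance record. Field names are illustrative
# assumptions, not an FCA or SMCR template.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    name: str              # what the AI does
    accountable_smf: str   # which senior manager's Statement of Responsibilities covers it
    human_review_step: str # where a person signs off before output is acted on
    monitoring: str        # how outcomes are checked over time
    incident_route: str    # what happens if something goes wrong
    last_reviewed: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # A use case should not go live with any governance field left blank
        return all([self.name, self.accountable_smf, self.human_review_step,
                    self.monitoring, self.incident_route])
```

One record per AI use case, reviewed periodically, is the audit trail that demonstrates SMCR oversight in plain English.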

What if I'm inside a network or compliance framework?

A practical reality for many UK mortgage brokers, IFAs, and ARs: AI policy is set at the network level, not the firm level. Your principal firm or network may have explicit rules about which AI tools you can use, what data you can put into them, and what workflows are permitted.

If you're an Appointed Representative or a member firm in a compliance framework, the first step is checking your network's published AI policy (most major UK networks have issued one in 2025 or early 2026). The second step is recognising that most networks' AI policies are permissive in low-risk areas (administrative use, document drafting, internal productivity) and restrictive in customer-facing or advice-adjacent areas.

This means you almost certainly can adopt AI for internal productivity gains without breaching your network's policy. You almost certainly cannot deploy a customer-facing AI advice tool without explicit approval. The middle ground (case preparation, document handling, communication drafting) is where your network's policy needs to be read carefully.

Where to start

AI in a UK financial services firm is not a yes-or-no question. It is a where-and-how question. The AI Readiness Score pays particular attention to the dimensions that matter most in regulated environments: data quality, process documentation, and governance baseline.

Does AI fit in UK financial services SMEs?

Yes, in most operational and administrative contexts. AI is well-suited to document extraction, client communication drafting, knowledge retrieval, and case preparation. It is not appropriate for solely-automated affordability or suitability decisions, which fall within Article 22 UK GDPR's restrictions and run against the Consumer Duty. Network policy may impose additional restrictions on customer-facing use.

Can individual advisers use AI in their day-to-day work?

Yes, for client communication, meeting preparation, document summarisation, and knowledge retrieval. AI should not be used for solely-automated suitability assessments or solely-automated recommendations. The adviser remains in the loop on every regulated advice action. Most UK networks permit AI for administrative and preparatory work without specific approval.

What does the FCA expect from firms using AI?

The FCA expects firms to apply existing rules (SMCR, Consumer Duty, conduct rules, operational resilience) to their use of AI. There is no separate AI rulebook. Senior managers are personally accountable for AI used in their domain. The FCA has launched the Mills Review (January 2026) into AI in retail financial services and is expected to publish practical guidance by end of 2026.

When does Article 22 UK GDPR apply to AI use?

Article 22 applies only to decisions made solely by automated processing with legal or similarly significant effects on individuals. Most SME AI use (document handling, communication drafting, knowledge retrieval, administrative work) falls outside Article 22 because it involves meaningful human involvement. AI that influences a decision but where a human reviews and has discretion to alter the decision is not subject to Article 22.
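That two-part test can be written down as a screening function. This is a sketch of the legal test as described above, not legal advice, and the dictionary keys are my own naming:

```python
def requires_article_22_safeguards(workflow: dict) -> bool:
    """Sketch of the Article 22 UK GDPR test. The restriction bites only when
    a decision is BOTH solely automated AND has legal or similarly significant
    effects on the individual. Meaningful human review, with real discretion
    to alter the outcome, takes a workflow outside Article 22."""
    solely_automated = not workflow.get("meaningful_human_review", False)
    significant_effect = workflow.get("effect") in {"legal", "similarly_significant"}
    return solely_automated and significant_effect
```

Note that "meaningful" is doing real work here: a human rubber-stamping AI outputs without genuine discretion would not take a workflow outside Article 22, and no flag in a dictionary can substitute for that judgment.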

Who is accountable for AI under SMCR?

The senior manager whose Statement of Responsibilities covers the relevant function, under the Senior Managers and Certification Regime. The FCA has confirmed this applies even to smaller firms classified as Core Firms or Limited Scope Firms. The FCA has also decided NOT to create a separate AI-specific Senior Management Function, meaning accountability sits with existing senior managers.

Does the Consumer Duty apply to AI?

Yes. The Consumer Duty requires firms to deliver good outcomes for retail customers. Any AI use that affects a retail customer outcome (advice, suitability, pricing, service delivery) is in scope. Firms must be able to demonstrate that AI use supports rather than undermines good outcomes.

Do I need a DPIA before implementing AI?

A Data Protection Impact Assessment is mandatory under UK GDPR for any processing likely to result in high risk to individuals. Most automated decision-making subject to Article 22 falls into this category. Many other AI uses don't legally require a DPIA, but it is good practice to conduct one for any meaningful AI implementation in a regulated firm.

How do firms handle AI explainability?

This is the open question the FCA has acknowledged and the Mills Review is examining. The practical answer for SMEs in 2026: scope AI use to cases that don't require full model explainability (document extraction, communication drafting, retrieval), keep humans in the loop on every customer-affecting decision, maintain decision logs and audit trails, and document the governance in plain English. You don't need to be able to explain the model's weights. You need to be able to demonstrate the controls around it.