AI Compliance for UK Financial Services SMEs: the 2026 Reference
A complete reference on what the FCA, ICO, and Treasury Committee expect from UK financial services SMEs using AI. Built from primary sources. Written for senior managers, compliance officers, and firm principals who need to know where the lines are.
Written by Dom Leigh · Former mortgage adviser and Project Manager at Bath Building Society · PRINCE2®-certified · Last updated May 2026 · 18-minute read · Includes downloadable AI Compliance Checklist
The UK has deliberately chosen not to introduce AI-specific regulation for financial services. The FCA's stated position is that existing frameworks (the Senior Managers and Certification Regime, the Consumer Duty, conduct rules, operational resilience requirements, the UK GDPR) already cover the safe use of AI by regulated firms. This is more demanding in practice than it sounds, because the accountability already exists and the rules already apply. The question is whether your firm can demonstrate it.
This guide is the complete reference for UK financial services SMEs on what those frameworks actually require when AI is in use. It draws directly from primary sources: the FCA's published AI approach, the ICO's Article 22 guidance, the Treasury Committee's January 2026 report on AI in financial services, the FCA's Mills Review, and the Data (Use and Access) Act 2025.
The FCA's stated approach
The FCA's position, published on its AI approach page, can be summarised in one sentence: AI is regulated through existing outcomes-based frameworks, not through AI-specific rules.
We do not plan to introduce extra regulations for AI. Instead, we'll rely on existing frameworks, which mitigate many of the risks associated with AI.
— FCA, "AI and the FCA: our approach"
This stance has three implications for SME firms.
First, your existing rule set covers your AI use. There is no separate compliance project to start. The conduct rules, SMCR, Consumer Duty, and operational resilience requirements that already govern your business now also govern your use of AI.
Second, the regulator is monitoring closely. The FCA and Bank of England launched the AI Consortium in May 2025 to gather views from financial services and technology sectors. The FCA's periodic AI survey, supervisory engagement, and market intelligence all contribute to its evolving view. You are not in a regulatory vacuum.
Third, the regulator expects you to be able to demonstrate compliance now. The absence of AI-specific rules does not mean the absence of AI-specific expectations.
The Mills Review
On 27 January 2026, the FCA launched a long-term review into how AI could reshape retail financial services. Led by Sheldon Mills, the review will report to the FCA Board in summer 2026, with practical guidance expected by end of 2026.
This review does not change our regulatory approach. We remain outcomes-based and technology-neutral, ensuring greater flexibility for us and firms to adapt to technological change and market developments.
— Sheldon Mills, FCA, Supercharged Sandbox Showcase speech, January 2026
Two things matter about the Mills Review for SME firms.
First, it confirms that more detailed guidance is coming. The Treasury Committee's January 2026 report explicitly demanded that the FCA publish comprehensive practical guidance on AI by the end of 2026, covering the application of existing consumer protection rules to AI use, and the accountability and assurance expected of senior managers under SMCR where AI causes consumer harm. The FCA has accepted this expectation.
Second, until that guidance arrives, the existing rules apply. The FCA has not signalled any leniency in the interim. The phrase used by FCA Executive Director David Geale to the Treasury Committee in January 2026 was that individuals are "on the hook" for AI-caused consumer harm now, under existing rules.
SMCR and AI: who is personally accountable
The Senior Managers and Certification Regime is the framework the FCA uses to enforce personal accountability for individuals at regulated firms. Every senior manager has a Duty of Responsibility under the Financial Services and Markets Act 2000 to take reasonable steps to prevent or stop a breach of FCA rules in their area of responsibility. This Duty applies to AI use within their domain whether or not AI is explicitly mentioned in their Statement of Responsibilities.
Does this apply to small firms?
Yes. The FCA has expressly confirmed that SMCR accountability for AI applies to smaller firms classified as Core Firms or Limited Scope Firms, not just to Enhanced Firms. This is the single most important point for small UK financial services firms reading this guide. The mortgage broker, IFA, or boutique advisory firm operating under SMCR is subject to the same accountability principle as a large institution. The fact of being small does not reduce the personal accountability of the senior manager.
Was a separate AI SMF created?
No. The FCA and PRA both considered and rejected the idea of creating a dedicated Senior Management Function for AI. The accountability sits with existing senior managers in existing functions. For most SME firms, this means the SMF holder responsible for operations, risk, or compliance carries the AI accountability for their domain. There is no separate AI SMF to designate.
What does "reasonable steps" mean for AI?
The FCA has not yet published its detailed guidance on this (it is expected as part of the Mills Review output by end of 2026). Based on the regulator's stated position and analysis from major UK law firms, reasonable steps typically include: understanding what AI is being used in the senior manager's domain, having documented governance for AI use, ensuring meaningful human review of AI-influenced decisions, monitoring outcomes, and maintaining the ability to explain to the regulator how AI is being controlled. The senior manager is not required to understand the technical internals of the model. They are required to demonstrate the controls.
The Consumer Duty and AI
The Consumer Duty, in force since 31 July 2023, requires firms to deliver good outcomes for retail customers. The FCA has explicitly identified the Consumer Duty as relevant to safe AI use in its published AI approach.
For SME firms, the Consumer Duty's relevance to AI shows up in three places.
First, the products and services outcome: any AI used in product design, pricing, or distribution must support good customer outcomes.
Second, the price and value outcome: AI that affects pricing or value assessment must be documentable and explainable.
Third, the consumer understanding outcome: any AI-generated customer communication must support customer understanding, not undermine it. This is particularly relevant for firms using AI-drafted client correspondence: the duty to ensure clear, fair, and not-misleading communication still rests with the firm.
Article 22 UK GDPR and solely-automated decisions
Article 22 of the UK GDPR is the most-cited provision in any discussion of AI compliance in the UK. Its scope is narrower than most people assume, and its requirements, when it does apply, are stricter than most people prepare for.
When does Article 22 apply?
Article 22 restricts decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on individuals. Three conditions must all be present.
First, the decision is made solely by automated means. If a human reviews the decision with active discretion to alter it (not rubber-stamping), Article 22 does not apply.
Second, the decision produces legal or similarly significant effects. In financial services, this includes credit and lending decisions, insurance underwriting decisions, account closure or suspension decisions, and material service-access decisions.
Third, the processing involves personal data.
The Data (Use and Access) Act 2025
The Data (Use and Access) Act, in force from 19 June 2025, reframes Article 22 from "prohibition with exceptions" to "right of challenge with safeguards." The new Article 22C safeguards must apply where solely-automated decisions with significant effects are made. The ICO's interpretation: organisations are encouraged to apply Article 22C safeguards to all decisions unless they are confident they can accurately separate those who will experience legal or similarly significant effects from those who will not.
What "meaningful human involvement" means
This is the test that determines whether Article 22 applies at all. The ICO's long-standing view, recently reinforced, is that human involvement must be active rather than tokenistic. A human reviewer must have actual discretion to alter the decision. Rubber-stamping does not count: a reviewer who routinely approves AI-generated decisions without examining them does not provide meaningful involvement.
Practical implications for SME firms
Most SME AI use cases (document handling, communication drafting, knowledge retrieval, internal admin, case preparation) involve meaningful human involvement and therefore fall outside Article 22. The senior team member reviews the AI output and has discretion to alter or reject it. This is the safe operating zone for SME firms.
The use cases that approach Article 22 territory (solely-automated lending decisions, solely-automated suitability assessments, solely-automated KYC outcomes) require Article 22C safeguards: transparency, the right to request human intervention, the right to contest the decision, and a Data Protection Impact Assessment.
Data Protection Impact Assessments for AI use
A DPIA is mandatory under UK GDPR for any processing likely to result in high risk to individuals. AI-influenced automated decision-making typically falls into this category. The ICO recommends conducting a DPIA for any meaningful AI implementation in a regulated firm, whether or not it strictly requires one.
A good DPIA for AI includes a description of the AI system and what it does; the categories of personal data processed; the lawful basis for processing; the risks to individuals identified; the mitigating measures in place (including the human review step); the monitoring approach; and the point of contact for individuals who want to challenge an outcome.
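For firms that keep their DPIA records electronically, the elements listed above can be captured in a simple structured record. The sketch below is illustrative only: the field names are this guide's assumptions, not an ICO-prescribed format.

```python
from dataclasses import dataclass

# Illustrative DPIA record for one AI use case.
# Field names are assumptions for this sketch, not an ICO-prescribed format.
@dataclass
class DPIARecord:
    system_description: str           # what the AI system is and does
    personal_data_categories: list    # categories of personal data processed
    lawful_basis: str                 # e.g. legitimate interests, contract
    identified_risks: list            # risks to individuals
    mitigations: list                 # including the human review step
    monitoring_approach: str          # how outcomes are tracked over time
    challenge_contact: str            # contact for individuals who want to
                                      # challenge an outcome

    def is_complete(self) -> bool:
        """A record is complete only when every element has been filled in."""
        return all([
            self.system_description,
            self.personal_data_categories,
            self.lawful_basis,
            self.identified_risks,
            self.mitigations,
            self.monitoring_approach,
            self.challenge_contact,
        ])
```

The `is_complete` check mirrors the point of the list above: a DPIA with any element missing (no monitoring approach, no challenge contact) is not yet a DPIA.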
Operational resilience and AI vendors
The FCA's operational resilience requirements apply to AI in two ways: as a technology dependency within important business services, and as a third-party relationship for firms using vendor AI tools.
For SME firms, the practical implications are: identify the AI services that support your important business services, document the impact tolerances for those services if they were unavailable, ensure your AI vendors meet your operational resilience expectations, and have a recovery plan for AI outage or degraded performance.
The Treasury Committee's January 2026 report recommended that HM Treasury designate major AI and cloud providers as Critical Third Parties under the UK CTP regime. If this happens, large AI providers will face direct UK regulatory oversight as critical third parties to financial services firms.
A practical AI compliance checklist
The following checklist captures the practical compliance baseline for an SME firm starting to use AI.
- Identify every AI tool currently in use in the firm, including embedded AI features in existing software.
- Map each AI use case to the firm function it supports and identify the SMF holder responsible for that function.
- Document the human review step in each AI workflow. For workflows that produce customer-affecting outcomes, confirm meaningful human involvement is present and effective.
- Conduct a DPIA for each significant AI use case, with particular attention to use cases that could approach Article 22 territory.
- Check your professional body, network, or principal firm's AI policy for additional restrictions or requirements applicable to your firm.
- Document the Consumer Duty implications of any AI used in product design, pricing, or customer-facing communication.
- Ensure your operational resilience documentation reflects the AI tools supporting your important business services.
- Establish a monitoring approach to track AI outcomes, flag drift or degradation, and identify when human review is catching errors.
- Train the team on AI use, including what AI may and may not be used for, what data may and may not be put into AI tools, and how to handle AI-generated output.
- Review the checklist annually or whenever a significant new AI tool is introduced.
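The first three checklist items amount to building, and then checking, an AI use register. As an illustrative sketch (the field names and the check are this guide's assumptions, not an FCA-prescribed format), a minimal register review might look like this:

```python
# Minimal AI use register check: flag entries missing the governance
# basics the checklist calls for. Field names are illustrative only.

REQUIRED_FIELDS = ("tool", "use_case", "responsible_smf", "human_review_step")

def incomplete_entries(register: list[dict]) -> list[str]:
    """Return the tool names of register entries missing any required field."""
    flagged = []
    for entry in register:
        if any(not entry.get(f) for f in REQUIRED_FIELDS):
            flagged.append(entry.get("tool", "<unnamed>"))
    return flagged

register = [
    {
        "tool": "Document summariser",
        "use_case": "Case preparation",
        "responsible_smf": "SMF16 (Compliance Oversight)",
        "human_review_step": "Adviser reviews every summary before use",
    },
    {
        "tool": "Email drafting assistant",
        "use_case": "Client correspondence",
        "responsible_smf": "SMF3 (Executive Director)",
        "human_review_step": "",  # missing: flagged for follow-up
    },
]

print(incomplete_entries(register))  # flags the entry lacking a review step
```

The point of the sketch is the shape of the record, not the code: every AI tool maps to a use case, a named SMF holder, and a documented human review step, and any gap is visible at a glance.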
Download the full checklist
AI Compliance Checklist for UK Financial Services SMEs
The expanded 12-page version of the checklist, with detailed prompts under each item, governance documentation templates, and a sample AI Use Register designed for SME firms.
No spam, no sponsors. Unsubscribe any time.
Where to start
The compliance baseline above is the floor, not the ceiling. The firms getting AI right in 2026 are not the ones with the most sophisticated AI; they are the ones with the cleanest governance around modest AI use cases that genuinely save time without putting the firm or its senior managers in a difficult position.
If you are an SME firm starting to think seriously about AI, the right next step is to score your readiness across the dimensions that matter most in regulated environments.
Take the AI Readiness Score.
Free. Five minutes. Includes a regulated-firm-specific assessment of governance baseline.
AI compliance, answered.
Is there AI-specific regulation for UK financial services?
No. The FCA has explicitly decided not to introduce AI-specific regulation. AI is regulated through existing frameworks including SMCR, Consumer Duty, conduct rules, operational resilience, and UK GDPR. The FCA's stated position is outcomes-based and technology-neutral.
When will the FCA publish practical AI guidance?
By the end of 2026. The Mills Review (launched 27 January 2026) is expected to deliver recommendations to the FCA Board in summer 2026. The Treasury Committee has formally requested that the FCA publish comprehensive practical guidance on AI consumer protection and SMCR accountability by year-end 2026.
Does SMCR accountability for AI apply to small firms?
Yes. The FCA has expressly confirmed that SMCR accountability for AI applies to Core Firms and Limited Scope Firms, not just Enhanced Firms. Small UK financial services firms operating under SMCR are subject to the same accountability principle as large institutions.
Is there a dedicated Senior Management Function for AI?
No. The FCA and PRA considered and rejected creating an AI-specific SMF. Accountability for AI sits with existing senior managers whose Statements of Responsibilities cover the relevant function.
When does Article 22 UK GDPR apply to AI use?
Article 22 restricts solely-automated decisions with legal or similarly significant effects on individuals. In financial services, this affects solely-automated lending, underwriting, suitability, and material service-access decisions. Most SME AI use (document handling, communication, knowledge retrieval) involves meaningful human review and falls outside Article 22.
What counts as meaningful human involvement?
The reviewer must have active discretion to alter the decision. Rubber-stamping AI output without examining it is not meaningful involvement. The ICO has reinforced this position multiple times and is expected to issue updated guidance under the Data (Use and Access) Act 2025.
Is a DPIA always required for AI use?
Not strictly. A DPIA is mandatory under UK GDPR for processing likely to result in high risk to individuals. Most automated decision-making subject to Article 22 falls into this category. For other AI uses, a DPIA is not legally required but is good practice for any meaningful AI implementation in a regulated firm.
How does the Consumer Duty apply to AI?
The Consumer Duty requires firms to deliver good outcomes for retail customers. AI used in product design, pricing, distribution, or customer communication is in scope of the duty. Firms must demonstrate that AI use supports rather than undermines good outcomes, particularly under the consumer understanding outcome where AI-drafted communication is involved.
What documentation should a firm keep to evidence AI oversight?
At minimum: a register of AI tools in use, documentation of each use case including the responsible SMF, the human review step, a DPIA where applicable, monitoring records, and incident logs for any AI-related issues. This is the audit trail that demonstrates SMCR oversight.
What happens if an AI vendor fails or has an outage?
Your operational resilience documentation should anticipate this. The Treasury Committee has recommended that major AI providers be designated as Critical Third Parties under the UK CTP regime, which would impose direct UK regulatory oversight on those providers. Until that happens, the responsibility for resilience around AI use sits with the regulated firm.
The FCA's published approach is available at fca.org.uk/firms/innovation/ai-approach. The Treasury Committee's January 2026 report is published on the UK Parliament website. The Sheldon Mills speech launching the Mills Review is available on the FCA website. The ICO's Article 22 guidance is at ico.org.uk.
AI for UK Financial Services Firms
Where AI fits in a regulated UK financial services SME, where it doesn't, and what the FCA expects.
How to Implement AI in a UK SME
The process-first methodology behind every AI Process Audit.
The AI Process Audit methodology
How the diagnostic is delivered in a structured 2-week engagement.