The Bank of England published a summary of three late-2025 roundtables with representatives of firms regulated by the Prudential Regulation Authority (PRA) on the responsible adoption of artificial intelligence and machine learning (AI and ML). The sessions aimed to understand what constrains deployment and how the Bank and the PRA could support adoption.

Participants from challenger banks, larger UK-focused banks, global systemically important banks and insurers generally supported the PRA's principles-based, outcomes-focused approach, citing Supervisory Statement 1/23 on Model Risk Management (SS1/23) as enabling responsible innovation. Most saw no present need for detailed AI-specific rules or for a Bank or PRA AI sandbox, regarding the Financial Conduct Authority's (FCA) Supercharged Sandbox and AI Live Testing as sufficient for testing purposes.

The discussions highlighted practical frictions, including cautious second-line risk functions, skills bottlenecks and the difficulty of evidencing compliance as generative AI and agentic systems proliferate. Participants questioned whether traditional model validation centred on model interpretability remains sustainable, calling instead for greater emphasis on testing, monitoring and outcome guardrails.

Firms also pointed to cross-border regulatory fragmentation between the UK approach, US expectations (including Supervisory Letter SR 11-7) and the EU Artificial Intelligence Act as raising compliance costs and limiting scalability, and encouraged the Bank to use international fora to promote greater coordination.

Further constraints included slow procurement and contracting with third-party AI providers, attributed to uneven understanding of regulated firms' requirements; data protection and emerging data sovereignty regimes (including situations requiring Data Protection Impact Assessments); and insurance-specific data quality limitations that may constrain near-term use cases such as hyper-personalised products.