The Financial Conduct Authority, the Bank of England and HM Treasury have published a joint statement warning that frontier AI models are materially increasing cyber risk for regulated firms and financial market infrastructures: the models' cyber capabilities already exceed those of a skilled practitioner, and they operate at greater speed and scale and at lower cost. The statement introduces no new expectations; instead, it says firms should be taking active steps under existing operational resilience rules and expectations to strengthen their prevention, detection, containment, response and recovery capabilities.

The statement focuses on governance, vulnerability management, third-party risk, protection, and response and recovery. On governance, boards and senior management should understand frontier AI risks and reflect them in strategy, investment and resourcing, including the exposure created by end-of-life or unsupported systems and the adequacy of insurance. On vulnerability management and protection, firms should accelerate the triage, risk assessment and remediation of vulnerabilities, including through automation where appropriate; improve oversight of third parties, supply chains and open-source software; strengthen access management, network security and data protection; and consider automated or AI-enabled defences. For response and recovery, firms are directed to the effective cyber resilience practices published by the Bank of England, the Prudential Regulation Authority and the Financial Conduct Authority in October 2025.

The Government and the UK financial authorities said they will continue to monitor frontier AI developments and engage with industry through the Cross Market Operational Resilience Group. Firms are also directed to relevant material from that group and from the National Cyber Security Centre, including guidance on patch waves, defensive readiness for frontier AI and the use of AI models to find vulnerabilities.
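The statement's call to accelerate vulnerability triage "through automation where appropriate" could, in practice, start with something as simple as scripted prioritisation of scanner findings. The sketch below is purely illustrative and not drawn from the statement: the field names, the 1-5 asset-criticality scale and the weighting heuristic are all assumptions a firm would replace with its own risk model.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """One scanner finding (hypothetical fields, for illustration only)."""
    cve_id: str
    cvss_score: float        # base severity, 0.0-10.0
    asset_criticality: int   # 1 (low) to 5 (business-critical), assigned by the firm
    exploit_known: bool      # public exploit code has been observed

def triage_priority(v: Vulnerability) -> float:
    """Composite score: severity weighted by asset importance, boosted when
    a public exploit exists. An illustrative heuristic, not a regulatory formula."""
    score = v.cvss_score * v.asset_criticality
    if v.exploit_known:
        score *= 1.5
    return score

def prioritise(findings: list[Vulnerability]) -> list[Vulnerability]:
    """Order findings so the highest-risk items are remediated first."""
    return sorted(findings, key=triage_priority, reverse=True)
```

In a real deployment the same ordering logic would feed a patching queue or ticketing system, so that remediation effort tracks actual exposure rather than scan order.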