The Bermuda Monetary Authority published a discussion paper on the responsible use of artificial intelligence across Bermuda’s financial services sector, setting out proposed supervisory expectations and inviting industry feedback to inform future guidance. The paper proposes a principles-based, outcomes-focused approach that integrates AI governance into existing risk frameworks and assigns ultimate accountability for AI outcomes to boards of directors. The proposals centre on risk-proportionate governance, including an expectation that firms identify and maintain inventories of their AI systems and assess them across five dimensions: impact severity, autonomy and human oversight, complexity and explainability, data sensitivity, and deployment context and scale. The paper outlines expectations for data governance (including compliance with Bermuda’s Personal Information Protection Act 2016 and cross-border data considerations), model selection and independent validation, monitoring and change management (including incident response), transparency and fairness controls calibrated to stakeholder needs, and third-party AI due diligence and concentration risk management. It also highlights specific controls for generative AI and emerging agentic AI, and sets out AI-specific cybersecurity and operational resilience considerations, including an emergency-override capability for high-risk and critical applications. Submissions are due by 30 September 2025. The Authority plans to analyse feedback in Q4 2025, hold follow-up consultations and industry workshops in Q1 2026, and publish a final proposal in Q3 2026.
Bermuda Monetary Authority 2025-07-30
Bermuda Monetary Authority consults on outcomes-based supervisory expectations for responsible AI use in financial services
The Bermuda Monetary Authority released a discussion paper on responsible AI use in financial services, proposing supervisory expectations and seeking industry feedback. It advocates a principles-based, outcomes-focused approach that integrates AI governance into existing risk frameworks and assigns ultimate accountability for AI outcomes to boards of directors. Key proposals include risk-proportionate governance, data governance, model validation, transparency controls, and specific controls for generative and agentic AI.