The Bermuda Monetary Authority (BMA) has published a stakeholder letter summarising feedback on its July 2025 discussion paper on the responsible use of artificial intelligence (AI) in Bermuda’s financial services sector and setting out how that input is shaping ongoing policy work. The Authority emphasises that the letter does not introduce new AI-specific regulatory requirements and that its approach will remain outcomes-focused, principles-led and applied through the existing regulatory framework.

Stakeholders broadly supported a technology-neutral supervisory approach and urged the BMA to integrate AI considerations into established governance, conduct, risk management, cybersecurity, operational resilience and third-party oversight arrangements rather than creating duplicative AI-specific structures. The Authority clarified that proportionality should be anchored in the risk profile and impact of specific AI use cases, that boards remain accountable for AI outcomes and should maintain proportionate AI literacy, and that higher-risk capital markets use cases may warrant sector-specific supervisory clarification given potential behavioural and market integrity risks.

The letter also highlights cross-border alignment concerns for group-wide AI governance and flags third-party concentration and dependency risks, including potential supervisory focus on mapping critical AI-related service provider dependencies.

Next steps centre on monitoring how existing frameworks are applied to AI, continued stakeholder engagement and tracking international developments. Any enhancements to the regulatory framework would be developed incrementally and proportionately and would follow the BMA’s normal consultation process.