The Bank for International Settlements published a Financial Stability Institute (FSI) Occasional Paper on how regulators can respond to limited explainability in complex artificial intelligence models used by financial institutions. It sets out why explainability underpins transparency, accountability and compliance, and why existing model risk management (MRM) expectations become harder to meet as advanced AI is deployed in critical business activities and, in some cases, for regulatory purposes. The paper finds that global standard-setting guidance on MRM is largely high level and that only a small number of national authorities have detailed MRM guidelines; these often focus on regulatory-capital models and tend to address explainability only implicitly, through requirements on governance, documentation, validation, deployment monitoring and independent review.

The paper details how deep learning and large language models can be difficult to explain and reproduce, with third-party and proprietary models further constraining transparency. It reviews common post hoc techniques such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME) and counterfactual explanations, alongside their limitations, including inaccuracy, instability, the lack of ground truth and susceptibility to misleading explanations.

Potential regulatory adjustments discussed include setting use-case-based explainability standards aligned to model risk, requiring a suite of explainability methods and documentation that meet the needs of different stakeholders, and explicitly recognising trade-offs between explainability and performance, with compensating safeguards such as enhanced validation and monitoring, stronger data governance, human oversight and circuit breakers. For regulatory-capital use cases, the paper raises options such as restricting complex models to certain risk categories or exposures, or applying output floors, and it points to the need for supervisors to upskill in order to assess firms' explainability submissions.
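
To make the post hoc techniques mentioned above concrete, the following is a minimal, illustrative sketch (not taken from the paper) that computes SHAP attributions for a toy scoring model and then runs a naive counterfactual search. The synthetic data, model choice, threshold and the `counterfactual` helper are all assumptions made for this example, and the third-party shap library's API may differ across versions.

```python
# Illustrative only: a hedged sketch of post hoc explainability on a toy model.
# The data, model and thresholds are invented for this example and are not
# drawn from the BIS/FSI paper. Requires scikit-learn and the shap package.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
import shap

# Toy "scoring" model fitted on synthetic data.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP: additive per-feature attributions for an individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])        # shape (1, n_features)
print("prediction:", model.predict(X[:1])[0])
print("base value:", explainer.expected_value)
print("per-feature SHAP attributions:", shap_values[0])

# Naive counterfactual search (illustration, not a production method):
# nudge a single feature until the prediction crosses a chosen threshold.
def counterfactual(x, feature, threshold, step=0.1, max_iter=200):
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] >= threshold:
            return x_cf
        x_cf[feature] += step
    return None  # no counterfactual found within the search budget

x0 = X[0]
cf = counterfactual(x0, feature=0, threshold=model.predict(X[:1])[0] + 10.0)
if cf is not None:
    print("change in feature 0 needed to cross the threshold:", cf[0] - x0[0])
```

Even in this toy setting, the attributions depend on the explainer's background assumptions and the counterfactual on the chosen threshold and step size, which illustrates the stability and ground-truth concerns the paper flags.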