The Bank for International Settlements Innovation Hub has launched Project Noor, an initiative to give financial supervisors independent, practical tools for evaluating and interpreting the inner workings of artificial intelligence models used by banks and other financial institutions. By combining explainable AI techniques with risk analytics, the project aims to produce a prototype that helps supervisors verify model transparency, assess fairness and test robustness.

Led by the BIS Innovation Hub Hong Kong Centre in collaboration with the Hong Kong Monetary Authority and the United Kingdom’s Financial Conduct Authority, Noor will apply explainable AI in a controlled setting to convert complex model logic into plain language and intuitive visuals while preserving privacy. The prototype is framed around common use cases such as mortgage approvals, credit card limit-setting and fraud flagging, where decisions can be difficult to explain to customers and supervisors. The BIS notes that financial institutions remain responsible for model explainability, and that Noor does not seek to prescribe definitive standards or replace existing practices but rather to provide methods and benchmarks that supervisors can use to form their own judgements.
Bank for International Settlements Innovation Hub, 2025-08-18
Bank for International Settlements Innovation Hub launches Project Noor to help supervisors explain and test financial AI models
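As a purely illustrative sketch of the kind of plain-language output described above, the snippet below explains a toy linear credit-scoring model by reporting each feature's additive contribution to the score. Everything here is assumed for illustration: the function names, weights, and feature names are hypothetical, and this is not Project Noor's actual prototype or methodology.

```python
# Hypothetical sketch: turning a simple linear scoring model's logic
# into per-feature contributions and a plain-language explanation.
# Weights, feature names, and thresholds are illustrative only.

def explain_decision(weights, bias, applicant, feature_names, threshold=0.0):
    """Return (approved, contributions), where contributions maps each
    feature name to its additive effect (weight * value) on the score."""
    contributions = {
        name: w * x for name, w, x in zip(feature_names, weights, applicant)
    }
    score = bias + sum(contributions.values())
    return score >= threshold, contributions

def to_plain_language(approved, contributions):
    """Render the two largest drivers of the decision as a sentence."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    verdict = "approved" if approved else "declined"
    drivers = ", ".join(
        f"{name} ({'+' if c >= 0 else ''}{c:.2f})" for name, c in ranked[:2]
    )
    return f"Application {verdict}; main drivers: {drivers}."

if __name__ == "__main__":
    names = ["income", "debt_ratio", "late_payments"]
    weights = [0.8, -1.2, -0.5]   # illustrative, not calibrated
    applicant = [1.5, 0.4, 0.0]   # standardised feature values
    approved, contribs = explain_decision(weights, -0.2, applicant, names)
    print(to_plain_language(approved, contribs))
```

For a linear model these contributions are exact; for the complex models the article discusses, post-hoc attribution methods (e.g. Shapley-value approaches) play the analogous role, which is one reason decisions from such models are harder to explain to customers and supervisors.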