The Bank for International Settlements (BIS) Innovation Hub has launched Project Noor, an initiative to give financial supervisors independent, practical tools for evaluating and interpreting the inner workings of artificial intelligence models used by banks and other financial institutions. By combining explainable AI techniques with risk analytics, the project aims to produce a prototype that helps supervisors verify model transparency, assess fairness and test robustness.

Noor is led by the BIS Innovation Hub Hong Kong Centre in collaboration with the Hong Kong Monetary Authority and the United Kingdom’s Financial Conduct Authority. It will apply explainable AI in a controlled setting to translate complex model logic into plain language and intuitive visuals while preserving privacy. The prototype is framed around common use cases such as mortgage approvals, credit card limit-setting and fraud flagging, where decisions can be difficult to explain to customers and supervisors.

The BIS notes that financial institutions remain responsible for the explainability of their models. Noor does not seek to prescribe definitive standards or replace existing practices; instead, it aims to provide methods and benchmarks that supervisors can use to form their own judgements.