The Bank of Italy published a research paper, “Chat Bankman-Fried? Notes on the ethics of artificial intelligence in the financial sector”, examining whether large language models (LLMs) behave in line with fundamental ethical norms when placed in simulated finance-related decision scenarios. The study finds significant heterogeneity across models and that only a minority choose an ethical course of action when no explicit constraints are imposed. The research asks LLMs to role-play as the chief executive officer of a financial intermediary and tests whether they would improperly appropriate client funds to repay corporate debts. After a baseline scenario, the analysis varies preferences and incentives, including risk tolerance, profit expectations and the regulatory framework, with responses broadly aligning with predictions from economic theory. The paper notes that simulation-based evaluations can support authorities responsible for ensuring LLM safety, but should be complemented by analysis of the models’ internal mechanisms, and it highlights the need for financial institutions to adopt an adequate governance framework for risks arising from LLM use.