The Bank of Italy published the research paper “Chat Bankman-Fried? An Exploration of LLM Alignment in Finance”, examining the “alignment problem” in large language models (LLMs) by testing whether they act consistently with fiduciary duty in simulated financial decision-making. The study prompts various LLMs to impersonate the CEO of a financial institution and tests their willingness to misappropriate customer assets to repay corporate debt. After assessing a baseline scenario, the authors adjust the models’ stated preferences and incentives, finding significant heterogeneity in baseline behaviour and responses to changes in risk tolerance, profit expectations, and regulation that match predictions from economic theory. The paper argues that simulation-based testing can help regulators assess LLM safety, but should be complemented by analysis of internal LLM mechanics, and it highlights the need for appropriate risk-governance frameworks for LLM use within financial institutions.