In remarks published by the Federal Reserve Board from a Financial Stability Oversight Council roundtable on cybersecurity and risk management, Vice Chair for Supervision Michelle Bowman outlined a supervisory approach to banks' use of artificial intelligence that prioritizes material financial risk and avoids impeding the adoption of new tools.

She noted that the Federal Reserve, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation recently amended their model risk management guidance to clarify that it does not apply to generative or agentic AI. The revised guidance now covers only traditional models and basic AI applications; other governance and risk-management practices are expected to support the use of newer AI tools. Bowman added that the Fed is working to update and simplify its third-party risk-management guidance to better reflect actual and emerging risk.

On cybersecurity, she pointed to Anthropic's Mythos model as an example of technology that can both accelerate vulnerability detection and enable malicious exploitation, and noted that Secretary Bessent and Chair Powell had convened the largest banks in April to discuss its implications.

Separately, Bowman said the Financial Stability Board's Standing Committee on Supervisory and Regulatory Cooperation is preparing a report, to be issued for stakeholder comment, on sound practices for AI adoption, use, and innovation. A consultation draft, expected in the third quarter, is intended to cover both benefits and challenges, including whether greater international consistency is needed in AI expectations for cybersecurity and critical infrastructure.