The New York State Department of Financial Services (DFS) recently released an industry letter, “Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks,” in response to inquiries about Artificial Intelligence (AI) and cybersecurity. While the letter is directed at Financial Institutions (FIs) regulated by the DFS, all FIs can benefit from insight into how AI allows criminals to “…commit crimes at greater scale and speed…”. The highlights below are intended to help institutions strengthen their cybersecurity posture by enhancing threat detection and improving incident response strategies.
Other State and Federal regulators are expected to weigh in on the AI and cybersecurity dynamic. In the meantime, forward-thinking FIs can use this letter as a starting point for identifying and combating AI threats.
While the letter did not offer an exhaustive review, it emphasized four specific AI-related cyber risks of particular concern (two external and two internal) and outlined six potential controls:
External Threats:
- AI-Enabled Social Engineering – The ability of threat actors to create highly personalized, sophisticated content that is far more convincing than historical social engineering attempts.
- AI-Enhanced Cybersecurity Attacks – The ability of threat actors to amplify the potency, scale, and speed of existing types of cyberattacks.
Internal Threats:
- Exposure or Theft of Vast Amounts of Nonpublic Information (NPI) – AI engines developed or deployed internally require the collection and processing of substantial amounts of data, often including NPI and biometric data, giving threat actors a greater incentive to target those entities.
- Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies – All FIs, and particularly smaller ones, depend heavily on third-party service providers (TPSPs), who in turn depend on sub-service providers. Each link in this supply chain introduces potential security vulnerabilities that can be exploited by threat actors.
Controls to Help Combat AI Threats:
- Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans – should address AI-related risks in the following areas:
- the organization’s own use of AI,
- the AI technologies utilized by TPSPs and vendors,
- any potential vulnerabilities stemming from AI applications that could pose a risk to the confidentiality, integrity, and availability of the Covered Entity’s Information Systems or NPI, and
- the incident response, business continuity, and disaster recovery plans, which should be reasonably designed to address all types of cybersecurity events and other disruptions, including those relating to AI.
- Third-Party Service Provider and Vendor Management – should include guidelines for conducting due diligence before an institution engages a third party that will access its Information Systems and/or NPI.
- Access Controls – designed to prevent threat actors from gaining unauthorized access to a Covered Entity’s Information Systems and the NPI maintained on them. Institutions must review access privileges periodically, and at a minimum annually, to ensure each Authorized User has access only to the NPI needed to perform their job functions (a minimal sketch of such a review follows this list).
- Cybersecurity Training – conduct cybersecurity awareness training at least annually that includes social engineering training for personnel, including senior executives and Senior Governing Body (i.e., Board) members, with enhanced coverage of “deep fakes” and more sophisticated AI-driven phishing emails.
- Monitoring – must have a monitoring process in place that promptly identifies new security vulnerabilities, both internal and external, so remediation can occur quickly.
- Data Management – implement data minimization practices: FIs must dispose of NPI that is no longer necessary for business operations or other legitimate business purposes, including NPI used for AI purposes. As of November 1, 2025, institutions should also maintain and update data inventories, which are crucial for assessing potential risks and ensuring compliance with data protection regulations (see the inventory-driven sketch after this list).
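To make the access-review control concrete, the sketch below flags Authorized Users whose entitlements exceed an approved role baseline. It is a minimal illustration, not a prescribed implementation: the file names (`users.json`, `role_entitlements.json`) and field names are hypothetical stand-ins for whatever exports an institution’s identity provider actually produces.

```python
"""Minimal sketch of an annual access-privilege review.

Assumes two hypothetical JSON exports from an identity provider:
- role_entitlements.json: maps each role to its approved entitlements
- users.json: lists each Authorized User with a role and current entitlements
"""
import json


def load(path):
    with open(path) as f:
        return json.load(f)


def flag_excess_access(users, role_entitlements):
    """Return users holding entitlements beyond their role's approved baseline."""
    findings = []
    for user in users:
        approved = set(role_entitlements.get(user["role"], []))
        excess = set(user["entitlements"]) - approved
        if excess:
            findings.append({
                "user": user["id"],
                "role": user["role"],
                "excess_entitlements": sorted(excess),
            })
    return findings


if __name__ == "__main__":
    users = load("users.json")                 # hypothetical IdP export
    baseline = load("role_entitlements.json")  # hypothetical role baseline
    for finding in flag_excess_access(users, baseline):
        # Each finding is a candidate for revocation or a documented exception.
        print(finding)
```

Each flagged entitlement becomes an item for the annual review: revoke it, or document why the exception is justified.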
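Similarly, a data inventory can drive the disposal side of data minimization. The sketch below assumes a hypothetical CSV inventory (`data_inventory.csv`) with `dataset`, `contains_npi`, `last_business_use`, and `retention_days` columns; the column names and the simple retention model are illustrative assumptions, not requirements from the letter.

```python
"""Minimal sketch of an inventory-driven data-minimization check.

Assumes a hypothetical data_inventory.csv with columns:
dataset, contains_npi (yes/no), last_business_use (YYYY-MM-DD), retention_days.
"""
import csv
from datetime import date, datetime, timedelta


def disposal_candidates(inventory_path, today=None):
    """Yield names of NPI datasets whose retention window has lapsed."""
    today = today or date.today()
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["contains_npi"].strip().lower() != "yes":
                continue
            last_use = datetime.strptime(
                row["last_business_use"], "%Y-%m-%d"
            ).date()
            if today - last_use > timedelta(days=int(row["retention_days"])):
                yield row["dataset"]


if __name__ == "__main__":
    for dataset in disposal_candidates("data_inventory.csv"):
        # Candidates for secure disposal, subject to legal-hold review.
        print(f"Review for disposal: {dataset}")
```

Keeping the inventory current is what makes a check like this meaningful; stale entries will either hide NPI that should be disposed of or flag data still in legitimate use.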
The challenges associated with AI-enabled social engineering and cybersecurity attacks demonstrate the heightened capabilities of modern threat actors. Internally, the risk of data exposure and increased vulnerabilities through third-party dependencies highlights the complexities and interlinked nature of today’s digital ecosystems. The DFS letter not only underscores the urgency with which financial institutions must approach AI-related risks but also outlines pragmatic controls to combat those risks.
Although directed specifically at institutions regulated by the DFS, the insights offered are pertinent to all financial entities, providing a foundational framework to enhance cybersecurity measures in the face of rising AI threats.
In addition to this blog summary, our compliance team has identified all instances where the recent NYDFS letter specifies that institutions “must” take action. Proactive institutions can use this as a checklist to guide the development of their AI risk mitigation strategy.
Get your copy here.