Anthropic’s new initiative, “Project Glasswing,” announced in April 2026, reflects a significant development in the cybersecurity landscape that should command the immediate attention of every C-suite leader, privacy officer, information security professional, and compliance executive in health care and life sciences, financial services, and other critical infrastructure industries, as well as their legal counsel.

Project Glasswing is a coalition of leading technology and cybersecurity providers united around a single urgent objective: deploying frontier artificial intelligence (AI) capabilities, using Anthropic’s unreleased Mythos Preview AI model, for defensive cybersecurity before malicious actors can exploit similar capabilities offensively to attack first-party and open source software.

The Mythos Preview model and similar autonomous AI capabilities in current and future tools are transforming the cybersecurity risk landscape, given the rapid recent advances in, and accessibility of, AI. The Mythos Preview model was able to detect thousands of critical, previously unknown security vulnerabilities, including flaws in every major operating system and web browser. That software is essential to our interconnected electronic systems and our ability to communicate securely. Some of those vulnerabilities, according to Anthropic, had survived undetected for decades.

That model is now being deployed as part of Project Glasswing to a select group of organizations under carefully controlled conditions to protect the world's most important and foundational software. The initiative explicitly acknowledges, however, that if these capabilities are not harnessed for defense now, they could be weaponized against critical infrastructure—including health care, financial services, and the Internet. Although AI-driven threat detection and other defensive platforms are well-established solutions, Project Glasswing foreshadows a new era where autonomous AI becomes a potentially omnipotent weapon in the wrong hands.

All critical infrastructure organizations should reassess their cybersecurity governance models and information risk frameworks and processes to ensure that they remain legally compliant as AI reshapes the cyberthreat landscape. Although AI technologies are becoming ever more powerful and impactful, existing legal obligations already guide and require organizational compliance measures to protect sensitive data and communications.

The AI Threat Landscape Has Been Rapidly Evolving

As we previously analyzed here, Anthropic itself reported the first large-scale cyberattack executed without substantial human intervention: a fully automated campaign targeting technology companies, financial institutions, manufacturers, and government agencies. This event showcased the convergence of multi-modal AI and agentic AI to launch a sophisticated and alarming automated cyberattack. Specifically, agentic AI capabilities allowed the threat actor to execute 80 to 90 percent of tactical operations independently across reconnaissance, vulnerability discovery, exploitation, lateral movement, and data exfiltration. The FBI also reported that in 2025 it received 22,000 complaints referencing the use of AI, resulting in nearly $900 million in losses. We anticipate losses attributable to AI attacks against our employment, health care, technology, and other critical infrastructure clients to grow exponentially as AI tools are increasingly used against institutions and become available to a wider range of attackers.

Such attackers may be foreign. On April 23, 2026, the Executive Office of the President, Office of Science and Technology Policy issued a memorandum for the heads of executive departments and agencies warning them of threats from foreign entities engaged in “deliberate, industrial-scale campaigns” to attack U.S. frontier AI systems, “leveraging tens of thousands of proxy accounts to evade detection[.]” These methods can be applied to virtually any institution and should be anticipated and proactively addressed.

The legal and operational implications for health care, financial services, and technology organizations are severe. Health care systems, for example, are increasingly digital, interconnected, and powered by a complicated supply chain of vendors and technology. This sprawling digital ecosystem powered by sensitive patient information represents the kind of high-value, complexity-rich environment that agentic attackers are designed to exploit. Although many organizations seek to patch vulnerabilities promptly as part of good cybersecurity hygiene, the current speed and cadence of such patching may leave a gap in the age of AI.

Of course, an organization cannot patch what it is not aware of, which is one of the chief concerns highlighted by Project Glasswing. Current practices may not be enough because the time between the discovery of a vulnerability and its exploitation is shrinking to the point where AI may enable the immediate exploitation of previously undiscovered, so-called “zero-day,” vulnerabilities, which have historically led to some of the largest potential data breaches (e.g., Log4j). At the same time, as companies increasingly leverage AI for business purposes, they necessarily rely on greater connectivity of systems and data, presenting a larger attack surface for AI tools to potentially exploit.
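
To make the patch-cadence concern concrete, the sketch below (in Python, with hypothetical severity bands and remediation windows; actual windows should come from an organization’s own written policy) flags vulnerability findings whose remediation deadline has lapsed:

```python
from datetime import date

# Hypothetical remediation SLAs (in days) by severity band -- illustrative
# only; an organization's policy sets the real windows.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}


def severity(cvss: float) -> str:
    """Map a CVSS v3 base score to a severity band."""
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"


def overdue(findings: list[dict], today: date) -> list[str]:
    """Return the IDs of findings whose remediation window has lapsed."""
    late = []
    for f in findings:
        window = SLA_DAYS[severity(f["cvss"])]
        if (today - f["discovered"]).days > window:
            late.append(f["id"])
    return late
```

Running a check like this on a continuous schedule, rather than quarterly, is one way to document the tighter patch cadence that an AI-compressed exploitation timeline may demand.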

In addition to software vulnerabilities, we have highlighted previously here and here that AI has become increasingly sophisticated in social engineering and other identity-based exploitation, enabling attackers to defeat biometric authentication mechanisms through deepfakes that synthesize voice, facial, and behavioral data. These multi-modal AI-powered attacks are leading to increased spoofing of biometric identity verification, a common multi-factor authentication method in health care and financial services.

Clearly, the types and volume of cyber risks to organizations are escalating in a new era where AI becomes prevalent and accessible.

Existing Legal and Regulatory Standards Require Consideration of AI Risks

Organizations that collect and process protected data, including health, financial, employment, and other identifying information, operate under overlapping state and federal cybersecurity and privacy requirements. As we have discussed in many of our prior writings, these existing legal obligations, such as the New York SHIELD Act, the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), and the New York State Department of Financial Services (NYDFS) Cybersecurity Regulation, among many others, generally require that organizations act reasonably to address anticipated cybersecurity threats. Health care and life sciences organizations, for example, operate under layers of cybersecurity regulatory obligations that explicitly require risk-based safeguards responsive to the evolving threat environment. HIPAA’s Security Rule demands administrative, physical, and technical safeguards calibrated to current risk. The HITECH Act amplifies breach consequences. State frameworks, including the SHIELD Act, the NYDFS Cybersecurity Regulation, and the California Privacy Protection Agency (CPPA) regulations, impose risk-based information security program requirements on covered organizations.

These regulatory regimes require organizations to consider risks from emerging cybersecurity threats. As noted above, effective AI-powered attacks are no longer emerging; they have arrived, and organizations need to consider appropriate countermeasures under these regulatory frameworks. Recognized security practices, such as those described in the National Institute of Standards and Technology (NIST) Cybersecurity Framework 2.0 and NIST’s AI Risk Management Framework, merit renewed emphasis in light of the evolving threat landscape and technological advancement. Accordingly, boards and executive leadership should ask whether their existing safeguards are sufficient and what role AI should play in proactive cybersecurity efforts, and risk assessments should contemplate agentic and other AI attack vectors (such as deepfakes and identity-based attacks). For example, organizations conducting routine HIPAA Security Rule risk analyses should consider accounting for AI-augmented vulnerability exploitation, because regulators will likely expect organizations to take reasonable steps to prepare for these foreseeable and documented AI threats.

Key Next Steps for Health Care and Life Sciences, Financial Services, and other Critical Infrastructure Organizations

Some steps that organizations that operate or maintain critical software infrastructure should take include:

  • Maintaining a written information security program that evaluates risks associated with AI-specific cyber threats in accordance with recognized frameworks and guidance, including the Open Worldwide Application Security Project (OWASP) Agentic AI and LLM Top 10 frameworks, NIST AI Risk Management guidance, and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) risk management guidance.
  • Evaluating biometric systems, multi-factor authentication (MFA) implementations, and identity verification workflows against the risk from multi-modal AI attack scenarios as part of routine risk assessments. Behavioral biometrics and continuous authentication architectures—rather than static biometrics alone—may provide meaningfully stronger defenses against AI-synthesized identity spoofing targeting clinical, administrative and workforce systems.
  • Funding automated AI-driven vulnerability detection and response tools. The defensive AI tools being made available through Project Glasswing may help identify complex vulnerabilities that prior-generation automated tools consistently miss. Organizations should accelerate procurement evaluation and deployment of AI-augmented vulnerability detection, particularly for legacy systems managing Protected Health Information and clinical operations, as well as any other sensitive or personal data subject to existing regulations.
  • Revising incident response playbooks to prepare for autonomous AI-driven attacks.
  • Sensitizing and training their workforce to AI-powered threats, particularly as to social engineering, identity spoofing and business email compromises.
  • Hiring trained cybersecurity professionals who can effectively manage and document the organization’s defensive measures and risk judgments as to AI.
  • Managing supply chain risk through effective and tailored contractual arrangements. Business Associate Agreements and technology vendor contracts should be reviewed for AI cybersecurity representations, breach notification obligations, and indemnification provisions that contemplate AI-augmented threat scenarios.
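
Several of the steps above come down to maintaining a documented, prioritized inventory of AI-related threats. The sketch below illustrates one simple way such a risk register might be scored in Python; the likelihood-times-impact scale is purely illustrative, not a prescribed methodology:

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """One line of a risk register: a threat and its assessed ratings."""
    threat: str
    likelihood: int  # 1 (rare) .. 5 (near-certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; many programs use
        # weighted or qualitative scales instead.
        return self.likelihood * self.impact


def prioritized(register: list[RiskEntry]) -> list[RiskEntry]:
    """Sort entries so the highest-scoring risks are addressed first."""
    return sorted(register, key=lambda e: e.score, reverse=True)
```

However it is scored, the point for compliance purposes is the documentation: a dated, ranked register showing that agentic AI, deepfake, and identity-based attack vectors were considered and addressed.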

The Strategic and Legal Imperative: Defense Cannot Wait

For health care and life sciences, financial services, and other critical infrastructure organizations, the calculus is both legal and existential. Patient data, sensitive financial data, clinical operations, intellectual property, employee data, and medical device integrity are all at stake. Project Glasswing is a signal that the cybersecurity industry recognizes it is in a race with no finish line in sight. Organizational leadership should recognize that they are in that race too, whether they choose to be or not.

The organizations that engage now with AI-powered defensive capabilities, modernize their risk frameworks, and partner with the right cybersecurity-minded stakeholders will be materially better positioned to face the risks that an AI-powered threat landscape poses. Those that wait for these threats to materialize against their organizations may find themselves explaining to regulators, plaintiffs, consumers, and patients why they did not act reasonably when the warning signs were this clear. Measures to address this new era should be solidly grounded in existing legal obligations and frameworks, which provide the best defense for addressing and responding to AI-powered attacks.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.
