Privacy and Security Law

The healthcare industry is still struggling to address its cybersecurity challenges: 31 data breaches were reported in February 2019 alone, exposing the data of more than 2 million people.  However, the emergence of artificial intelligence (AI) may provide tools to reduce cyber risk.

AI cybersecurity tools can enable organizations to improve data security by detecting and thwarting potential threats through automated systems that continuously monitor network behavior and flag abnormalities.  For example, AI may assist in breach prevention by proactively searching for and identifying previously unknown malware.  By training on historical data, these applications learn to detect malware even when a specific threat has not been seen before.  Utilizing these tools may prove more effective than conventional cybersecurity practices.
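The "learn from historical data" idea can be sketched in a few lines.  The following is a toy illustration only, not any vendor's actual product: it builds a baseline of byte-pair frequencies from known-good traffic and scores new payloads by how unlike that baseline they are.  All function names and the scoring scheme are hypothetical.

```python
from collections import Counter
import math

def train_baseline(benign_samples):
    """Learn byte-pair frequencies from historical known-good payloads."""
    counts = Counter()
    for payload in benign_samples:
        counts.update(zip(payload, payload[1:]))
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def anomaly_score(payload, baseline, floor=1e-6):
    """Average negative log-likelihood: higher means less like known-good traffic."""
    pairs = list(zip(payload, payload[1:]))
    if not pairs:
        return 0.0
    return -sum(math.log(baseline.get(p, floor)) for p in pairs) / len(pairs)
```

A payload resembling the historical traffic scores low; a never-before-seen binary blob scores high and can be flagged for review, which is the intuition behind detecting malware without a known signature.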

Recently, government agencies have endorsed AI as having tremendous potential moving forward.  In December 2018, the Department of Health and Human Services (HHS) launched a pilot that combined AI, automation, and blockchain technology.  This pilot was used to create cost savings and design better contracts while also ensuring sensitive data was encrypted and secured within a cloud-based system.  Additionally, in January 2019, HHS's shared services organization began building a contract vehicle, known as the Intelligent Automation/Artificial Intelligence (IAAI) contract, which offers "a host of automation and AI technologies and support services, including robotic process automation, machine and supervised learning," to help other agencies integrate AI technologies into their workflows.  Yet, certain lawmakers continue to express concern regarding the appropriate and ethical use of AI.

Though AI is having a transformative effect on healthcare cybersecurity, there are still serious concerns regarding the technology.  First, some AI tools could be used maliciously by criminals to threaten digital and physical security; threat actors may train machines to hack systems at human or superhuman levels.  Second, organizations that rely too heavily on AI may fail to hire sufficient specialized security personnel to properly manage and oversee cybersecurity operations.  For instance, a 2018 Ponemon report found that 67 percent of IT and security professionals believed that automation was "not capable of performing certain tasks that the IT security staff can do," and roughly 55 percent believed automation cannot "replace human intuition and hands-on experience."  Thus, poorly implemented and managed AI could result in greater risk.

Given the nascent state of AI in cybersecurity, entities should approach adoption of AI with caution.  Further, successful implementation and use of AI should be predicated on first establishing policies and procedures for managing cyber risk.  Organizations should continue to maintain a team of highly skilled security personnel to oversee the implementation and use of AI tools and be on hand to make critical, real-time decisions where automation cannot resolve a cybersecurity issue.  O, brave new world….


Brian Hedgeman


Alaap B. Shah

Consumer privacy protection continues to be top of mind for regulators given a climate where technology companies face scrutiny for lax data governance and poor data stewardship.  Less than a year ago, California passed the California Consumer Privacy Act of 2018 (CCPA) to strengthen its privacy laws.  In many regards, the CCPA served as a watershed moment in privacy due to its breadth and its similarities to the E.U.'s sweeping General Data Protection Regulation (GDPR).

Yet, California continues to push the envelope.  Recently, California State Senator Jackson and Attorney General (AG) Becerra introduced a new bill (SB561) that would expand consumers' right to bring private lawsuits for violations of the CCPA.  If passed, SB561 would: (1) provide a private right of action for all CCPA violations, not just those stemming from a data breach; (2) eliminate the 30-day period for businesses to cure after receiving notice of an alleged violation; and (3) allow the AG to publish guidance materials for businesses instead of giving businesses the option to seek specific opinions of the AG.  Currently, the CCPA generally allows only the AG's office to bring actions against businesses; consumers may bring private actions only for data breaches resulting from a business's failure to implement reasonable security measures.  If SB561 passes, the CCPA would materially expose businesses to private actions for damages for other violations of the CCPA, including failure to provide consumers with the notifications the CCPA requires.

These developments are just the tip of the iceberg.  Emboldened by California's example, many other states are following suit.  As such, businesses that implement an effective CCPA compliance program will likely be positioned to satisfy potential compliance obligations in other states moving forward.  For example, Colorado recently passed a sweeping law to protect consumer data privacy (HB18-1128), which went into effect September 1, 2018.  Colorado now requires covered entities (e.g., business entities that maintain, own, or license personal identifying information (PII) in the course of their business) to implement, and ensure that third-party service providers implement, reasonable security procedures and practices.  Additionally, the law requires covered entities to develop written policies and procedures concerning the destruction of paper and electronic documents that contain PII.  Further, the law authorizes the AG to bring criminal prosecutions against covered entities that violate the new rules.

Other states, including Hawaii, Maryland, Massachusetts, New Mexico, New York, North Dakota, Rhode Island, and Washington, are also using the CCPA and the GDPR as templates for similar overhauls of their privacy laws.  As a result of this state law trend, businesses should closely monitor the legislative progress of these state bills.  Further, if businesses have not yet started shoring up their privacy and data security practices and programs, they had better do so in short order.  It is likely that many of these state laws, if passed, will carry stiff penalties for noncompliance and may subject businesses to class actions.

In addition to these piecemeal state law efforts to strengthen privacy, the U.S. Chamber of Commerce is currently exploring whether a federal consumer privacy protection law should be enacted.  It appears that the privacy tidal wave that started on California's West Coast is making its way eastward . . . .

 


Daniel Kim


Alaap B. Shah

One well-recognized way to protect patient privacy is to de-identify health data.  However, trends around increases in publicly-available personal data, data linking and aggregation, big data analytics, and computing power are challenging traditional de-identification models.  While traditional de-identification techniques may mitigate privacy risk, the possibility remains that such data may be coupled with other information to reveal the identity of the individual.

Last month, a JAMA article demonstrated that an artificial intelligence algorithm could re-identify de-identified data stripped of identifiable demographic and health information. In the demonstration, an algorithm was utilized to identify individuals by pairing daily patterns in physical mobility data with corresponding demographic data. This study revealed that re-identification risks can arise when a de-identified dataset is paired with a complementary resource.

In light of this seeming erosion of anonymity, entities creating, using and sharing de-identified data should ensure that they (1) employ compliant and defensible de-identification techniques and data governance principles and (2) implement data sharing and use agreements to govern how recipients use and safeguard such de-identified data.

De-identification Techniques and Data Governance

The HIPAA Privacy Rule (45 C.F.R. §164.502(d)) permits a covered entity or its business associate to create information that is not individually identifiable by following the de-identification standard and implementation specifications (45 C.F.R. §164.514(a)-(b)).

In 2012, the Office for Civil Rights (OCR) provided guidance on the de-identification standards. Specifically, OCR provided granular and contextual technical assistance regarding (i) utilizing a formal determination by a qualified expert (the “Expert Determination” method); or (ii) removing specified individual identifiers in the absence of actual knowledge by the covered entity that the remaining information could be used alone or in combination with other information to identify the individual (the “Safe Harbor” method).
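The Safe Harbor method can be pictured as a field-stripping exercise.  The sketch below is illustrative only: the field names are hypothetical, and the real standard (45 C.F.R. §164.514(b)(2)) enumerates 18 identifier categories with more nuance than a simple drop list, including special handling for dates and ages over 89.

```python
# Hypothetical schema; the actual Safe Harbor rule enumerates 18
# identifier categories -- this set is a simplified stand-in.
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "full_face_photo",
}

def deidentify(record):
    """Drop enumerated identifiers; generalize dates and ages per Safe Harbor."""
    out = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    # Dates narrower than a year must be generalized: retain year only.
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]
    # Ages over 89 must be aggregated into a single "90 or older" category.
    if "age" in out and out["age"] > 89:
        out["age"] = "90+"
    return out
```

The Expert Determination method, by contrast, is not a fixed checklist; it is a statistical judgment that residual re-identification risk is "very small," which is why many organizations now prefer it as datasets and computing power grow.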

As publicly available datasets expand and technology advances, ensuring the Safe Harbor method sufficiently mitigates re-identification risk becomes more difficult.  This is because more data and computing power arguably increase the risk that de-identified information could be used, alone or in combination with other information, to identify an individual who is a subject of the information.

Given the apparent practical defects in the “Safe Harbor” method, many organizations are applying a more risk-based approach to de-identification through the use of the “Expert Determination” method.  This method explicitly recognizes that risk of re-identification may never be completely removed. Under this method, data is deemed de-identified if after applying various deletion or obfuscation techniques the “risk is very small that the information could be used, alone or in combination with other reasonably available information, by an anticipated recipient to identify an individual who is a subject of the information . . . .”

In light of the residual risks associated with de-identified data generally, it is important that organizations continue to apply good data governance principles when using and disclosing such data.  These best practices should include: data minimization, storage limitation, and data security.  Organizations should also proceed with caution when linking data sets together in a manner that could compromise the integrity of the techniques used to originally de-identify the data.

Data Sharing and Use Agreements

Regardless of the de-identification approach, the lingering risk of re-identification can be further managed through contracts with third parties who receive such data.  Though not required by the Privacy Rule, an entity providing de-identified data to another party should enter into a data sharing and use agreement with the recipient.  Such agreements may include obligations to secure the data, prohibit re-identification of the data, place limitations on linking data sets, and contractually bind the recipient to pass on similar requirements to any downstream party with whom the data is subsequently shared.  Further, such agreements may include provisions prohibiting recipients from attempting to contact individuals who provided data in the set and may also include audit rights to ensure compliance.

The risk of re-identification may be a tradeoff to realize the vast benefits that sharing anonymized health data provides; however, entities creating, using, and sharing de-identified data should do so responsibly and defensibly.


Alaap B. Shah


Elizabeth Scarola

On February 11th, blockchain advocates, digital health enthusiasts, and patients received positive news from the Centers for Medicare & Medicaid Services (“CMS”) and the Office of the National Coordinator for Health Information Technology (“ONC”) regarding patient data sharing.  The two proposed rules, taken together, seek to make data more liquid, which can promote patient access, continuity of care, research, collaboration across the industry, and several other activities that previously faced challenges within a health care system built on data silos.

First, CMS published a proposed rule that seeks to increase interoperability and patient access to health records.  CMS Administrator Seema Verma explained that the proposal seeks to “break down existing barriers to important data exchange needed to empower patients by giving them access to their health data.”  Second, ONC published a proposed rule aiming to deter and penalize information blocking.  As a result of the lack of interoperability and information blocking, data sharing has been challenging across the industry, and patients have historically struggled to gain access to their health records, which health providers and payors claimed they owned.  These proposed rules take notable steps to open avenues for data sharing and shift the role of patients with respect to their own health data.

The CMS proposed rule requires Medicare Advantage (“MA”) organizations, state Medicaid and Children’s Health Insurance Program (“CHIP”) Fee for Service (“FFS”) programs, Medicaid Managed Care Plans, CHIP managed care entities, and Qualified Health Plan (“QHP”) issuers in federally facilitated exchanges (“FFE”) to (1) provide convenient access to health care records to patients, (2) support the electronic exchange of data for transitions of care as patients move between the aforementioned plan types, and (3) require participation in trust networks to improve interoperability. Additionally, the proposed rule requires Medicare-participating hospitals, psychiatric hospitals, and Critical Access Hospitals (“CAHs”) to send electronic notifications when a patient is admitted, discharged, or transferred.

The ONC proposed rule establishes conditions for maintaining electronic health record (“EHR”) certification centered on preventing information blocking and developing technical methods for data sharing.  Specifically, health IT developers will be required to (1) attest not to engage in information blocking, (2) include application programming interfaces (“APIs”) in certified EHR technology, and (3) develop common data export formats to allow for transitions of care, data sharing, and EHR switching.  It is also important to note that the proposed rule establishes seven explicit exceptions to the information blocking prohibition, including promoting the privacy and security of health information.

These rules could serve as a watershed moment in terms of data ownership, sharing and patient access.  Yet, these rules could be disruptive to the way stakeholders in healthcare have historically operated relative to each other and the patients they serve.  In any case, the regulators have sent their message . . . the “walls” must come down and data ought to flow more freely.

CMS and ONC have requested that stakeholders provide comments within 60 days of issuance of the proposed rule.


Alaap B. Shah


Ebunola Aniyikaiye

There is a new kid on the block . . . the Chief Data Officer (CDO).  There is no surprise in our data-driven world that such a role would exist. Yet, many organizations struggle with defining the role and value of the CDO. Effective implementation of a CDO may be informed by other historical evolutions in the C-Suite.

The rise of the Chief Compliance Officer (CCO) in the 2000s mirrors some of the same frustrations that organizations now face when implementing the CDO role.  While organizations were accustomed to having legal, HR, and internal audit departments working together to ensure compliance, CCOs suddenly stepped in to pull certain functions from those departments into the folds of the newly minted compliance department.  Integrating CDOs appears to follow a similar path.  Particularly in health care, the CDO role is still in flux, absorbing functionality from other departments as demand inside organizations evolves and intensifies around the financial benefits of their data pools.

Corporate evolution is challenging and often uncomfortable, but the writing is on the wall . . . there are two types of companies:  ones that are data-driven and ones that should be.  Which will you be?

What Is a Chief Data Officer?

CDO responsibilities will vary depending on the organization. Some organizations position the CDO to oversee data monetization strategies, which requires melding business development acumen with attributes of a Chief Information Officer. In some organizations, the CDO may oversee the collection of all of the company’s data in order to transform it into a more meaningful resource to power analytical tools.

A survey of CDO positions identified three common aspirations that organizations have for the role: Data Integrator, Business Optimizer, and Market Innovator. Data Integrators primarily focus on infrastructure to give rise to innovation. Business Optimizers and Market Innovators focus on optimizing current lines of business or creating new ones. These aspirations will likely vary depending on the nature and maturity of organizations. Regardless of the specific role, CDOs can help organizations bridge the widening gap between business development, data management, and data analytics.

Further, a key component of a CDO’s activity will relate to responsible data stewardship.  CDO activities will heavily depend on developing a data strategy that complies with legal, regulatory, contractual, and data governance boundaries around data collection, use, and disclosure.  CDOs should work closely with legal counsel and compliance personnel to effectively navigate these challenges.

The Importance of CDOs in Transforming Healthcare Companies

It is clear that leveraging data will be key to innovating, gaining efficiencies, and driving down costs over time.  Yet, many organizations continue to struggle with making sense of the data they possess.   For some, the CDO may be a critical driving force to advance a business into a new landscape.  Just as the CCO helped address decades of frustration with corporate ethics and practices (and was soon demanded by lawmakers and regulators), the role of the CDO has emerged in response to demand for efficiencies in business practices and the recognition that data has become the world’s most valuable commodity.

In light of the explosion of data in the healthcare industry, organizations should consider whether and how a CDO will fit into the corporate structure. Furthermore, organizations should work to understand how having a person at the table with a keen eye towards giving life to an organization’s data resources can benefit the business long term from internal and external perspectives.  The ultimate question a CDO can help solve is:  What don’t we know that, if we knew, would allow our organization to innovate or operate more efficiently or effectively?


Alaap B. Shah


Andrew Kuder

Data is king!  A robust privacy, security, and data governance approach to data management can position an organization to avoid pitfalls and maximize value from its data strategy.  In fact, some of the largest market-cap firms have successfully harnessed the power of data for quite some time.  To illustrate this point, The Economist boldly published an article entitled “The world’s most valuable resource is no longer oil, but data.”  This makes complete sense when research shows that 90% of all data today was created in the last two years, which translates to approximately 2.5 quintillion bytes of data per day.

This same trend has taken hold in the healthcare industry as it seeks to rapidly digitize and learn from data in order to bend the cost curve down, increase the quality of outcomes, and improve overall population health.  There is certainly an ever-growing pool of health data being generated by providers, payors, life sciences companies, digital health companies, diagnostic companies, laboratories, and a cornucopia of other entities.  Recent estimates indicate that the volume of healthcare data is growing rapidly: 153 exabytes were produced in 2013, and an estimated 2,314 exabytes will be produced in 2020.  This translates to an overall rate of increase of at least 48 percent annually.  But, to what end?
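The cited annual growth rate follows directly from the two data points as a compound annual growth rate; a quick back-of-the-envelope check:

```python
# CAGR implied by the figures above: 153 EB in 2013, 2,314 EB projected in 2020.
start, end, years = 153, 2314, 2020 - 2013
cagr = (end / start) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.0%}")  # roughly 47-48% per year
```

The result lands within rounding distance of the "at least 48 percent annually" figure in the estimate.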

The rapid production and aggregation of data is being met with increasing demand to access and analyze this data for a variety of purposes.  Life sciences companies want access to conduct pre-market analysis, clinical trials and post-market surveillance.  Providers want access to conduct population health research.  AdTech and marketing companies want it to . . . you guessed it . . . sell more things.  These examples are just the tip of the proverbial iceberg when it comes to the secondary data analytics market.

Nevertheless, there are various issues that must be addressed before aggregating, sharing, and using such data.

First and foremost, identifiable health data is typically treated as a sensitive class of information warranting protection.  As such, entities should consider whether their intended activities must comply with applicable privacy and security regulations.  Depending on the data being collected, the use and disclosure of such data, and the jurisdictions within which data is stored and processed, entities may be subject to a wide array of legal obligations, including one or more of the following:

  • the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”);
  • the Common Rule;
  • the EU General Data Protection Regulation (“GDPR”);
  • 42 C.F.R. Part 2;
  • state data protection and breach laws and regulations;
  • Food and Drug Administration (“FDA”) regulations; or
  • Federal Trade Commission (“FTC”) regulations.

Second, entities must consider contractual obligations, including property rights governing data collection, aggregation, use, and disclosure.  The contractual obligations that should be evaluated will depend largely on the nature of the data collected, the contemplated uses and disclosures of such data, and the applicable laws and regulations relative to such collection, use, and disclosure.  Accordingly, entities should also consider the impact of upstream and downstream agreements on rights to collect, use, or disclose data through the chain of custody.  Agreements that warrant consideration may include:

  • Master Services Agreements
  • Data Use Agreements
  • Business Associate Agreements
  • Data Sharing Agreements
  • Confidentiality/Non-disclosure Agreements
  • Terms of Use/Privacy Policies (and other representations made to consumers).

Third, even if collection, aggregation, and analysis are possible under law, regulation, and contract, companies must still consider whether additional data governance principles should be implemented to guide responsible data stewardship.  It is critical to remember that businesses that mishandle personal data can lose the trust of customers and suffer irreparable reputational harm.  To mitigate such issues, entities should consider developing data governance principles guided by fair information practices, including: openness/transparency, collection limitation, data quality, purpose specification/use limitation, accountability, individual participation, and data security.


Patricia M. Wagner


Alaap B. Shah

Recently, the U.S. Department of Health & Human Services (“HHS”) issued guidance for healthcare cybersecurity best practices.  As required under the Cybersecurity Act (CSA) of 2015, this four-part guidance was generated by a Task Group charged with the following:

  1. Examining current cybersecurity threats affecting the healthcare and public health sector;
  2. Identifying specific weaknesses that make healthcare and public health organizations more vulnerable to cybersecurity threats; and
  3. Providing certain practices that cybersecurity experts rank as most effective against such threats.

This technical assistance comes at a critical time.  Healthcare organizations, regardless of size, complexity, or sophistication, are vulnerable to cyber-attacks.  For example, while smaller organizations may think that cyber threats such as ransomware tend to affect larger organizations, approximately 58% of malware attack victims are small businesses.  Furthermore, cybersecurity attacks in 2017 cost small and medium-sized businesses an average of $2.2 million.

Most surprisingly, despite the increased frequency of cyber-attacks over the last two years, coupled with the cost of data breaches being highest in healthcare, the healthcare industry continues to lag behind in cybersecurity preparedness.  About 4-7% of total IT budgets across healthcare organizations is spent on cybersecurity, while other industries spend approximately 10-14%.  There is certainly a need and significant room for improvement across the industry.

The main volume of the new HHS guidance document cites the five most prevalent cybersecurity threats as:

  • E-mail phishing attacks;
  • Ransomware attacks;
  • Loss or theft of equipment or data;
  • Insider, accidental or intentional data loss; and
  • Attacks against connected medical devices that may affect patient safety.

The guidance document also shares ten best practices to mitigate cybersecurity threats (covered in more detail in corresponding Technical Volumes):

  • E-mail protection systems;
  • Endpoint protection systems;
  • Access management;
  • Data protection and loss prevention;
  • Asset management;
  • Network management;
  • Vulnerability management;
  • Incident response;
  • Medical device security; and
  • Cybersecurity policies.

With this new cybersecurity guidance from HHS, healthcare companies can be better equipped to strengthen their security and more effectively tackle cyber threats.  Companies should prioritize these efforts because cybersecurity preparedness can reduce patient privacy risk, protect patient safety and ultimately preserve an organization’s reputation.


Alaap B. Shah


Daniel Kim

On October 18, 2018, the FDA published Content of Premarket Submissions for Management of Cybersecurity in Medical Devices.  This guidance outlines recommendations for cybersecurity device design and labeling, as well as important documents that should be included in premarket approval submissions.  The guidance comes at a critical time, as the healthcare industry is a prime target for hackers.  On January 22, 2019, the U.S. Department of Homeland Security Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued another advisory regarding medical device vulnerabilities.  Further, a report by KLAS Research in collaboration with the College of Healthcare Information Management Executives (CHIME) found that 18 percent of healthcare organizations reported that their medical devices were hit by malware or ransomware.  Many experts are also projecting that more cyber-attackers will target devices in 2019.

The FDA has recognized cybersecurity risk related to medical devices for quite some time, and has taken this step to further protect patients from such risks.  Other organizations have also taken aim at this issue, such as the National Institute of Standards and Technology (NIST) issuing guidance related to telehealth monitoring devices.  However, medical device manufacturers may continue to struggle to address these risks in design, development and implementation.  As a result, with Internet of Things (IoT)-enabled device innovation continuing to expand and the expectation of new threats, it is imperative that medical device consumers and manufacturers keep pace to ensure device network security.

There are several complexities involved in securing medical devices.  First, many devices no longer function as stand-alone components in healthcare settings, as they are being integrated into the health care IoT.  Second, an increasing number of medical devices are network-connected and transmit sensitive patient data through other wired or wireless components.  These two factors bring quality improvements, convenience, and flexibility to physicians and patients, but they can also introduce new security vulnerabilities that could adversely affect clinical operations and put patients at risk.

The FDA guidance addresses a number of key areas of risk.  In particular, the guidance recognizes vulnerabilities stemming from insufficient access control safeguards for medical devices.  For instance, administrators often assign the same password to multiple devices, which could provide unauthorized access to each device and its data.  Additionally, the FDA noted that data transmitted through the devices is not always encrypted, which could allow unauthorized individuals to intercept and even modify clinical information, impacting patients’ privacy and/or safety.  Finally, a number of devices are vulnerable to malware without the ability to apply security patches.

To reduce risk, there are several measures that can be implemented to enhance device security.  For instance, hospitals and health systems should include medical devices in security risk analyses and risk management plans.  Additionally, organizations should thoroughly evaluate security risks related to devices and vendors before purchasing devices (e.g., requesting disclosure of device cybersecurity properties).  As for device manufacturers, enhanced security should be baked into devices to monitor device networks and ensure device access is limited to authorized users.

EBG will continue to keep an eye on how the industry reacts and implements the FDA’s guidance over time.


Brian Hedgeman


Alaap B. Shah

According to a report by West Monroe Partners, approximately 40% of companies engaged in corporate transactions reported finding a cybersecurity issue during post-acquisition integration of the target company.  While companies routinely conduct robust transactional due diligence to manage legal risk, many fail to adequately conduct cybersecurity due diligence. As a consequence, many companies and investors are leaving themselves vulnerable to potentially severe latent cyber risks.

Cybersecurity is especially relevant in healthcare transactions as the industry continues to be riddled with cyber-attacks.  Protenus Breach Barometer reports that healthcare has been the most targeted industry over the last few years, with 1.13 million, 3.15 million, and 4.4 million patient records compromised in the first three quarters of 2018, respectively, and more than half of breaches occurring due to hacking.  The cat is out of the bag.  Healthcare entities usually amass very lucrative personal data – social security numbers, demographic information, health insurance records, and prescription information – making them attractive targets for hackers.

Despite the high frequency of cyber-attacks in the industry, many healthcare entities spend only half as much to improve security protections when compared to other industries.  As a result, these companies remain vulnerable to cyber threats.  In the case of a breach, companies could face penalties from government agencies as well as class action lawsuits. Cyber risks may intensify during acquisitions, as the likelihood of a breach increases with the expansion of the overall cyber footprint.  Further, in a transaction, the target company’s vulnerabilities ultimately become an issue for the acquiring company.  Thus, if the target entity does not have adequate safeguards to protect patient records, then the acquiring company is at financial and reputational risk for those failings.

Given the potential risks, it is important that acquiring companies prioritize cybersecurity as an integral part of due diligence efforts.  An effective due diligence process should at a minimum evaluate cybersecurity preparedness and risks related to the following: 1) current state of risk assessment; 2) technical security features of business critical information systems and network architecture; 3) implementation of policies and procedures related to information security; 4) policies and procedures related to detecting, responding to, and recovering from cyber incidents; and 5) historical indicators of legal and regulatory compliance issues related to cybersecurity.


Alaap B. Shah


Eric W. Moran


Brian Hedgeman

As 2019 begins, companies should seriously consider the financial and reputational impacts of cyber incidents and invest in sufficient and appropriate cyber liability coverage.  According to a recently published report, incidents of lost personal information (such as protected health information) are on the rise and are costing companies significantly.  Although cyber liability insurance is not new, many companies lack sufficient coverage.  RSM US LLP, NetDiligence 2018 Cyber Claims Study (2018).

According to the 2018 study, cyber claims are impacting companies of all sizes, with revenues ranging from less than $50 million to more than $100 billion.  Further, the average total breach cost alone is $603.9K.  This does not include crisis services costs (average $307K), legal costs (defense = $106K; settlement = $224K; regulatory defense = $514K; regulatory fines = $18K), or the cost of business interruption (all costs = $2M; recovery expense = $957K).  In addition to these financial costs, the reputational impact stemming from cyber incidents can materially set companies back for a long period of time after the incident.

Companies can reduce risk associated with cyber incidents by developing and implementing privacy and security policies, educating and training employees, and building strong security infrastructures.  Nevertheless, there is no such thing as 100% security, and thus companies should consider leveraging cyber liability insurance to offset residual risks.  With that said, cyber liability coverages vary across issuers and can contain many carve-outs and other complexities that can prevent or reduce coverage.  Therefore, stakeholders should review their cyber liability policies to ensure that they understand the terms and conditions of such policies.  Key items to evaluate include: coverage levels per claim and in the aggregate, retention amounts, notice requirements, exclusions, and whether liability arising from malicious third-party conduct is sufficiently covered.

While cyber liability insurance will not practically reduce the risk of a cyber incident, it is increasingly a critical component of a holistic risk mitigation strategy given the world we live in.


Alaap B. Shah


Daniel Kim