On October 30, 2023, President Joe Biden signed the first-ever Executive Order (EO) that specifically directs federal agencies on the use and regulation of Artificial Intelligence (AI). A Fact Sheet for this EO is also available.
This EO is a significant milestone as companies and other organizations globally grapple with the trustworthy use and creation of AI. Previous Biden-Harris Administration actions on AI have offered guiding principles (e.g., the AI Bill of Rights) or targeted guidance on a particular aspect of AI, such as the Executive Order Addressing Racial ...
Hardly a day goes by when we don’t see some media report of health care providers experimenting with machine learning, and more recently with generative AI, in the context of patient care. The allure is obvious. But the question is, to what extent do health care providers need to worry about FDA requirements as they use AI?
This post explores how bias can creep into word embeddings like word2vec, and I thought it might be more fun (for me, at least) if I analyzed a model trained on what you, my readers (all three of you), might have written.
Often when we talk about bias in word embeddings, we are talking about such things as bias against race or sex. But I’m going to talk about bias a little bit more generally to explore attitudes we have that are manifest in the words we use about any number of topics.
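To make the idea concrete, a common way to probe bias in a trained embedding is to compare cosine similarities between a target word and two contrasting reference words. The sketch below uses small, hand-made vectors purely for illustration (real word2vec embeddings have hundreds of dimensions, and the values here are hypothetical, not drawn from any actual model):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings chosen to illustrate the pattern;
# a real word2vec model would learn these from the training text.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.2]),
    "he":     np.array([0.8, 0.2, 0.1, 0.1]),
    "she":    np.array([0.2, 0.8, 0.1, 0.1]),
}

# A simple bias probe: does "doctor" sit closer to "he" than to "she"?
# A positive score means the training text pushed "doctor" toward "he".
bias = cosine(vectors["doctor"], vectors["he"]) - cosine(vectors["doctor"], vectors["she"])
print(f"doctor bias toward 'he': {bias:+.3f}")
```

The same subtraction-of-similarities trick extends beyond race or sex: swap in any pair of contrasting reference words (say, "safe" versus "risky") to surface attitudes the training text encodes about a topic.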
Would it surprise you if I told you that a popular and well-respected machine learning algorithm developed to predict the onset of sepsis has shown some evidence of racial bias? How can that be, you might ask, for an algorithm that is simply grounded in biology and medical data? I’ll tell you, but I’m not going to focus on one particular algorithm. Instead, I will use this opportunity to talk about the dozens and dozens of sepsis algorithms out there. And frankly, because the design of these algorithms mimics many other clinical algorithms, these comments will be applicable to clinical algorithms generally.
In the absence of a federal law directly aimed at regulating artificial intelligence (AI), the Federal Trade Commission (FTC) is seeking to position itself as one of the primary regulators of this emergent technology through existing laws under the FTC’s ambit. As we recently wrote, the FTC announced the establishment of an Office of Technology, designed to provide technology expertise and support the FTC in enforcement actions. In a May 3, 2023 opinion piece published in the New York Times entitled “We Must Regulate A.I. Here’s How,” Lina Khan, the Chairperson of the FTC, outlined at least three potential avenues for FTC enforcement and oversight of artificial intelligence technology.
On February 17, 2023, the Federal Trade Commission (“FTC”) announced the creation of the Office of Technology (the “OT”), which will be headed by Stephanie T. Nguyen as Chief Technology Officer. This development comes on the heels of increasing FTC scrutiny of technology companies. The OT will provide technical expertise and strengthen the FTC’s ability to enforce competition and consumer protection laws across a wide variety of technology-related topics, such as artificial intelligence (“AI”), automated decision systems, digital advertising, and the collection and sale of data. In addition to assisting with enforcement matters, the OT will be responsible for, among other things, policy and research initiatives, and advising the FTC’s Office of Congressional Relations and its Office of International Affairs.
The success of an artificial intelligence (AI) algorithm depends in large part upon trust, yet many AI technologies function as opaque ‘black boxes.’ Indeed, some are intentionally designed that way. This charts a mistaken course.
Artificial Intelligence (“AI”) applications are powerful tools that already have been deployed by companies to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely spread to many more in the near future.
But are these companies also prepared to defend the use of AI tools should there be compliance issues at a later time? What should companies do before launching AI tools ...
The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline onerous coding and claims submission, there are also risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when applied to new patients. This can result in ...
After a Congressional override of a Presidential veto, the National Defense Authorization Act (NDAA) became law on January 1, 2021. Notably, the NDAA not only provides appropriations for military and defense purposes but, under Division E, also includes the most significant U.S. legislation concerning artificial intelligence (AI) to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA).
The NAIIA sets forth a multi-pronged national strategy and funding approach to spur AI research, development and innovation within the U.S., train and prepare an ...
On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations. CMS’s chief objectives in leveraging AI include identifying and preventing fraud, waste, and abuse. The RFI specifically states CMS’s aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.” The RFI follows last month’s White House ...
The healthcare industry is still struggling to address its cybersecurity issues as 31 data breaches were reported in February 2019, exposing data from more than 2 million people. However, the emergence of artificial intelligence (AI) may provide tools to reduce cyber risk.
AI cybersecurity tools can enable organizations to improve data security by detecting and thwarting potential threats through automated systems that continuously monitor network behavior and identify network abnormalities. For example, AI may offer assistance in breach prevention by proactively ...