In the absence of a federal law directly aimed at regulating artificial intelligence (AI), the Federal Trade Commission (FTC) is seeking to position itself as one of the primary regulators of this emergent technology through existing laws under the FTC’s ambit. As we recently wrote, the FTC announced the establishment of an Office of Technology, designed to provide technology expertise and support the FTC in enforcement actions. In a May 3, 2023 opinion piece published in the New York Times entitled “We Must Regulate A.I. Here’s How,” Lina Khan, the Chairperson of the FTC, outlined at least three potential avenues for FTC enforcement and oversight of artificial intelligence technology.

First, Chairperson Khan signals a focus on antitrust enforcement. She notes that a “handful of powerful businesses” control the “raw materials” needed to develop and deploy AI, such as data, cloud services, and silicon used for machine learning. The op-ed suggests that the FTC will be scrutinizing these businesses through the lens of antitrust laws to ensure they do not use market power to discriminate against downstream rivals. Additionally, the FTC may seek to use antitrust laws to regulate price-setting algorithms that can “facilitate collusive behavior” and potentially inflate consumer prices.

Second, Chairperson Khan cautions against use of AI that might be considered a deceptive act or practice under the FTC Act, such as using chatbots to generate spear-phishing emails, fake websites, or fake consumer reviews. Notably, Chairperson Khan suggests that the FTC may seek to hold “upstream firms” accountable for such conduct and not just the “fly-by-night scammers” using an AI tool. Earlier this year, the FTC cautioned against deceptive statements in connection with the advertising of AI tools. Chairperson Khan’s recent op-ed suggests a more aggressive approach in potentially seeking to hold the developers of generative AI tools liable for the conduct of malicious actors using the tools for improper means.

Third, Chairperson Khan highlights potential concerns with using large data sets to train AI. As the FTC has previously written, training algorithms on large data sets might implicate anti-discrimination laws (such as the Equal Credit Opportunity Act) if the outputs are discriminatory. Chairperson Khan’s recent op-ed also argues that the use of large data sets in AI might violate “existing authorities proscribing exploitative collection or use of personal data.” As we previously wrote, the FTC has pursued the remedy of algorithmic disgorgement where training data for AI was collected or used without adequate authority. It should be noted, however, that the FTC’s authority to regulate the sale and use of personal data as an “unfair” act or practice under the FTC Act remains contested, as indicated by a recent dismissal of an FTC complaint (with leave to amend) against a geolocation data broker.

The recent emergence of generative AI raises a host of legal issues covering a wide array of industries, and the FTC is not the only federal agency attempting to regulate conduct in this area. Recently, a joint statement issued by Chairperson Khan together with the heads of the Equal Employment Opportunity Commission, Consumer Financial Protection Bureau, and Department of Justice Civil Rights Division highlighted their intent to monitor the use of automated systems for potential discrimination harms. However, while the White House Office of Science and Technology Policy has released non-binding guidance addressing the use of AI, and some state and local regulators are beginning to focus on automated decision-making systems, there are as yet no targeted federal laws governing the use of AI. Chairperson Khan’s recent op-ed shows how the FTC might use existing consumer protection laws to regulate in this field, where innovation continues to outpace regulation.

Epstein Becker Green will be closely following these developments. For additional information about the issues discussed above, or if you have any other questions or concerns regarding the FTC, please contact the Epstein Becker Green attorney who regularly handles your legal matters, or one of the authors of this blog post. 
