After a Congressional override of a Presidential veto, the National Defense Authorization Act (NDAA) became law on January 1, 2021. Notably, the NDAA not only provides appropriations for military and defense purposes but, under Division E, also includes the most significant U.S. legislation concerning artificial intelligence (AI) to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA).
The NAIIA sets forth a multi-pronged national strategy and funding approach to spur AI research, development and innovation within the U.S., train and prepare an AI-skilled workforce for the integration of AI throughout the economy and society, and establish a pathway to position the U.S. as a global leader in the development and adoption of trustworthy AI in the public and private sectors. Importantly, the NAIIA does not set forth merely lofty goals, but rather, legislates concrete matters of critical importance for economic and national security.
With a new Administration in place, and increasing global competition to develop AI and related guidelines, this is undoubtedly a pivotal moment. AI will continue to transform every industry and workplace, and every facet of our day-to-day lives. It is important to become familiar with the NAIIA and to consider its long-term impact on society, including the legal and ethical ramifications if its goals are not met. To understand the legal, regulatory, and business challenges associated with AI, organizations should gain a working understanding of the NAIIA and keep apprised of developments as the newly formed governing bodies created under the law begin their work.
National AI Initiative
The NAIIA aims to achieve its goals through a Presidential National AI Initiative involving coordination among the civilian agencies, the Department of Defense and the Intelligence Community and by engaging the public and private sectors through various key activities, including, but not limited to:
- Funding, cooperative agreements, testbeds, and access to data and computing resources to support research and development;
- Educational and training programs to prepare the workforce to create, use, and interact with AI systems;
- Interagency planning and coordination of Federal AI research, development, demonstration, and standards engagement;
- Outreach to diverse stakeholders such as citizen groups, industry, civil rights and disability rights organizations for input on initiatives;
- Support for a network of interdisciplinary AI research institutes; and
- Support for opportunities for international cooperation around AI research and development.
To drive toward these goals, the NAIIA mandates the establishment of several governance bodies. First, on January 12, 2021, pursuant to the NAIIA, the White House Office of Science and Technology Policy (OSTP) established the National Artificial Intelligence Initiative Office (AI Initiative Office). Second, the NAIIA requires the creation of an Interagency Committee, with various subcommittees on AI, to coordinate federal activities and create a strategic plan for AI (including with regard to research and development, education, and workforce training). Third, the law mandates that the Secretary of Commerce, in consultation with the Director of OSTP, the Secretary of Defense, the Secretary of Energy, the Secretary of State, the Attorney General, and the Director of National Intelligence, establish a National Artificial Intelligence Advisory Committee, composed of appointed members representing broad and interdisciplinary expertise and perspectives, to advise the President and the AI Initiative Office on matters related to the AI Initiative. Fourth, the NAIIA requires the Director of the National Science Foundation and the OSTP to establish the National AI Research Resource Task Force.
In particular, the National Artificial Intelligence Advisory Committee will advise on research and development, ethics, standards, education, security, AI in the workplace and the consequences of technological displacement, and other economic and societal issues addressed by the Initiative. Further, the body will advise on oversight of AI through regulatory and nonregulatory approaches that balance innovation and individual rights. The Committee will also establish a subcommittee on AI in law enforcement, which will address issues such as bias and the proper use of facial recognition, as well as data security and the use of AI consistent with privacy, civil rights, and disability rights.
National Academies AI Impact Study on the Workforce
The NAIIA also requires the National Science Foundation to contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study on the current and future impact of AI on the U.S. workforce. The study will include input from various stakeholders in the public and private sectors and result in recommendations regarding the challenges and opportunities presented. It will address the impact of increased use of AI, automation, and related trends on the workforce; the related workforce needs and employment opportunities; and the research gaps and data needed to address these issues. The results will be published in a report to several Congressional Committees and made publicly available by January 1, 2023.
Funding of AI Initiatives
A hallmark of the NAIIA is its commitment to inject the economy with funding to boost AI efforts. In total, the NAIIA pumps approximately $6.4 billion into AI activities under the Initiative. This funding is earmarked in a variety of ways. For example, the National Institute of Standards and Technology (NIST) is authorized by Congress to spend almost $400 million over five years to support the development of frameworks for research and development best practices and voluntary standards for AI trustworthiness, including:
- Privacy and security (including for data sets used to train or test AI systems, and software and hardware used in AI systems);
- Computer chips and hardware designed for AI systems;
- Data management and techniques to increase usability of data;
- Development of technical standards and guidelines to test for bias in AI training data and applications;
- Safety and robustness of AI to withstand unexpected inputs and adversarial attacks;
- Auditing mechanisms;
- Applications of machine learning and AI to improve science and engineering; and
- Model and system documentation.
NIST will also work on the creation of a risk management framework and standardized data sets for AI training, partner with research institutes to test AI measurement standards, and develop data-sharing best practices.
The National Science Foundation will receive almost $4.8 billion over five years to fund research and education in AI systems and related fields (including K-12, undergraduate, and graduate programs), the development and deployment of trustworthy AI, workforce training, and the development of a diverse AI workforce pipeline. The National Oceanic and Atmospheric Administration will receive $10 million in 2021 toward its AI Center. Subject to the availability of funding, the Director of the National Science Foundation will establish a program to award financial assistance for the planning and establishment of a network of AI Institutes for research and development and the attainment of related goals set forth under the NAIIA. These AI Institutes would be eligible to receive funding to manage and make available data sets for training and testing AI, develop AI testbeds, conduct specific research and education activities, provide or broker access to computing resources and technical assistance, and conduct other collaborative outreach activities.
There have been tremendous advancements in AI in recent years, exponentially greater than those of its early development in the last century. AI is no longer a matter of science fiction; it is quickly becoming a mainstream reality with a major impact on every aspect of our lives. Through passage of the NAIIA, the U.S. has demonstrated its commitment to responsibly investing in the future of AI, including preparing the public, industry, and the future workforce for the new world that has arrived.
To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green's virtual briefing, Bias in Artificial Intelligence: Legal Risks and Solutions, on March 23 from 1:00 to 4:00 p.m. (ET).