Publicly Released: July 1, 2020
The objective of this audit was to determine the DoD’s progress in developing an Artificial Intelligence (AI) governance framework and standards and to determine whether the DoD Components implemented security controls to protect AI data and technologies from internal and external cyber threats.
On August 13, 2018, the FY 2019 National Defense Authorization Act (NDAA) directed the Secretary of Defense to designate a senior official to coordinate DoD efforts to develop, mature, and transition AI technologies into operational use. The FY 2019 NDAA defines AI as “any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.”
In June 2018, at the direction of the Deputy Secretary of Defense, the DoD Chief Information Officer (CIO) established the Joint Artificial Intelligence Center (JAIC) to facilitate AI governance, policy, ethics, and cybersecurity. In February 2019, the DoD published its AI Strategy, “Harnessing AI to Advance Our Security and Prosperity,” which directed the DoD to accelerate the adoption of AI to transform the future of the battlefield and the speed with which the DoD responds to threats.
As of March 2020, the JAIC had taken some steps to develop an AI governance framework and standards, such as building the JAIC workforce, developing National Mission objectives, and adopting ethical principles. However, to ensure that the JAIC can meet the responsibilities outlined in the FY 2019 NDAA, DoD AI Strategy, and DoD guidance, the JAIC should also:
- include a standard definition of AI and review that definition at least annually;
- develop a security classification guide to ensure the consistent protection of AI data;
- develop a process to accurately account for AI projects;
- develop capabilities for sharing data;
- include standards for legal and privacy considerations; and
- develop a formal strategy for collaboration between the Military Services and DoD Components on similar AI projects.
We also identified that the four DoD Components and two contractors we reviewed did not consistently implement security controls to protect the data used to support AI projects and technologies from internal and external cyber threats. Specifically, the DoD Components and contractors did not consistently:
- configure their systems to enforce the use of strong passwords, generate system activity reports, or lock after periods of inactivity;
- review networks and systems for malicious or unusual activity;
- scan networks for viruses and vulnerabilities; and
- implement physical security controls to protect AI data.
Without consistent application of security controls, malicious actors can exploit vulnerabilities on the networks and systems of DoD Components and contractors and steal information related to some of the Nation’s most valuable AI technologies. The disclosure of AI information developed by the DoD could threaten the safety of the warfighter by exposing the Nation’s most valuable advanced defense technology and placing the United States at a disadvantage against its adversaries.
We recommend that the JAIC Director establish an AI governance framework that, among other things, includes a standard definition of AI; a central repository for AI projects; a security classification guide; and a strategy for identifying similar AI projects and for promoting the collaboration of AI efforts across the DoD.
We also recommend that the Army, Marine Corps, Navy, and Air Force CIOs develop and implement a plan to correct the security control weaknesses related to using strong passwords; monitoring networks and systems for unusual activity; locking systems after inactivity; and implementing physical security controls.
Lastly, we recommend that the contracting officer for the Defense Threat Reduction Agency (DTRA), and the Strategic Capabilities Office (SCO) Security and Program Protection Director, in coordination with their DoD requiring activities, develop and implement a plan to verify that contractors correct the security control weaknesses identified in this report.
Management Comments and Our Response
The DoD CIO, responding for the JAIC Director, agreed to establish a biannual AI portfolio review with all DoD Components; a central repository for AI projects; legal and privacy standard operating procedures; and a strategy for collaboration by focusing on early and frequent interaction with users and Service program offices. The DoD CIO’s comments were not clear on the actions he will take to develop a standard definition for AI and a security classification guide. Therefore, the JAIC Director should provide additional comments on the final report addressing those recommendations.
The Cybersecurity and Information Assurance Director, responding for the Army CIO; the Deputy Commandant for Information, responding for the Marine Corps CIO; the Associate Deputy CIO, responding for the Air Force CIO; and the DTRA Integration Division Director for Research and Development agreed to develop and implement a plan to correct the security weaknesses we identified. Although the SCO Director, responding for the Security and Program Protection Director, agreed to update policies and conduct quarterly program reviews, he did not agree to all recommendations. Therefore, the Security and Program Protection Director should provide additional comments on the final report addressing that recommendation.
The Deputy Chief of Naval Operations, responding for the Navy CIO, stated that the Navy disagreed with the finding related to physical security. Although the Deputy Chief provided comments on the findings, he did not respond specifically to the recommendations; therefore, we request that the Navy CIO provide comments on the final report that describe how he plans to address the recommendations.
This report is a result of Project No. D2019-D000CR-0132.000.