Building Trust Ahead of Emerging Requirements for Safe and Secure Use of Artificial Intelligence

Client Alert

By Bob Kolasky, Senior Vice President of Critical Infrastructure, Exiger

President Biden’s landmark Executive Order on Artificial Intelligence (EO 14110) marks the first significant policy-making effort by the U.S. government on the important topic of how best to manage the opportunity — and risk — of artificial intelligence. 

Presidential executive orders can be viewed somewhat simply.  They are very public directions from the leader of the Executive Branch (the president) to his direct reports (generally the Cabinet secretaries) as to what he expects from them. In other words, they are a management tool for the president to set the direction of his administration and to hold his staff accountable. 

“My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so.  The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.”

President Joe Biden, Executive Order 14110

What then to make of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence?  At 88 pages, it means that President Biden is keeping his direct reports busy. And, with its very public rollout, it means that the boss considers this a significant priority. It also means that the administration isn’t waiting for Congress to set national direction for artificial intelligence use — nor does it necessarily want to be beholden to whatever Congress comes up with. Instead, it is clearly a bold attempt by the administration to govern the use of artificial intelligence even as the contours of that use, and its associated risks, are still taking shape. There is also a significant element of the government using its combined authority around convening, regulation, tone setting and procurement to set those contours. 

As a supply chain software company with award-winning AI, Exiger generally supports policies to better define how AI can be deployed in a trustworthy manner while recognizing the technology’s potential. Policymaking is not an easy task, however, and it is important that the government be transparent, inclusive and evidence-based in its efforts. Here are a few areas that caught our attention in the EO:

Government Use of AI

  • Establishment of government leadership for AI.  The EO establishes an overall government policy lead for the issue, a role to be played by Deputy Chief of Staff Bruce Reed, as well as a governance structure in the Executive Office of the President and requirements for agencies to designate AI leads. Assigning named leaders and establishing enterprise governance enhances the sustained attention that AI will get within government and sets a precedent that private sector organizations should consider following. 
  • Government’s requirements to use and contract for AI safely.  One of the aims of the EO is to give agencies leadership guidance to use and fund AI for mission execution rather than be overly conservative and risk-averse in deploying the technology. It calls for guardrails in federal contracting practices, additional risk reviews, and advancements in acquisition policies and processes. Crucially, however, it does not call for a slowdown or impose anti-innovation limitations. 

Commercial AI Requirements

  • Regulation on foreign use and sale of infrastructure as a service (IaaS) for AI. Continuing a trend of regulations coming out of the Department of Commerce, the EO places additional “Know Your Customer” requirements on IaaS providers to try to ensure shared responsibility for how foundational technologies are used to enable AI development and deployment. This is an attempt by the administration, through the Secretary of Commerce, to make it harder for foreign adversaries to use U.S. infrastructure to develop AI applications. It also places additional reporting burdens on U.S. companies. 
  • Dual-use model requirements. One of the most consequential provisions addresses dual-use models: models whose capabilities could be applied for both beneficial and harmful purposes.  Whether for red-teaming or for advancing war-fighting, U.S. companies will develop ways to weaponize AI, and such methods will also have the dual use of helping defend against our adversaries’ use of AI. It is important, however, that these efforts have a strong set of safeguards built in. The EO tries to do that by implementing strict reporting requirements for industry as well as yet-to-be-defined mandates on physical and cyber security for those models, including their training and model weights. There is likely to be urgency placed on developing these mandates.     
  • Critical infrastructure risk assessments for AI. The EO recognizes that many critical infrastructure sectors — particularly energy, healthcare, transportation and government services — are increasingly relying on AI for critical functions.  It calls on agencies with sector expertise and responsibility to work to assess the risk of those applications and design in proactive risk controls. The outcomes of these assessments and how they are used will be an issue to watch in 2024, as there is generally not a common risk assessment process in place currently. 

Advancing the AI Ecosystem

  • AI workforce initiatives. Echoing recent U.S. government actions on the cyber workforce, the EO calls for the Office of Personnel Management to make it easier for the U.S. government to hire and retain talent to work on AI issues, and for other parts of the government to promote and fund efforts to build a national workforce. Left unanswered at this stage is exactly what AI-related knowledge, skills, and abilities are needed: presumably engineering skills, an understanding of software and models, and a commitment to the ethical use of AI. Building and finding that workforce are no easy tasks. 
  • Visa policy changes. Related to the workforce challenges emphasized elsewhere, the EO directs agencies with immigration and visa responsibilities — mainly the Department of Homeland Security and the State Department — to extend policies to enable the U.S. to take advantage of expertise in AI held by non-U.S. citizens. One question associated with that will be how to do so while maintaining intellectual property protection from adversarial nations.  

Fundamental Protections

  • Model reviews for bias and adherence to civil rights and civil liberties. One of the most challenging public policy issues will be achieving enough model and training transparency to ensure that AI use adheres to generally accepted legal frameworks against bias and protects fundamental rights.  The EO acknowledges the importance of that issue and calls for tools to help auditors and law enforcement protect citizens, as well as ways for the public to weigh in. Keeping policy abreast of technical evolution in order to protect national values is a crucial reason why action in this area is needed. 

There is much more involved in the policy implementation than the above items. By our count, there are more than 135 discrete tasks — some to multiple agencies — within the EO. It is a fair critique to wonder how all of that work can be done effectively, given the dearth of AI experts — implicitly acknowledged in the EO — across the federal government. 

It is our experience that not all tasks in executive orders end up leading to meaningful outcomes, and many fade in execution because of lack of resources and capacity. EOs don’t come with new funds or authorities, after all. Still, it is fair to expect that the next year will be one of momentous shifts in how the U.S. government approaches artificial intelligence and one that will set the tone for the next several years. 

For Exiger’s clients who are either consumers or producers of AI — an increasingly large segment of commercial and government entities — tracking the reports, rules, standards and requirements that come from this EO will be increasingly important. Government agencies have an obligation under the EO to advance the use of AI in mission execution, but they must also set up the governance, processes and policies to do so safely and securely while building expertise. Commercial entities, meanwhile, will increasingly see government scrutiny of how they develop and deploy AI. In both cases, proactive risk analysis and management of the supply chains and vendors supporting expanded use of AI will be crucial. 

Contact us to learn more about Exiger’s AI solutions to identify risk and bring visibility to all tiers of your supply chain.
