Mary Kopczynski, CEO of RegAlytics, breaks down this week’s hot regulatory topics, exclusively for Exiger.
Exiger Update – EO 14110
This episode focuses on Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
This sweeping executive order calls on many federal agencies, as well as Congress, to address safety and artificial intelligence. Here's the breakdown of the 88-page document.
Safety Tests for AI
Developers of the most powerful AI systems will have to share their safety test results and other critical information with the U.S. government.
Commerce: IaaS Providers Will Disclose More Information
The order also includes a rule requiring any Infrastructure-as-a-Service (IaaS) provider to submit a report to Commerce identifying any foreign person or entity contracting with it to train large AI models. It also prohibits foreign resellers of Infrastructure-as-a-Service unless the foreign reseller submits reports under requirements the Department of Commerce will be creating.
NIST: Setting Standards
The National Institute of Standards and Technology (NIST) is going to design standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.
Homeland: AI Safety and Security Board
Homeland Security is creating the AI Safety and Security Board, which is an Advisory Committee to help guide how AI is used in critical infrastructure.
Dangerous Weapon Prevention
Sector risk management agencies, including the Department of Energy, will address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.
Any AI projects funded through investments or grants by health-related government agencies are going to be vetted to prevent the engineering of dangerous biological weapons.
Commerce: AI Watermarking
The Department of Commerce is tasked with content authentication and watermarking to clearly label AI-generated content.
There is already a previously existing Executive Order having to do with algorithmic discrimination, but this EO adds even more. It calls for clear guidance to landlords, federal benefit programs, federal contractors and the criminal justice system to keep AI algorithms from exacerbating discrimination.
Additionally, HHS is going to pass rules preventing AI bias against people with disabilities, covering technologies such as eye tracking and gait (as in walking) analysis, and to come up with a strategy for data quality. This way, if AI reaches some sort of medical conclusion, it is appropriately tested against real-world data to confirm its accuracy, with a reporting mechanism to identify when it is not.
There is then a whole section on privacy. The president is calling on Congress to pass bipartisan data privacy legislation.
And in the meantime, the Executive Branch is doing what it can absent that legislation, for example by funding research that improves cryptography capabilities. This includes funding for veterans and small businesses for training, investments, growth and participation in AI sprints.
Training / Workforce Development
Then we get into the impact of artificial intelligence and labor. The principle driving this is to ensure “the companies and technologies of the future are made in America.” So, in addition to worker training, it includes an immigration push to “attract the world’s AI talent to our shores — not just to study, but to stay.” The Secretary of State is directed to add visa allowances for AI skills or skills in other critical and emerging technologies. So that should be good news for some of you trying to bring experts in from overseas.
The order commissions a study from the Department of Labor to understand potential impacts of AI and how existing training programs can be leveraged to minimize harm to workers.
Even the Department of Transportation is being tasked with pieces of this, for the safety of planes, trains and anything else that moves and uses AI, including autonomous vehicles.
And, finally, the Department of Education needs to come up with an AI toolkit for educators on the responsible use of AI in the classroom.
There are more requirements — mostly applicable to the Federal Government itself — but I want to stop there and just marvel at what this is going to mean for all of us in the coming year.
What You Can Do
The beauty of having this information now is that you can start planning. If your company is dabbling in AI, or perhaps standing on the sidelines waiting to use it, here is some practical advice.
One, have a central place where you are tracking the use of AI at your company. Put a person in charge of it. Hire someone. Do whatever you need, including updating your HR handbook and your policies and procedures.
Two, take a moment to think about your downstream customers. The ones in the U.S. The ones out of the U.S. What are they going to need from you to be responsible about AI? Should you set up disclosures, for example?
Then, it’s time to think about your vendors and your supply chain. It’s time for you to update your questions. Maybe it’s a simple survey that asks if your suppliers are using AI or not. Or perhaps it’s something more elaborate. But I can’t recommend enough that you start now before it’s too late and too hard to track.
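To make the three steps above concrete, the central tracking place from step one can start as something very simple: one structured record per AI system, noting who owns it, whether its output reaches customers, and what disclosures exist. Here is a minimal illustrative sketch in Python; every field name and example entry is hypothetical, and nothing here is mandated by the EO.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseRecord:
    # Hypothetical fields for an internal AI-use inventory;
    # adapt to your own policies and procedures.
    system_name: str
    owner: str                      # person accountable for this system
    vendor: str                     # "internal" if built in-house
    use_case: str
    customer_facing: bool           # does output reach downstream customers?
    disclosures: list = field(default_factory=list)  # disclosures made to customers

# Example register with two made-up entries.
registry = [
    AIUseRecord("chat-summarizer", "J. Doe", "Acme AI (example vendor)",
                "summarize support tickets", customer_facing=True,
                disclosures=["AI-generated summary label"]),
    AIUseRecord("resume-screener", "HR Ops", "internal",
                "rank job applications", customer_facing=False),
]

# Step two in one line: flag customer-facing systems with no
# disclosure on file yet, so you know where to act first.
missing = [r.system_name for r in registry
           if r.customer_facing and not r.disclosures]
print(missing)  # → [] (both example entries are covered)
```

Even a spreadsheet with these same columns works; the point is that the fields exist and one named person keeps them current.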
Thanks for joining me for Exiger’s Regulatory update.