Lawmakers on the Senate Energy Committee were warned on Thursday about both the threats and opportunities that come with artificial intelligence being integrated into the U.S. energy sector and everyday life as a whole.
The committee held a hearing on the rapidly advancing technology, and the experts present spent a significant amount of time not only discussing AI but also the ever-looming threat of China and its efforts to steal and recreate emerging U.S. capabilities.
“China released their new generation of AI Development Plan, which includes [research and development] and infrastructure targets. The U.S. currently does not have a strategic AI plan like this,” Committee Chair Joe Manchin, D-W.Va., said at the hearing’s outset.
People will have AI ‘in their pockets’
Among the revelations the witnesses offered on Thursday was just how pervasive AI is going to be in daily life – as Professor Rick Stevens of the Argonne National Laboratory put it, “There’s no putting Pandora back in the box.”
He suggested that instead of trying to stymie its advancement, officials and other Americans need to get educated quickly on how AI works and how to curb its negative impacts.
“I think we’re going to have…to get smarter about how we manage the risks associated with advanced AI systems,” Stevens said.
“Every person within the next few years is going to have a very powerful AI assistant in their pocket to do whatever it is they can get that assistant to help them to do. Hopefully most of that will be positive advances for society and so on. Some of that will be negative.
“We’ve got to be able to understand how to reduce that negative element, detect it when it happens and mitigate it either through laws or through other technical means before something dramatically bad happens.”
DOE Deputy Secretary David Turk echoed the sentiment at another point in the hearing, pointing out that AI’s advancement “makes it easier for less sophisticated actors to do more sophisticated kinds of attacks.”
“The Pandora’s box is open. We now need to deal with it. And we need to take these kinds of emerging AI challenges head on,” Turk said. “We’re not there, where we need to be. We need to make the investments we need to keep working at this.”
We need policies ‘for the China we have’
Anna Puglisi of Georgetown University’s Center for Security and Emerging Technology also warned senators that current U.S. policy surrounding our adversaries, specifically China, will not be sufficient in the rapidly changing tech landscape.
“We need to have policies for the China we have, not the China we want. Most policy measures to date have been tactical and not designed to counter an entire system that is structurally different than our own,” Puglisi said.
“It’s essential that the United States and other liberal democracies invest in the future. We’ve heard about the great promise of these technologies. But we must build research security into those funding programs from the start.
“Existing policies and laws are insufficient to address the level of influence that the CCP exerts in our society, especially in academia and research.”
Turk later added that it was not China alone that posed a threat, and that the U.S.’s traditional opponents on the world stage also presented a host of new issues where AI is concerned.
“It’s not just China. There’s others as well, of course, Russia, Iran, North Korea,” Turk said. “The threat is evolving, and we need to evolve our responses accordingly…We are annually updating that risk matrix now so that we make sure that we are updating in terms of what technologies we consider sensitive, what protocols we have in place.”
Why regulation is not enough
Despite emphasizing the importance of guardrails to mitigate AI’s worst outcomes, hearing witnesses also cautioned that regulation can only go so far.
It comes as Senate Majority Leader Chuck Schumer is pushing his chamber to move forward with an AI regulatory framework even as some, mainly on the Republican side, worry it is too soon to do so.
Asked by Sen. Angus King, I-Maine, whether imposing watermark requirements on AI content would help mitigate issues with disinformation, Stevens explained it was a “flawed” approach.
“I think it’s flawed in the sense that there will be ultimately hundreds or thousands of generators of AI, some of which will be the big companies like Google and OpenAI and so forth. But there will be many open models produced outside the United States, and produced elsewhere, that of course wouldn’t be bound by U.S. regulation,” the scientist said.
“We can have a law that says ‘watermark AI-generated content,’ but a rogue player outside the [country] operating in Russia or China or somewhere wouldn’t be bound by that and could produce a ton of material that wouldn’t actually have those watermarks. And so it could pass a test, perhaps.”
Stevens said the U.S. approach must be “more strategic” than watermark label laws.
“We’re going to have to authenticate real content down to the source. Whether it’s true or not is a separate issue.”