Conference Brings University To Forefront of Discussion Around AI And National Security
Image Courtesy of The Catholic University of America
By Patrick D. Lewis
While generative AI tools such as ChatGPT have raised concerns among educators about the future of academic integrity, the revolutionary technology has also stirred alarm in other circles, including the national security community.
The Department of Politics, the School of Theology and Religious Studies, and the Institute for Human Ecology hosted a conference on “Generative AI and National Security” in Heritage Hall on January 31. The event was organized and moderated by politics professor Dr. Jonathan Askonas and sponsored by Leidos, a defense contractor and engineering research firm.
Across four panel sessions and a keynote conversation, more than a dozen speakers, among them technologists, defense and national security experts, journalists, and academics, discussed the biggest potential security liabilities that come with AI’s explosion in popularity. They also weighed the pros and cons of integrating the technology into government agencies.
“Security is a cornerstone when using this technology,” said Ron Keesing, Leidos Senior Vice President for Technology Integration.
Potential government uses of generative AI include assistance with mundane tasks such as writing reports, filing paperwork, and drafting press releases and statements. Any such use, however, would still require oversight at some level. Again and again, speakers invoked the phrase “human in the loop” to describe the need to supervise generative AI and ensure it performs as expected.
Generative AI, bots, and other forms of artificial intelligence have already proven some critics’ worries well-founded. Psychological operations aimed at influencing elections and inducing civil unrest have unfolded across the internet, particularly on social media platforms. AI has also been used to carry out countless financial crimes and scams.
Dr. Kiril Avramov, Assistant Professor at the University of Texas at Austin and Director of the Global Disinformation Lab, was one of many to argue that generative AI is here to stay, so it must be embraced even as its potential for misuse is monitored.
“We have to adapt, not panic,” said Avramov.
The conference speakers did not focus solely on the downsides of generative AI; they also emphasized its benefits.
Michael Kratsios, managing director at Scale AI, said, “It is incredibly powerful and it will impact everything in the world.”
He believes that, properly implemented, generative AI will change the world for the better, and that the Department of Defense and other government agencies should begin investing in it immediately.
Its potential is already being put to practical use: generative AI, and AI in general, is being deployed on battlefields in Ukraine. Panelists discussed the myriad tasks facing commanders in battle and argued that generative AI can take on many of them, leaving officers more time for operational considerations. They added that adopting AI in military settings is all the more urgent for the U.S. because its adversaries are already using the technology extensively and will only get better at it.
The dangerous implications of AI were an important part of every discussion at the conference. The fourth and final panel addressed how to govern AI and how agencies should handle its implementation and internal use. Fears of a generative AI program smarter than humans are warranted, explained Lt. Colonel Joe Chapa, U.S. Air Force Chief Responsible AI Ethics Officer. He and other speakers did not rule out a scenario in which lives would be at risk if a superintelligent AI program were given significant physical powers and responsibilities.
Dr. Bianca Adair, Director of the Intelligence Studies Program at CUA, said it is especially important to discuss issues like this one at Catholic U.
“It’s good to keep in mind, especially as we’re here at CUA, that there is positive AI and negative AI,” Dr. Adair said. “An AI algorithm has no ethical core.”