Politics Department Hosts Discussion On AI & Geopolitics
Photo by Patrick D. Lewis
By Patrick D. Lewis
Catholic University’s Department of Politics sponsored a panel discussion on artificial intelligence and its implications for geopolitics on Thursday, December 4.
The panel was composed of Dr. Victor McCrary, university vice provost for national security innovation, and Ambassador Ramón Blecua, a Spanish diplomat who had served in Iraq, India, and other locations around the world as part of the Spanish Foreign Service, as well as a representative of the European Union. Politics professor Dr. Michael Promisel moderated the discussion.
Blecua and McCrary discussed AI’s effects on geopolitics, its implications for the world of politics, and the role CUA can play in the debates surrounding it. Blecua called the event an “excellent opportunity to debate, discuss, and bring up ideas about the kind of world in which we are heading.”
Blecua said he believes four topics are central to AI’s current and future effects on geopolitics: geography, technology, demography, and ideology. He said AI’s effects are “about the narratives being transformed,” not just geopolitics, and that all four areas are quickly “changing beyond recognition.”
Blecua said none of the four topics resembles what it was when he started his career 35 years ago. Trade routes, activities in the Arctic, and climate change are all major topics now, and Russia and China have reemerged as near-peer competitors. Blecua called that part of a “return of Eurasia,” which he described as the rivalries of old world empires being “reenacted” by modern powers in Asia and the Middle East. He said this is an indicator of the erosion of the “liberal international order” that was established by the U.S. and its allies after World War Two. Blecua said the post-war order is “being put into question, and not least of all in the U.S. itself.”
The United Nations has found itself increasingly irrelevant, something else Blecua said could lead to a more dangerous world. All of this means that the great power competitions of the 19th century have returned, and developing AI is a big part of the race to dominate. Blecua said he is worried about AI becoming smarter than humans, to the point that we cannot comprehend the decisions it makes or the systems, including autonomous weapons, that it designs and operates, and might therefore be unable to stop it. Blecua said the war in Ukraine, which he called the “first AI war in history,” has already shown some of these dangers.
Blecua said non-state actors, including terrorists and cartels but also legitimate enterprises such as large corporations and tribal and religious networks, now have “agency” of their own, which means geopolitics is changing to include those non-state actors in its most important discussions. Even the Church is one of these actors, and Blecua noted that Pope Leo, on his recent trip to the Middle East, was “received as an actor of the utmost importance,” an honor that heads of state in Europe are no longer afforded.
Those non-state actors are some of the same ones that are developing AI. McCrary noted that, in the U.S., AI is being developed by the private sector, and other areas of tech, such as communications, are also owned and developed by commercial entities, which makes those technologies harder for the government to control. In China, on the other hand, the major corporations are owned and controlled by the state, which means the government can more tightly regulate AI.
However, McCrary said China is less interested in regulating AI than the U.S. is. In the U.S., the public also has a measure of control: voters can support or oppose officials based on their AI policies, and consumers can stop buying and using commercial AI products, which would lead to those products’ financial demise.
McCrary also spoke about the role of ethics in AI development. “Right now, the next step in AI is theory of mind,” he said. That means AI is developing a mind more capable than a human’s, which could be disastrous without a human in the loop. “Who’s gonna play that role in between?” McCrary asked. He said that, without a person in the loop, an advanced form of AI could make decisions for reasons we can’t even comprehend.
“I ask you all… to think about that. Because a lot of people are not looking at causality,” he said. “We have a higher calling here,” he continued, saying CUA students and faculty are uniquely positioned to have conversations about the ethics and implications of AI. “We have a duty as citizens to inject ourselves into those conversations,” he said.
That’s also important in the realm of geopolitics itself, where the U.S. now finds itself in a cold war with China. McCrary said influence will be critical, especially in the Global South, where China’s Belt and Road Initiative has caused many countries to trust China instead of the U.S. However, McCrary said the U.S. has one resource pool that should be tapped more extensively: immigrants. He said many first-, second-, and third-generation immigrants from countries in the Global South are working in the U.S., and those countries might trust those people more than Chinese businessmen and government officials. CUA also has the advantage of diversity, McCrary said. He said the university should be a “venue for this conversation” and take advantage of its “vast demography.”
Finally, both McCrary and Blecua talked about the need for a “moral compass” for AI, something that it won’t develop on its own but which we can program into it.
“What we need is a moral compass,” said Blecua. “Technology doesn’t have a moral compass… the question is, do we have the will and the ability to put a moral compass in?”
