January AI Strategies Newsletter
AI Infrastructures and Their Consequences in Global and Local Contexts: Minerva and CASE
These projects look at how cultural values and institutional priorities shape artificial intelligence infrastructures in both national and global contexts, in order to better understand the effects of comparative AI contexts for security. Learn more at our website.
Announcement
We are excited to announce that George Mason University’s AI Strategies team has launched a renewed interface for our website, AI Strategies.
The Center for AI and SME Excellence (CASE) is now available on our AI Strategies website, making it easier to access and engage with our analysis. Check out the updates and stay informed on AI policy and strategy!
The AI Strategies team conducts two large research projects: Minerva and CASE.
Minerva: Examines how cultural values and institutional priorities shape artificial intelligence infrastructures in national and global contexts, in order to better understand the effects of comparative AI contexts for security.
CASE: Aims at bolstering the economic competitiveness of small businesses in Virginia by harnessing the power of Artificial Intelligence.
The GMU AI Strategies team’s recent publications:
Singh, J.P., Amarda Shehu, Manpriya Dua, and Caroline Wesson (2025) Entangled Narratives: Insights from Social and Computer Sciences on National Artificial Intelligence Infrastructures. International Studies Quarterly, https://doi.org/10.1093/isq/sqaf001
Abstract:
How do countries narrate their values and priorities in artificial intelligence infrastructures in comparative national and global contexts? This paper analyzes the policies governing national and regional artificial intelligence infrastructures to advance an understanding of ‘entangled narratives’ in global affairs. It does so by utilizing artificial intelligence techniques that assist with generalizability and model building without sacrificing granularity. In particular, the machine learning and natural language processing big data models used alongside some process-tracing demonstrate the ways artificial intelligence infrastructural plans diverge, cluster, and transform along several topical dimensions in comparative contexts. The paper’s entangled narrative approach adds to IR theorizing about infrastructural narratives and technological diffusion. We provide patterned and granular results at various levels, which challenge and refine existing theories that attribute differences in infrastructures and technological adoption to geopolitical competition and imitation, top-down or linear international diffusion effects, and differences in political systems.
Amarda Shehu, Jesse Kirkpatrick, J.P. Singh, “Key Takeaways from the U.S. National Security Memorandum and Framework” GMU, October, 2024.
“On October 24, 2024, President Biden released the long-anticipated U.S. National Security Memorandum on AI (the memo) alongside a complementary Framework to Advance AI Governance and Risk Management in National Security (the framework).
Click to see key takeaways from the memo and framework.”
J.P. Singh, “Unpacking the ‘AI wardrobe’: How national policies are shaping the future of AI”, The AI Wonk, OECD AI Policy Observatory, October 30, 2024.
“Academic research can provide empirically grounded answers to the above declarations in the sea of high-level insights about AI’s impact. My colleagues and I are doing significant work whose findings are evidence-driven. They confirm some of these declarations while providing sobering counterpoints to others.
We coined the phrase ‘AI Wardrobe’ in 2022 to connote the variable mix of common elements for understanding national AI policy infrastructures. Individual national AI policies combine similar items, but the wardrobe looks different in each context. The AI wardrobe consists of macro issues such as types of research capabilities, workforce development, data regulation policies, and international collaboration. Leaders like the United States, EU, China, Japan, and Korea showcase high basic research capabilities, whereas developing countries might adopt approaches encouraging tech hubs.”
Alexander S Choi, Syeda Sabrina Akter, JP Singh, Antonios Anastasopoulos. “The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?”, In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, December 2024.
Abstract:
Large Language Models (LLMs) have shown capabilities close to human performance in various analytical tasks, leading researchers to use them for time and labor-intensive analyses. However, their capability to handle highly specialized and open-ended tasks in domains like policy studies remains in question. This paper investigates the efficiency and accuracy of LLMs in specialized tasks through a structured user study focusing on Human-LLM partnership. The study, conducted in two stages—Topic Discovery and Topic Assignment—integrates LLMs with expert annotators to observe the impact of LLM suggestions on what is usually human-only analysis. Results indicate that LLM-generated topic lists have significant overlap with human-generated topic lists, with minor hiccups in missing document-specific topics. However, LLM suggestions may significantly improve task completion speed, but at the same time introduce anchoring bias, potentially affecting the depth and nuance of the analysis, raising a critical question about the trade-off between increased efficiency and the risk of biased analysis.
Manpriya Dua, J.P. Singh, Amarda Shehu. Health Equity in AI Development and Policy: An AI-enabled Study of International, National and Intra-national AI Infrastructures. AAAI 2024 Fall Symposium on Machine Intelligence for Equitable Global Health (MI4EGH).
Trending News and Reports:
What is DeepSeek and why is it disrupting the AI sector?, Reuters, (January 28th, 2025)
“Chinese startup DeepSeek's launch of its latest AI models, which it says are on a par or better than industry-leading models in the United States at a fraction of the cost, is threatening to upset the technology world order.
Liang's fund announced in March 2023 on its official WeChat account that it was "starting again", going beyond trading to concentrate resources on creating a "new and independent research group, to explore the essence of AGI" (Artificial General Intelligence). DeepSeek was created later that year.
It is unclear how much High-Flyer has invested in DeepSeek. High-Flyer has an office located in the same building as DeepSeek, and it also owns patents related to chip clusters used to train AI models, according to Chinese corporate records.”
Trump announces a $500 billion AI infrastructure investment in the US, CNN, (January 21st, 2025)
“Stargate will build “the physical and virtual infrastructure to power the next generation of AI,” including data centers around the country, Trump said. Ellison said the group’s first, 1 million-square foot data project is already under construction in Texas.
AI leaders have for months been sounding the alarm that more data centers — as well as the chips and electricity and water resources to run them — are needed to power their artificial intelligence ambitions in the coming years.”
AI-Generated Regulation: Not Ready for Prime Time (Yet), AEI, (January 16th, 2025)
“Generative AI models do not appear ready to draft regulations for government agency staff. The policies in AI-drafted rules are overly influenced by the number of comments supporting a particular position, and the reasoning is too cursory.
Current generative AI models show more promise in summarizing comments, which will likely save agency staff significant time and reduce the likelihood that they overlook important comments.
Technology will undoubtedly advance, and future versions of AI may improve at evaluating the merits of comments and responding to them.”
European AI Regulation: Opportunities, Risks, and Future Scenarios from a Metropolitan Perspective, CIDOB, (December, 2024)
“This document brings together the conclusions of the seminar “European AI Regulation. Opportunities, Risks, and Future Scenarios from a Metropolitan Perspective” which, held on 19 September 2024 at CIDOB, was attended by academics and experts—from public and private sectors—on the regulation and management of AI technologies. The seminar is part of CIDOB’s research and foresight programme in geopolitics and international relations, with support from the Barcelona Metropolitan Area. This programme aims to offer knowledge to the general and specialist public, to produce publications and stimulate debate, and to incorporate new foresight methodologies into the analysis of today’s most urgent international challenges.”
Leveraging AI and emerging technologies to unlock Africa’s potential, Brookings, (January 13th, 2025)
“With the latest projections estimating that Africa is on track to meet less than 6% of the UN Sustainable Development Goals (SDGs) by 2030, the international development community and policymakers alike are looking for development accelerators that can maximize impact and ultimately deliver on the goals of Agenda 2063.”
Understanding the Artificial Intelligence Diffusion Framework: Can Export Controls Create a U.S.-Led Global Artificial Intelligence Ecosystem?, RAND, (February 2nd, 2025)
“This paper describes the strategic and national security implications of this new framework, which is intended to achieve these seemingly competing objectives by carefully managing how AI technology is diffused globally.”
DeepSeek Is a Win for China in the A.I. Race. Will the Party Stifle It?, The New York Times, (January 31st, 2025)
“DeepSeek has shown that it might be possible for China to make A.I. cheaper and more accessible for everyone. The question, though, is how the ruling Communist Party manages the rise of a technology that could one day be so disruptive that it could threaten its interests — and its grip on power.
Now that the pendulum has swung the other way, that confidence in the industry could prove to be a “double-edged sword,” said Matt Sheehan, who studies Chinese A.I. as a fellow at the Carnegie Endowment for International Peace.
The party’s “core instincts are toward control,” Mr. Sheehan said. “As they regain confidence in China’s A.I. capabilities, they may have a hard time resisting the urge to take a more hands-on approach to these companies.”
Trump is right to invest in AI development. But is it too late to beat China?, USA Today, (February 2nd, 2025)
“AI has suddenly jumped to the top of the news cycle due to early Trump administration actions in the arena and new platforms emerging out of China. Pundits have likened China's release of a new AI model to the "Sputnik moment" of the AI race. We must respond appropriately.
America has a strong adversary in the AI war against China, and Trump’s early actions are promising, but we must invest more and remove regulatory roadblocks.
China has publicly stated its goal is to be the global leader in AI by 2030. America has maintained a lead in that race in developing publicly accessible platforms.”