February AI Strategies Newsletter
AI Infrastructures and Their Consequences in Global and Local Contexts: Minerva and CAIIEC
These projects examine how cultural values and institutional priorities shape artificial intelligence infrastructures in both national and global contexts, in order to better understand the security implications of these comparative AI contexts. Learn more at our website.
Announcement
We are excited to announce that George Mason University’s AI Strategies team has launched a redesigned interface for our website, AI Strategies.
The Center for AI Innovation for Economic Competitiveness (CAIIEC) is now featured on our AI Strategies website, making it easier to access and engage with our analysis. Check out the updates and stay informed on AI policy and strategy!
The AI Strategies team leads two large research projects: Minerva and CAIIEC.
Minerva: Examines how cultural values and institutional priorities shape artificial intelligence infrastructures in national and global contexts, in order to better understand the security implications of comparative AI contexts.
CAIIEC: Aims to bolster the economic competitiveness of small businesses in Virginia by harnessing the power of artificial intelligence.
The GMU AI Strategies team’s recent publications:
Dua, M., Singh, J.P. & Shehu, A. (2025) The ethics of national artificial intelligence plans: an empirical lens. AI Ethics. https://doi.org/10.1007/s43681-025-00663-2.
Abstract
Over fifty countries have published national infrastructure and strategy plans on Artificial Intelligence (AI), outlining their values and priorities regarding AI research, development, and deployment. This paper utilizes a deliberation and capabilities-based ethics framework, rooted in providing freedom of agency and choice to human beings, to investigate how different countries approach AI ethics within their national plans. We explore the commonalities and variations in national priorities and their implications for a deliberation and capabilities-based ethics approach. Combining established and novel methodologies such as content analysis, graph structuring, and generative AI, we uncover a complex landscape where traditional geostrategic formations intersect with new alliances, thereby revealing how various groups and associated values are prioritized. For instance, the Ibero-American AI strategy highlights strong connections among Latin American nations, particularly with Spain, emphasizing gender diversity but pragmatically and predominantly as a workforce issue. In contrast, a US-led coalition of “science and tech first movers” is more focused on advancing foundational AI and diverse applications. The European Union AI strategy showcases leading states like France and Germany while addressing regional divides, with more focus and detail on social mobility, sustainability, standardization, and democratic governance of AI. These findings offer an empirical lens into the current global landscape of AI development and ethics, revealing distinct national trajectories in the pursuit of ethical AI.
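As a rough, hypothetical illustration of the kind of content-analysis-plus-clustering pipeline the abstract mentions, the sketch below groups toy national-plan texts using TF-IDF vectors and k-means. The country names, text snippets, and cluster count are invented placeholders, not the authors’ data or actual method.

```python
# Hypothetical sketch: clustering national AI plan texts to surface
# groupings like those the paper reports. All inputs are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

plans = {
    "spain":   "gender diversity workforce inclusion regional cooperation",
    "mexico":  "workforce gender diversity digital skills cooperation",
    "usa":     "foundational research compute frontier models applications",
    "france":  "sustainability standardization democratic governance research",
}

# Represent each plan as a TF-IDF vector over its vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(plans.values())

# Group plans into k clusters; countries in the same cluster share
# emphases, loosely analogous to the geostrategic formations above.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for country, label in zip(plans, kmeans.labels_):
    print(country, "-> cluster", label)
```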
Singh, J.P., Amarda Shehu, Manpriya Dua, and Caroline Wesson (2025) Entangled Narratives: Insights from Social and Computer Sciences on National Artificial Intelligence Infrastructures. International Studies Quarterly, https://doi.org/10.1093/isq/sqaf001
Abstract:
How do countries narrate their values and priorities in artificial intelligence infrastructures in comparative national and global contexts? This paper analyzes the policies governing national and regional artificial intelligence infrastructures to advance an understanding of ‘entangled narratives’ in global affairs. It does so by utilizing artificial intelligence techniques that assist with generalizability and model building without sacrificing granularity. In particular, the machine learning and natural language processing big data models used alongside some process-tracing demonstrate the ways artificial intelligence infrastructural plans diverge, cluster, and transform along several topical dimensions in comparative contexts. The paper’s entangled narrative approach adds to IR theorizing about infrastructural narratives and technological diffusion. We provide patterned and granular results at various levels, which challenge and refine existing theories that attribute differences in infrastructures and technological adoption to geopolitical competition and imitation, top-down or linear international diffusion effects, and differences in political systems.
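To make the idea of plans that diverge and cluster concrete, here is a minimal sketch of building a country similarity graph and detecting communities with networkx. The pairwise similarity scores and the edge threshold are assumptions for illustration, not the paper’s actual model or data.

```python
# Illustrative sketch (not the authors' code): build a similarity graph
# over countries from invented pairwise scores, then detect communities.
import networkx as nx
from networkx.algorithms import community

# Hypothetical pairwise similarities between national AI plans (0..1).
similarities = {
    ("usa", "uk"): 0.8,
    ("usa", "japan"): 0.7,
    ("france", "germany"): 0.9,
    ("spain", "mexico"): 0.85,
    ("uk", "france"): 0.4,
}

G = nx.Graph()
for (a, b), w in similarities.items():
    if w >= 0.5:  # keep only strong ties (arbitrary cutoff)
        G.add_edge(a, b, weight=w)

# Modularity-based communities approximate clusters of kindred plans.
for group in community.greedy_modularity_communities(G, weight="weight"):
    print(sorted(group))
```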
Amarda Shehu, Jesse Kirkpatrick, J.P. Singh, “Key Takeaways from the U.S. National Security Memorandum and Framework,” GMU, October 2024.
“On October 24, 2024, President Biden released the long-anticipated U.S. National Security Memorandum on AI (the memo) alongside a complementary Framework to Advance AI Governance and Risk Management in National Security (the framework).
Click to see key takeaways from the memo and framework.”
J.P. Singh, “Unpacking the ‘AI wardrobe’: How national policies are shaping the future of AI”, The AI Wonk, OECD AI Policy Observatory, October 30, 2024.
“Academic research can provide empirically grounded answers to the above declarations in the sea of high-level insights about AI’s impact. My colleagues and I are doing significant work whose findings are evidence-driven. They confirm some of these declarations while providing sobering counterpoints to others.
We coined the phrase ‘AI Wardrobe’ in 2022 to connote the variable mix of common elements for understanding national AI policy infrastructures. Individual national AI policies combine similar items, but the wardrobe looks different in each context. The AI wardrobe consists of macro issues such as types of research capabilities, workforce development, data regulation policies, and international collaboration. Leaders like the United States, EU, China, Japan, and Korea showcase high basic research capabilities, whereas developing countries might adopt approaches encouraging tech hubs.”
Alexander S. Choi, Syeda Sabrina Akter, J.P. Singh, Antonios Anastasopoulos. “The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?”, in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, December 2024.
Abstract:
Large Language Models (LLMs) have shown capabilities close to human performance in various analytical tasks, leading researchers to use them for time- and labor-intensive analyses. However, their capability to handle highly specialized and open-ended tasks in domains like policy studies remains in question. This paper investigates the efficiency and accuracy of LLMs in specialized tasks through a structured user study focusing on Human-LLM partnership. The study, conducted in two stages (Topic Discovery and Topic Assignment), integrates LLMs with expert annotators to observe the impact of LLM suggestions on what is usually human-only analysis. Results indicate that LLM-generated topic lists overlap significantly with human-generated topic lists, though they occasionally miss document-specific topics. LLM suggestions significantly improve task completion speed, but they also introduce anchoring bias that can affect the depth and nuance of the analysis, raising a critical question about the trade-off between increased efficiency and the risk of biased analysis.
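One simple way to quantify the kind of overlap the study reports between LLM-generated and human-generated topic lists is a Jaccard score over topic sets; the sketch below uses invented topics purely for illustration and is not the paper’s evaluation code.

```python
# Toy overlap measurement between two topic lists (placeholder topics).
def jaccard(a: set[str], b: set[str]) -> float:
    """Intersection-over-union of two topic sets."""
    return len(a & b) / len(a | b)

human_topics = {"data governance", "workforce", "ethics", "security"}
llm_topics = {"data governance", "workforce", "ethics", "innovation"}

print(f"overlap: {jaccard(human_topics, llm_topics):.2f}")  # 0.60
print("missed by LLM:", human_topics - llm_topics)  # document-specific gap
```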
Manpriya Dua, J.P. Singh, Amarda Shehu. Health Equity in AI Development and Policy: An AI-enabled Study of International, National and Intra-national AI Infrastructures. AAAI 2024 Fall Symposium on Machine Intelligence for Equitable Global Health (MI4EGH).
Abstract
This study examines how concerns related to equity in AI for health are reflected at the international, national, and sub-national levels. Utilizing unsupervised learning over corpora of published AI policy documents, together with graph structurization and analysis, the research identifies and visualizes the presence and variation of these concerns across different geopolitical contexts. The findings reveal notable differences in how these issues are prioritized and addressed, highlighting the influence of local policies and cultural factors. The study underscores the importance of tailored approaches to AI governance in healthcare, advocating for increased global collaboration and knowledge sharing to ensure equitable and ethical AI deployment. By providing a comprehensive analysis of policy documents, this research contributes to a deeper understanding of the global landscape of AI in health, potentially offering insights for policymakers and stakeholders.
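As a toy illustration of comparing topic prevalence across levels of governance, in the spirit of the cross-level comparison described above, the following sketch cross-tabulates invented topic mentions with pandas; the data and labels are placeholders.

```python
# Hypothetical counts of health-equity topic mentions in AI policy
# documents, broken out by level of governance (invented data).
import pandas as pd

mentions = pd.DataFrame({
    "level": ["international", "national", "national", "sub-national",
              "international", "sub-national"],
    "topic": ["access", "access", "bias", "access", "bias", "bias"],
})

# Cross-tabulate topic prevalence by jurisdiction level.
print(pd.crosstab(mentions["level"], mentions["topic"]))
```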
Caroline Wesson. 2024. “Three Essays on Economic Clusters and Their Determinants, Structures, and Transformations.” Ph.D. dissertation in Political Science. Schar School of Policy and Government, George Mason University.
Caroline Wesson. 2024. Tech Hubs and the AI Industry in Africa: What Drives Them? Global Perspectives 5 (1): 117231. https://doi.org/10.1525/gp.2024.117231.
Abstract
Tech hubs—usually associated with San Francisco, London, Seoul, and Bangalore—are physical spaces that house activities that result in innovation, scientific advancement, and the creation of start-ups. Yet over 650 tech hubs now exist on the African continent, and many have a focus on artificial intelligence. The rapid emergence of tech hubs in Africa presents a puzzle, as traditional notions about where tech hubs emerge are challenged by this reality. This article explores how technical expertise and foreign direct investment contribute to the rising number of tech hubs in a country. Regression analysis supplemented with case study analyses of Ghana, Tanzania, and South Africa shows that technical expertise and foreign direct investment are correlated with the number of tech hubs in an African country. This regression analysis focuses on the artificial intelligence industry, which has been highlighted as particularly important by national governments and hubs within the region. African countries are not typically thought of as global leaders in technology, yet the proliferation of tech hubs presents a narrative of a region fostering an exciting, innovative ecosystem.
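Below is a hedged sketch of the kind of regression the abstract describes, modeling tech-hub counts on technical expertise and foreign direct investment with ordinary least squares via statsmodels. The variable names and figures are illustrative placeholders, not the article’s dataset or specification.

```python
# Toy OLS regression: tech-hub counts vs. expertise and FDI (invented data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "tech_hubs": [25, 10, 40, 6, 15, 8, 30, 12],
    "expertise": [3.1, 1.8, 4.0, 1.2, 2.5, 1.6, 3.4, 2.0],  # e.g., STEM-graduate index
    "fdi":       [2.4, 0.9, 3.5, 0.5, 1.7, 0.8, 2.9, 1.1],  # inward FDI, $bn
})

# Are expertise and FDI correlated with the number of hubs?
model = smf.ols("tech_hubs ~ expertise + fdi", data=df).fit()
print(model.summary())
```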
Trending News and Reports:
AI Cyber Defense: How to Spot AI Cyber Attacks, Tech.co, (March 3rd, 2025)
“Security concerns and data breaches are a perennial problem on the internet. Last year, a mega breach exposed 26 billion records, and in 2025, massive data leaks are continuing to plague businesses and consumers alike. Now, though, advances in generative artificial intelligence have added a new dimension to cybersecurity concerns. This might look like a scam that only AI makes possible, like a fake voice or video call, or it could be a supercharged phishing attempt. Often, a company’s AI tools themselves can be a target: Last year, 77% of businesses reported a breach to their AI systems.
Here, we’ll explain all the top ways that AI and security threats can tangle with each other, along with all the top tips for getting away unscathed.”
How unchecked AI could trigger a nuclear war, Brookings, (February 28th, 2025)
“This article discusses the potential dangers of integrating artificial intelligence (AI) into nuclear command and control systems without adequate oversight. It highlights concerns that AI-driven automation could accelerate decision-making processes, increasing the risk of miscalculations or unintended escalations. It also emphasizes that, unlike humans, AI may lack the necessary thoughtfulness and compassion when facing critical decisions, potentially leading to catastrophic outcomes.”
CERTAIN drives ethical AI compliance in Europe, AI News, (February 26th, 2025)
“EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act. According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges.”
Collaborating to advance research and innovation on essential chips for AI, MIT News, (February 2025)
“MIT and GlobalFoundries (GF), a leading manufacturer of essential semiconductors, have announced a new research agreement to jointly pursue advancements and innovations for enhancing the performance and efficiency of critical semiconductor technologies. The collaboration will be led by MIT’s Microsystems Technology Laboratories (MTL) and GF’s research and development team, GF Labs.
“The collaboration between MIT MTL and GF exemplifies the power of academia-industry cooperation in tackling the most pressing challenges in semiconductor research,” says Tomás Palacios, MTL director and the Clarence J. LeBel Professor of Electrical Engineering and Computer Science. Palacios will serve as the MIT faculty lead for this research initiative.”
How Might the United States Engage with China on AI Security Without Diffusing Technology?, RAND, (January 30th, 2025)
“Given the transnational risks posed by AI, the safety of AI systems, wherever they are developed and deployed, is of concern to the United States. Since China develops and deploys some of the world's most advanced AI systems, engagement with this U.S. competitor is especially important.
The U.S. AI Safety Institute (AISI)—a new government body dedicated to promoting the science and technology of AI safety—is pursuing a strategy that includes the creation of a global network of similar institutions to ensure AI safety best practices are “globally adopted to the greatest extent possible.”
What DeepSeek Revealed About the Future of U.S.-China Competition, Foreign Policy, (February 3rd, 2025)
“DeepSeek’s extraordinary success has sparked fears in the U.S. national security community that the United States’ most advanced AI products may no longer be able to compete against cheaper Chinese alternatives. If that fear bears out, China would be better equipped to spread models that undermine free speech and censor inconvenient truths that threaten its leaders’ political goals, on topics such as Tiananmen Square and Taiwan. As these systems grow more powerful, they have the potential to redraw global power in ways we’ve scarcely begun to imagine. Whichever country builds the best and most widely used models will reap the rewards for its economy, national security, and global influence.”
At the Paris AI Summit, Europe Charts Its Course, RAND, (February 28th, 2025)
“The recent Paris AI Action Summit laid bare an inconvenient truth for those who dismiss Europe as merely AI's regulatory heavyweight. As U.S. Vice President JD Vance and Chinese Vice Premier Ding Xuexiang convened with leaders from over 40 countries, they encountered a European AI landscape that defied its reputation as a monolith. While Brussels charts its frameworks and infrastructure initiatives, national governments are forging ahead with strategic investments and partnerships. The summit's location in France was no coincidence. It showcased Europe's emerging role not just as a regulatory pioneer, but as an innovation laboratory where AI models are being tested and refined for practical application and diffusion.”
New Government Policy Shows Japan Favors a Light Touch for AI Regulation, CSIS, (February 25th, 2025)
“Japan, like its U.S. and EU allies, is hitting the brakes on the race to regulate AI. On February 4, 2025, the Japanese government’s Cabinet Office released the interim report of its AI Policy Study Group (henceforth “the Interim Report,” or “the Report”), which outlined a very different vision for AI regulation than the country’s two reports from the first half of the previous year. This CSIS white paper outlines the direction of Japan’s AI regulatory approach in 2025, based on the contents of the Interim Report, while also incorporating Japan’s response to the so-called DeepSeek Shock. A summary of the Interim Report is provided in the Appendix for reference.”