September AI Strategies Newsletter
AI Infrastructures and Their Consequences in Global Contexts
This project examines how cultural values and institutional priorities shape artificial intelligence infrastructures in both national and global contexts, in order to better understand how these comparative AI contexts affect security. Learn more at our website.
The GMU AI Strategies team’s recent publications:
Alexander S. Choi, Syeda Sabrina Akter, J.P. Singh, Antonios Anastasopoulos. “The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?”, In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, December 2024.
Abstract:
Large Language Models (LLMs) have shown capabilities close to human performance in various analytical tasks, leading researchers to use them for time- and labor-intensive analyses. However, their capability to handle highly specialized and open-ended tasks in domains like policy studies remains in question. This paper investigates the efficiency and accuracy of LLMs in specialized tasks through a structured user study focusing on Human-LLM partnership. The study, conducted in two stages—Topic Discovery and Topic Assignment—integrates LLMs with expert annotators to observe the impact of LLM suggestions on what is usually human-only analysis. Results indicate that LLM-generated topic lists overlap significantly with human-generated topic lists, though they occasionally miss document-specific topics. LLM suggestions significantly improve task completion speed, but they also introduce anchoring bias, potentially affecting the depth and nuance of the analysis and raising a critical question about the trade-off between increased efficiency and the risk of biased analysis.
J.P. Singh, Amarda Shehu, Manpriya Dua, Caroline Wesson. Entangled Narratives: Insights from Social and Computer Sciences on National Artificial Intelligence Infrastructures. International Studies Quarterly. Forthcoming.
Abstract:
How do countries narrate their values and priorities in artificial intelligence infrastructures in comparative national and global contexts? This paper analyzes the policies governing national and regional artificial intelligence infrastructures to advance an understanding of ‘entangled narratives’ in global affairs. It does so by utilizing artificial intelligence techniques that assist with generalizability and model building without sacrificing granularity. In particular, the machine learning and natural language processing big data models used alongside some process-tracing demonstrate the ways artificial intelligence infrastructural plans diverge, cluster, and transform along several topical dimensions in comparative contexts. The paper’s entangled narrative approach adds to IR theorizing about infrastructural narratives and technological diffusion. We provide patterned and granular results at various levels, which challenge and refine existing theories that attribute differences in infrastructures and technological adoption to geopolitical competition and imitation, top-down or linear international diffusion effects, and differences in political systems.
Manpriya Dua, J.P. Singh, Amarda Shehu. Health Equity in AI Development and Policy: An AI-enabled Study of International, National and Intra-national AI Infrastructures. AAAI 2024 Fall Symposium on Machine Intelligence for Equitable Global Health (MI4EGH).
Trending News:
The Precautionary Principle, Safety Regulation, and AI: This Time, It Really Is Different - American Enterprise Institute (September 4th, 2024)
“Generative pretrained transformers (GPTs)—such as large language models like ChatGPT, Claude, and Llama—come from a different computing paradigm than do traditional “big data” artificial intelligence models.
Traditional risk-management frameworks developed from the precautionary principle to address the risks of big data AI models (which current AI regulations are based on) are not well suited to manage GPT risks.
Using case-based regulation for specific applications rather than generic, overarching regulation is likely a more effective way to manage GPT AI risks.”
Ethical Use of AI: Navigating Copyright Challenges - GLOBSEC, Center for Democracy and Resilience (September 19th, 2024)
“Artificial intelligence (AI) is transforming creativity and innovation, but it also poses new challenges for intellectual property (IP) rights. As AI technologies increasingly use existing creative works for training, legal battles over copyrights and authorship are becoming more common.”
If-Then Commitments for AI Risk Reduction - Carnegie Endowment for International Peace (September 13th, 2024)
“If-then commitments are an emerging framework for preparing for risks from AI without unnecessarily slowing the development of new technology. The more attention and interest there is in these commitments, the faster a mature framework can progress.
This piece explains the key ideas behind if-then commitments via a detailed walkthrough of a particular if-then commitment, pertaining to the potential ability of an AI model to walk a novice through constructing a chemical or biological weapon of mass destruction. It then discusses some limitations of if-then commitments and closes with an outline of how different actors—including governments and companies—can contribute to the path toward a robust, enforceable system of if-then commitments.”
International Norms Development and AI in the Military Domain - Center for International Governance Innovation (September 5th, 2024)
“This paper argues that it is important to lay the normative foundations for future norms developments now by going beyond conceptual issues while delving into technical and operational specifics. Simultaneously, it is essential to start creating an institutionalized regime of norms, rules and regulations to guide state behaviour, focusing on the entire production-proliferation-deployment-employment chain. These elements will contribute to a robust governance framework for AI in the military domain.”
The Macroeconomic Effects of Artificial Intelligence - Congressional Research Service (September 12th, 2024)
“While various forms of artificial intelligence (AI) technologies have existed and been used for decades, the recent popularization of AI products such as ChatGPT has spurred further research and debate about how these technologies could impact the economy. AI potentially has wide-ranging uses in the production of goods and services, which could affect the macroeconomy through the labor market, productivity growth, and economic growth, to name a few. However, whether AI will prove to be as economically transformational as some suggest remains to be seen and will depend on a number of complex factors. Members of Congress are increasingly interested in AI, including its economic impacts. For example, the House announced a bipartisan task force on AI in February 2024.”