June AI Strategies Newsletter
A newsletter of the latest news and research in culture, political economy, and artificial intelligence from the GMU AiStrategies research team, a Minerva-grant-funded project.
AI Infrastructures and Their Consequences in Global Contexts
This project examines how cultural values and institutional priorities shape artificial intelligence infrastructures in both national and global contexts, in order to better understand the security implications of comparative AI contexts. Learn more at our website.
June Team Member Blog
J.P. Singh, Principal Investigator
Blog: Prepping and Writing for the Minerva Grant
When the Department of Defense’s Minerva Research Initiative issued its Funding Opportunity Announcement last summer, its theme of “Topic 7: Social and Cultural Implications of Artificial Intelligence” resonated with many of us at George Mason University. We are a group of transdisciplinary researchers affiliated with George Mason’s Center for Advancing Human-Machine Partnerships (CAHMP), spanning engineering, the natural and social sciences, and the humanities. Puzzling about the social and cultural implications of AI involves thinking about human-machine partnerships, and our team was excited to move forward with crafting a Minerva proposal.
Minerva funded 17 proposals this year, and we were delighted to learn that George Mason received two of these grants. The proposals went through two stages of review: in the first stage, we submitted a three-pager in June and were then selected to submit the full proposal at the end of September.
I would like to use this blog to share three takeaways from our team on writing our Minerva grant. The short version: pooling ideas from an interdisciplinary team, together with frequent brainstorming, writing, and re-writing, was, we believe, important to being awarded the grant.
Let the puzzle guide you on the required transdisciplinarity:
My own research puzzles about the ways cultural values, economic incentives, and institutional priorities influence technology infrastructures. Amarda Shehu and Antonios Anastasopoulos, the two computer scientists on our team, suggested ways we could use natural language processing and machine learning tools to build models that analyze how a country’s values and institutional priorities are reflected in evolving artificial intelligence infrastructures. Artificial intelligence infrastructures raise a host of ethical questions for any society, not just in what values are inscribed in them but, more importantly, in their broader ethical implications. Jesse Kirkpatrick, trained in the ethics of AI systems, brought this expertise to our team. Minerva FOAs are clear that each grant proposal must address security implications, and Mike Hunzeker brought historical and theoretical knowledge to thinking about the implications of technologies for security.
We were also able to bring on board an excellent group of graduate students: Caroline Wesson is working on technology hubs and clusters as part of her dissertation research; Manpriya Dua’s dissertation examines machine learning models; and Daniel Ofori-Addo’s dissertation looks at the determinants of ICT infrastructures in Sub-Saharan Africa. After the grant was awarded, Webby Burgess joined the team, bringing a background in philosophy and good governance.
Create time for frequent brainstorming and to build transdisciplinary understandings:
Teams across disciplines need time to understand each other’s concepts, methods, and jargon. Our proposal, for example, required translating social science analyses into machine learning models, which took mutual understanding and multiple back-and-forth iterations on drafts. “Description,” for example, means something quite different in computer science than it does in the social sciences. The task of translating for each other and brainstorming about the proposal’s outline was one of the most intellectually exciting parts of our proposal writing.
Supplementary materials require time and resources:
Polished proposals take a great deal of writing and re-writing of the proposal narrative, but enormous time is also taken up with supplementary documents and negotiations across units. Expect to spend time on all these activities. Our team included experienced proposal writers, and everyone understood that while the narrative is the core of the proposal, other documents must also be in place and can take time to generate. Good relationships with grants officers at the department, school, and university levels are very helpful.
All in all, our team spent four good months last summer and fall writing this proposal. Since receiving our award, the team has begun its research and will share monthly updates on our progress via this newsletter and our website.
Trending News:
The EU AI Act Will Have Global Impact but a Limited Brussels Effect - Brookings Institution (June 8th, 2022)
“The European Union’s (EU) AI Act (AIA) aspires to establish the first comprehensive regulatory scheme for artificial intelligence, but its impact will not stop at the EU’s borders. In fact, some EU policymakers believe it is a critical goal of the AIA to set a worldwide standard, so much so that some refer to a race to regulate AI.”
Defence Artificial Intelligence Strategy - United Kingdom Ministry of Defence (June 15th, 2022)
“This strategy sets out how we will adopt and exploit AI at pace and scale, transforming Defence into an ‘AI ready’ organisation and delivering cutting-edge capability; how we will build stronger partnerships with the UK’s AI industry; and how we will collaborate internationally to shape global AI developments to promote security, stability and democratic values. It forms a key element of the National AI Strategy and reinforces Defence’s place at the heart of Government’s drive for strategic advantage through science & technology.”
Artificial Intelligence White Paper (2022) - The China Academy of Information and Communications Technology, Translation from the Center for Security and Emerging Technology (CSET) (June 2022)
“This white paper focuses on providing a description from the dimensions of AI policy, technology, applications, and governance. At the policy level, the strategic position of AI has been continuously strengthened in China and abroad to promote the release of AI dividends. At the technical and application level, AI technology represented by deep learning has developed rapidly, and new technologies have begun to be explored and applied; engineering capabilities (工程化能力) have been continuously enhanced and continue to be deeply applied in fields such as healthcare, manufacturing, and autonomous driving; and trustworthy AI technology has attracted widespread attention from society. At the same time, governance-level work has also received a high level of attention from the world. The regulatory process continues to accelerate in various countries, and industrial practices based on trustworthy AI continue to deepen.”
U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway - United States DoD Responsible AI Working Council (June 2022)
“The Responsible AI (RAI) Strategy and Implementation (S&I) Pathway illuminates our path forward by defining and communicating our framework for harnessing AI. It helps to eliminate uncertainty and hesitancy — and enables us to move faster. Integrating ethics from the start also empowers the DoD to maintain the trust of our allies and coalition partners as we work alongside them to promote democratic norms and international standards.”