August AI Strategies Newsletter
AI Infrastructures and Their Consequences in Global Contexts
This project looks at how cultural values and institutional priorities shape artificial intelligence infrastructures in national and global contexts, in order to better understand the security implications of these comparative AI contexts. Learn more at our website.
Team Member Blog
J.P. Singh, Principal Investigator
Blog: Prepping and Writing for the Minerva Grant (Written June 2022)
When the Department of Defense’s Minerva Research Initiative issued its Funding Opportunity Announcement last summer, its theme of “Topic 7: Social and Cultural Implications of Artificial Intelligence” resonated with many of us at George Mason University. We are a group of transdisciplinary researchers affiliated with George Mason’s Center for Advancing Human-Machine Partnerships (CAHMP), spanning engineering, the natural and social sciences, and the humanities. Puzzling about the social and cultural implications involves thinking about human-machine partnerships, and our team was excited to move forward with crafting a Minerva proposal.
Minerva funded 17 proposals this year, and we were delighted to find out that George Mason received two of these grants. The proposals went through two stages of review: in the first stage, we submitted a three-pager in June and were then selected to submit the full proposal at the end of September.
I would like to use this blog to share three takeaways from our team on writing our Minerva grant. Short version: pooling ideas from an interdisciplinary team, along with frequent brainstorming, writing, and re-writing, was, we believe, important for being awarded the Minerva grant.
Let the puzzle guide you on the required transdisciplinarity:
My own research puzzles about the ways cultural values, economic incentives, and institutional priorities influence technology infrastructures. Amarda Shehu and Antonios Anastasopoulos, the two computer scientists on our team, suggested ways in which we could use natural language processing and machine learning tools to build models analyzing how a country’s values and institutional priorities are reflected in evolving artificial intelligence infrastructures. Artificial intelligence infrastructures raise a host of ethical questions for any society, not just in what values are inscribed in them but, more importantly, in the implications those values carry. Jesse Kirkpatrick, trained in the ethics of AI systems, brought this expertise to our team. Minerva FOAs are clear that every grant proposal must address security implications, and Mike Hunzeker brought historical and theoretical knowledge to thinking about the implications of technologies for security.
We were also able to bring on board an excellent group of graduate students: Caroline Wesson is working on technology hubs and clusters as part of her dissertation research. Manpriya Dua’s dissertation examines machine learning models. Daniel Ofori-Addo’s dissertation looks at the determinants of ICT infrastructures in Sub-Saharan Africa. After the grant was awarded, Webby Burgess joined the team bringing a background in philosophy and good governance.
Create time for frequent brainstorming and to build transdisciplinary understandings:
Teams across disciplines need time to understand each other’s concepts, methods, and jargon. Our proposal, for example, required translating social science analyses into machine learning models, which took some mutual understanding and multiple back-and-forth iterations on drafts. “Description,” for example, means something quite different in computer science than it does in the social sciences. The task of translating for each other and brainstorming about the proposal’s outline was one of the most intellectually exciting parts of our proposal writing.
Supplementary materials require time and resources:
Polished proposals take a great deal of writing and re-writing of the proposal narrative, but enormous time is also taken up with supplementary documents and negotiations across units. Expect to spend time on all these activities. Our team included experienced proposal writers, and everyone understood that while the narrative is the core of the proposal, other documents must also be in place and can take time to generate. Good relationships with grants officers at the department, school, and university levels are very helpful.
All in all, our team spent four good months last summer and fall writing this proposal. Since receiving our award, the team has started work on our research and will be sharing monthly updates on our progress via this newsletter and our website.
Trending News:
AI Regulation: Where do China, the EU, and the U.S. Stand Today? - The National Law Review (August 3rd, 2022)
“Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.”
Artificial Intelligence and Democratic Values: Next Steps for the United States - Council on Foreign Relations (August 22nd, 2022)
“More than fifty years after a research group at Dartmouth University launched work on a new field called “Artificial Intelligence,” the United States still lacks a national strategy on artificial intelligence (AI) policy. The growing urgency of this endeavor is made clear by the rapid progress of both U.S. allies and adversaries.
Europe is moving forward with two initiatives of far-reaching consequence. The EU Artificial Intelligence Act will establish a comprehensive, risk-based approach for the regulation of AI when it is adopted in 2023. Many anticipate that the EU AI Act will extend the “Brussels Effect” across the AI sector as the earlier European data privacy law, the General Data Protection Regulation, did for much of the tech industry.”
Managing Expectations: Explainable A.I. and its Military Implications - Observer Research Foundation (August 24th, 2022)
“The rapid uptake of artificial intelligence (AI) in the military in the past couple decades has been accompanied by a slow but gradual build-up in attempts to understand how these AI systems work to achieve better results in military operations. The idea behind what is called ‘eXplainable AI’ (XAI), and the technologies driving it, are a manifestation of this trend. The question, however, is if XAI in its current form is the solution that it is expected to be. Modelled as a scoping exercise, this brief seeks to cover each of these aspects and explore the wider implications of the use of XAI in the military, including its development, deployment, and governance.”
UK Government Sets Out Proposals for a New AI Rulebook - Society for Human Resource Management (August 25th, 2022)
“On July 18, the U.K. government published a policy paper titled "Establishing a pro-innovation approach to regulating AI" (the "Paper"). Instead of giving responsibility for AI governance to a central national regulatory body, as the EU is planning to do through its draft AI Act, the government's proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings to ensure that the U.K.'s AI regulations can keep pace with change and don't stand in the way of innovation.
On the same date, the U.K. Government published its AI Action Plan, summarizing the actions taken and planned to be taken to deliver the U.K.'s National AI Strategy (the "AI Strategy").”