September AI Strategies Newsletter
AI Infrastructures and Their Consequences in Global Contexts
This project examines how cultural values and institutional priorities shape artificial intelligence infrastructures in both national and global contexts, in order to better understand the security implications of these comparative AI contexts. Learn more at our website.
Transdisciplinary Research: Process and Product (but mostly process)
Jesse Kirkpatrick, Co-Investigator
Like many in our nation’s capital, our blog took a vacation in August. The last several weeks have been gratifying and productive for the AI Infrastructures team. Before turning to this month’s theme, I want to provide a brief snapshot. Many of us began our fall teaching, some continued their graduate education, and others embarked on international travel for this project. All these activities dovetail with our project’s overall objective of analyzing the determinants of AI infrastructures. This is particularly true of our team’s first conference presentation, “The Cultural, Economic, and Institutional Determinants of AI Infrastructures and their Consequences in Global Contexts,” at the 2022 Minerva Trust and Influence Program Review. This was a whole-team effort, and J.P., Amarda, and Caroline presented preliminary results on the international diffusion and spillover effects of AI infrastructures.
I could fill several blog posts’ worth of key takeaways from this experience, but I won’t. Be on the lookout instead for future articles, presentations, blog posts, social media updates, and team engagement with popular media; the best way to stay up to date is by subscribing to our newsletter. This month I want to extend a line of thought that our Principal Investigator, J.P. Singh, addressed in a previous post: the theme of transdisciplinary research. More specifically, I’ll be focusing on the importance of both a project’s process (the research process) and its product (the research outputs), paying particular attention to process. I’ll post more fully on product, and our actual products, in the future.
What do I mean by process? Nothing fancy; I use the term here in keeping with everyday colloquial usage—a set of steps used to achieve an objective or goal. With respect to a research project, and AI Infrastructures in particular, I mean the whole kit—from tail to tip, boot to heel—the time spanning from project ideation to completion.
Before I share my reflections on the topic, I have a few things to note at the outset. I’m borrowing this phrasing from Stanford’s David Relman, a leading expert in infectious diseases, microbiology, and biosecurity, with whom I had the privilege of working for several years on a biosecurity project. When asked at a workshop to offer some reflections on our project, he noted that the process was just as important as the product. Like AI Infrastructures, our biosecurity team was transdisciplinary, working through the kinks and bumps and benefits that such research brings. It was rewarding, and tiring, and fun. This post isn’t about me, but it draws on my experience and reflections as they relate to this project. Full disclaimer: I approach this post, and this project, as a drank-the-Kool-Aid transdisciplinary research enthusiast.
Three reflections on process
While you can’t Moneyball your way to successful process and product, you can and should be intentional about formulating the team. This means paying particular attention to disciplinary expertise, norms, and biases; team interactions and ways of working, including basic research practices, writing, and authorship; and differing uses of terminology, particularly as terms are understood within and across disciplines. All of these are moving targets that require flexibility; they are not preordained or static. J.P. discussed in July’s post the composition of our team, each member’s background, and the expertise they bring. It’s clear that we are more than the sum of our parts, as any effective team must be, whether it’s badminton, bowling, or bridge. How and why our team came together was by design, aimed at leveraging our collective expertise; this collective expertise is in turn aimed at creating research outputs that are stronger because of our transdisciplinarity.
Meaningful team interactions are critical to the research process as a whole. The substance and tone of our team dynamics were set early in our process; one example is our frequent all-hands meetings, which are modeled on collaborative and open inquiry. I can’t count the number of times I’ve said, “I hadn’t thought of that,” when what I usually mean is, “I wouldn’t have thought of that.” Being on a team from which one is guaranteed to learn during nearly every interaction is intellectually gratifying and reminds me why I got into this business in the first place. It also develops a team ethos and process that has sharpened and improved such areas as data management, use of software, research protocols, team mentoring, writing for publication, and more. The bottom line is that the process of team interactions can lead to stronger research products.
The right process ensures you don’t just haggle over price; you can check the whole inventory. If you caught our presentation at the Minerva Review, and as you’ll see in future team products, we strive not only to leverage our individual and collective expertise, but also to push each other to test our disciplinary norms, problematic biases, and assumptions. These may involve a discipline’s norms regarding the use of research tools (e.g., transcription software) or its norms and assumptions around team research and co-authorship (e.g., both remain rare exceptions in philosophy). My point is that because we’ve been intentional about how and why our team was put together, and because we’ve established an ethos of meaningful open inquiry, we are well positioned to draw from the disciplinary best practices and norms that serve the creation of the best research outputs. We can both haggle over price (disagree and determine which tool or method should be used to accomplish an objective) and check the whole inventory (ask why this objective, tool, or method is integral to the project at all). At the risk of mixing my metaphors, our process helps ensure not only that we’re correctly choosing a joystick over a keyboard, but that we’re playing the right game on the right console at all.
I’ve been thinking a fair bit about how process can be used to create stronger research in AI, particularly as it relates to ethics and security. And although I didn’t address it here, I’m keen on exploring how different processes can be used as testbeds for Responsible AI. If this or anything in the post grabs you, feel free to comment or reach out directly.
Trending News:
Shenzhen aims to be China’s artificial intelligence hub with special guideline to boost development and secure privacy - South China Morning Post (September 7th, 2022)
“Shenzhen has released the country’s first-ever local regulation dedicated to boosting the development of artificial intelligence (AI), as the city steps up efforts to supercharge its hi-tech sector.
The regulation, which takes effect in November, seeks to promote the AI industry by encouraging government agencies to be the early adopters of related technologies and enhancing financial support to AI research in the city. In particular, the Shenzhen government will set up public data sharing rules and open certain types of data to businesses and institutions working in the industry.”
Regulating Artificial Intelligence – Is Global Consensus Possible? - Forbes (September 9th, 2022)
“Artificial Intelligence has become commonplace in the lives of billions of people globally. Research shows that 56% of companies have adopted AI in at least one function, especially in emerging nations. That’s six percent more than in 2020. AI is used in everything from optimizing service operations through to recruiting talent. It can capture biometric data and it already helps in medical applications, judicial systems, and finance, thus making key decisions in people’s lives.
But one huge challenge remains to regulate its use. So, is a global consensus possible or is a fragmented regulatory landscape inevitable?”
Ethical AI and the future of diversity in public policy - The Hill, Reps. Joyce Beatty (D-Ohio) and Yvette D. Clarke (D-N.Y.), Opinion Contributors (August 24th, 2022)
“The key to successfully deploying ethical AI for DEI efforts is to deploy task forces, creators and champions who level the playing field. The House Financial Services Subcommittee on Diversity and Inclusion works to envision a more representative House of Representatives. The Algorithmic Accountability Act is legislation that will bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives.”
Reconciling the AI Value Chain with the EU’s Artificial Intelligence Act - Centre for European Policy Studies (September 30th, 2022)
“The EU Artificial Intelligence Act (AI Act), proposed by the European Commission in April 2021, is an ambitious and welcome attempt to develop rules for artificial intelligence, and to mitigate its risks. The current text, however, is based on a linear view of the AI value chain, in which one entity places a given AI system on the market and is made accountable for complying with the regulation whenever the system is considered ‘high risk’. In reality, the AI value chain can present itself in a wide variety of configurations. In this paper, in view of the many limitations of the Act, we propose a typology of the AI value chain featuring seven distinct scenarios, and discuss the possible treatment of each one under the AI Act. Moreover, we consider the specific case of general-purpose AI (GPAI) models and their possible inclusion in the scope of the AI Act, and offer six policy recommendations.”