Collated and Edited by: Moritz von Knebel and Markus Anderljung

More and more people are interested in conducting research on open questions in AI governance. At the same time, many AI governance researchers find themselves with more research ideas than they have time to explore. We hope to address both needs with this informal collection of 78 AI governance research ideas.
About the collection

There are other related documents out there, e.g. this 2018 research agenda, this list of research ideas from 2021, the AI subsection of this 2021 research agenda, this list of potential topics for academic theses, and a more recent collection of ideas from 2024. This list differs in (i) being more recent and (ii) collating research ideas rather than bare questions: each entry pairs a research question with hypotheses about how it could be tackled.

By: Markus Anderljung

The Labour government has committed to introducing legislative requirements on “the developers of the most powerful AI systems,” such as OpenAI, Google DeepMind, Anthropic, xAI, and Meta[1]. These systems are often referred to as “frontier AI”: the most capital-intensive, capable, and general AI models, which currently cost 10–100 million dollars to train.
Frontier AI systems are rapidly improving. Their continued development will have wide-ranging societal effects, creating new opportunities for economic growth but also serious new risks. With these new legislative requirements, the government will aim to prevent the deployment of systems that pose unacceptable risks to public safety.

Cullen O’Keefe, Jade Leung, Markus Anderljung[1] [2]
Summary

Standard-setting is often an important component of technology safety regulation. However, we suspect that existing standard-setting infrastructure won’t by default adequately address transformative AI (TAI) safety issues. We are therefore concerned that, on our default trajectory, policymakers will overlook good TAI safety best practices because efforts to identify, refine, recommend, and legitimize those practices will be absent or too weak for them to be incorporated into regulation in time. Given this, we suspect the TAI safety and governance communities should invest in the capacity to influence technical standard-setting for advanced AI systems. There is some urgency to these investments, as standard-setting moves on institutional timescales. Concrete suggestions include deepening engagement with relevant standard-setting organizations (SSOs) and AI regulation, translating emerging TAI safety best practices into technical safety standards, and investigating what an ideal SSO for TAI safety would look like.