AI is looking like the most important technology of this century. Decisions made in the next few years – by governments, companies, and international bodies – will shape its impact. Many of these decisions will be made quickly, under uncertainty, and with incomplete information. Better research and analysis can meaningfully improve the quality of these decisions. Getting AI governance right isn't sufficient for good outcomes, but getting it badly wrong could foreclose many futures. I try to make that research and analysis happen.
Hello!
I'm Markus Anderljung (you can pronounce it "ander-young"), Director of Policy and Research at the Centre for the Governance of AI (GovAI).
We produce research and advice to help governments, AI companies, and other stakeholders ensure the safe and beneficial development of transformative AI systems. I'm also an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory's Expert Group on AI Futures. Previously, I served as one of the Vice-Chairs drafting the EU's Code of Practice for General Purpose AI, and I spent time seconded to the UK Cabinet Office as a Senior AI Policy Specialist advising on the UK's regulatory approach to AI.

My work examines the impacts and governance of the most capable AI systems available today, and how society can prepare for even more capable systems over the coming decades. I'm currently thinking about what regulation should be imposed on frontier AI systems, how to assess risks produced by AI models, and computational resources as a tool for AI governance.

I'm based in London, UK. I recently moved from San Francisco, California, and grew up in Stockholm, Sweden.