The Future of Life Institute (FLI) is looking for an AI Policy Advocate. The role involves careful analysis of how changes in the Brussels policy landscape affect the governance of AI, and drawing up relevant policy proposals. This job posting presents a unique opportunity to join a small, dynamic and growing non-profit.
The overarching goal of FLI in Europe is to promote the regulation of autonomous weapons and to mitigate the (existential) risk of increasingly powerful artificial intelligence. The rationale for this work has been set out by our external advisor Professor Stuart Russell in his first BBC Reith Lecture.
Type of employment: Full-time
We want to find the best people to hire and don’t want to be biased in our recruitment. We strongly encourage people of every colour, orientation, age, gender, origin, and ability to apply. If you are passionate about FLI’s mission and think you have what it takes to be successful in this role even if you don’t check all the boxes, please apply. We’d appreciate the opportunity to consider your application. Our intention when hiring is to optimise for talent, values and potential rather than experience.
Please apply by filling out and submitting an application using the "Apply Now" button. The deadline is Thursday the 30th of March, 2023.
After an initial round of screening based on your CV and application form, successful candidates will be asked to prepare a policy two-pager ahead of at least two rounds of interviews. We expect to make a job offer in late April.
If you have any questions about this role, please reach out to FLI's Director of Policy at jobsadmin@futureoflife.org.
FLI is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grant making, educational outreach, and policy engagement.
The work of our policy team focuses on the risks and benefits of AI. FLI created one of the earliest sets of AI governance principles – the Asilomar AI Principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary-General's Digital Cooperation Roadmap. FLI also recently announced a $25M multi-year grant program aimed at reducing existential risk. The first program under that initiative, the AI Existential Safety Program, launched at the end of 2021.
In Europe, FLI has two key priorities: i) mitigate the (existential) risk of increasingly powerful artificial intelligence and ii) regulate lethal autonomous weapons. You can find some of our EU work here: position papers on the liability directive and on the EU AI Act, a dedicated website to provide easily accessible information on the AI Act, and a paper about manipulation and the AI Act.
Our work has been covered in various media outlets in Europe: Wired (UK), Siècle Digital (France), Politico (EU), ScienceBusiness (EU), NRC Handelsblad (Netherlands), and Frankfurter Allgemeine, Der Spiegel, and Tagesspiegel (Germany).