
Research Engineer, Frontier Safety Risk Assessment

Google DeepMind London, UK; New York City, New York, US; San Francisco, California, US Full-time GenAI Posted Jan 26, 2026

What it’s like to work at Google DeepMind

AI Research Lab · London, UK

Employee Rating: 4.2/5 · Work-Life Balance: 4/5 · Open Roles: 70
Culture tags: deep-work, learning, eng-driven, diverse, ethical-ai, equity, flex-hours

What employees love

  • World-class AI research environment with brilliant, collaborative colleagues
  • Google-level compensation and benefits with genuine work-life balance

What could be better

  • Google-scale bureaucracy — slow decision-making compared to startups
  • Large org means less individual ownership and impact on direction

About the Role

Snapshot

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Our team identifies, assesses, and mitigates potential catastrophic risks from current and future AI systems. As a member of technical staff, you will design, implement, and empirically validate approaches to assessing and managing catastrophic risk from current and future frontier AI systems. At present, these risks range from loss of control of advanced AI systems or of automated ML R&D to misuse of AI for widespread CBRN or cyber harm.

About Us

The Risk Assessment team measures and assesses the possible risks posed by frontier systems, ensuring that GDM understands the capabilities and propensities of frontier models and that adequate mitigations are in place. We also verify that those mitigations do enough to manage the risks.

But the risks posed by frontier systems are, themselves, unclear. Forecasting the possible risk pathways is challenging, as is designing and implementing sensors that could reliably detect emerging risks before we actually have real-world examples. We focus on building decision-relevant and trustworthy evaluation systems that prioritise compute and effort on risk measurements with the highest value of information. We then need to be able to assess the extent to which proposed and implemented mitigations actually cover the identified risks, and to measure how successfully they generalize to novel settings.

The Risk Assessment team is part of Frontier Safety, which is responsible for measuring and managing severe potential risks from current and next-generation frontier models. Our approach is to adaptively scale risk assessment and mitigation processes to handle the near future. We are part of GDM’s AGI Safety and Alignment Team, whose other members focus on research aimed at enabling systems further in the future to be aligned and safe. These areas include interpretability, scalable oversight, control, and incentives.

The Role

We are seeking 2 Research Engineers for the Frontier Safety Risk Assessment team within the AGI Safety and Alignment Team.

In this role, you will contribute novel research that advances our ability to measure and assess risk from frontier models.

Your work will involve complex conceptual thinking as well as engineering. You should be comfortable with research that is uncertain and under-constrained, and that does not have an achievable “right answer”. You should also be a skilled engineer, especially in Python, able to rapidly familiarise yourself with internal and external codebases. Lastly, you should be able to adapt to pragmatic constraints on compute and researcher time that require us to prioritise effort based on the value of information.

Although this job description is written for a Research Engineer, all members of this team are better thought of as members of technical staff. We expect everyone to contribute to the research as well as the engineering and to be strong in both areas.

The role will mostly depend on your general ability to assess and manage future risks rather than on specialist knowledge within the risk domains, but insofar as specialist knowledge is helpful, knowledge of the ML R&D and loss-of-control risk domains is likely the most valuable.

About You

In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • Extensive research experience with deep learning and/or foundation models (for example, though not necessarily, a PhD in machine learning).

In addition, any of the following would be an advantage:

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Select benefits include enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment and provide excellent facilities such as healthy food, an on-site gym, faith rooms, and terraces.

We are also open to relocating candidates and offer a bespoke service and immigration support to make it as easy as possible (depending on eligibility).

The US base salary range for this full-time position is between $136,000 and $245,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

