Goodfire

AI Interpretability Research Lab — Reverse-Engineering Neural Networks to Make AI Safe and Controllable

Frontier AI safety research with world-class pedigree. Choose Goodfire if you want to work on mechanistic interpretability alongside researchers from the top AI labs — but go in knowing it's early-stage with all that entails.

Founded: 2023
Headquarters: San Francisco
Employees: ~51
Valuation (2026): $1.25B
Culture Overview

What it's really like to work at Goodfire

Goodfire is a San Francisco-based AI research lab and public benefit corporation focused on mechanistic interpretability — understanding how neural networks actually work at a fundamental level. Founded in 2023 by Eric Ho, Dan Balsam, and Tom McGrath (formerly of DeepMind's interpretability team), the company has raised $209M including a $150M Series B at a $1.25B valuation in February 2026.

The team of ~51 includes researchers from OpenAI, DeepMind, Harvard, and Stanford. Anthropic's investment serves as both backing for and validation of Goodfire's mission: making AI systems understandable and controllable. As a public benefit corporation, the mission isn't just marketing — it's legally embedded in the company structure.

Estimated Ratings

Goodfire Estimated Ratings & Culture Assessment

Estimated Overall Rating: 4.0
Work-Life Balance: 3.5
(No Glassdoor reviews available — ratings are estimated.)

Goodfire has no Glassdoor reviews yet. Ratings above are estimated based on company stage, funding, and comparable AI research labs. We'll update this section when real employee reviews become available.

Culture Assessment

Working at Goodfire: Pros & Cons

What stands out

  • World-class research team — ex-OpenAI, DeepMind, Harvard, Stanford
  • Frontier AI interpretability work — making neural networks understandable
  • Public benefit corporation — mission-driven by design, not just marketing
  • $209M raised with Anthropic as an investor — strong validation and runway

What to consider

  • Very early stage (~51 people) — expect rapid change and ambiguity
  • No Glassdoor reviews yet — limited visibility into employee experience
  • Interpretability is a niche field — may limit career mobility outside the space
  • San Francisco only — no remote roles mentioned
Research & Engineering Culture

How the research team works

Research Focus

Interpretability · Mechanistic Analysis · AI Safety

Team Background

Researchers from OpenAI, DeepMind, Harvard, and Stanford. Chief Scientist Tom McGrath led interpretability research at DeepMind before co-founding Goodfire.

Company Structure

Public benefit corporation — safety mission is legally embedded. Small team of ~51 with flat structure and high individual ownership across research and engineering.

Funding & Backing

$209M total funding: $7M seed (Lightspeed), $50M Series A (Menlo + Anthropic), $150M Series B (B Capital). Valued at $1.25B as of February 2026.

Open Positions

Join the Goodfire team

... open positions.
