
Research Operations, External Artifacts

Anthropic · Remote-Friendly, United States · Full-time · Communications · Posted May 5, 2026

What it’s like to work at Anthropic

AI Safety · San Francisco / Remote

Employee Rating: 4.4 · Work-Life Balance: 3.7 · Open Roles: 428
Culture tags: ethical-ai, learning, equity, social-impact, eng-driven

What employees love

  • Mission-driven to the core — the safety focus is genuine and deeply embedded
  • Incredible autonomy and ownership, even for mid-level engineers

What could be better

  • High-intensity environment — extended work hours are common during peak periods
  • Processes still catching up to hypergrowth; some things feel ad-hoc


About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

Every time Anthropic releases a model, we publish a system card: a long-form technical document that describes the model's capabilities, safety properties, evaluation results, and the reasoning behind our deployment decisions. System cards are some of the most consequential and widely read documents we produce, and they are one way we hold ourselves publicly accountable for the safety claims we make.

We're hiring a Research Operations Specialist to help own system card production. You'll work embedded with research and safety teams through each launch, coordinating contributions from dozens of researchers, holding the schedule and the open-threads list, and making sure the document ships on time as a single, accurate, internally consistent whole. Along the way you'll do real editorial work: turning results and researcher notes into clear, honest prose and pushing back when an explanation doesn't hold together.

System cards sit within a wider family of external safety artifacts, including risk reports and Responsible Scaling Policy updates. Part of this role is keeping the system card consistent with those documents so that Anthropic's public safety story reads as one coherent account rather than several.

This role sits in Research Operations and works closely with Alignment, Safeguards, Frontier Red Team, and capabilities research. The core of the job is part project management, part translation: keeping a complex, many-author, hard-deadline document on track while making frontier safety research legible to researchers, policymakers, journalists, and the public — without sacrificing precision.

Key responsibilities:

Minimum qualifications:

Preferred qualifications:

The annual compensation range for this role is listed below.

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$260,000–$310,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Determined by the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates’ AI Usage: Learn about our policy for using AI in our application process.

Similar Roles

More at Anthropic
Communications Manager, Research
New York City, NY; San Francisco, CA; Seattle, WA
ANZ Communications Lead
Sydney, Australia
Communications Lead, Claude Code
San Francisco, CA
Engineering Editorial Lead
San Francisco, CA | New York City, NY
Internal Communications Manager, Policy
San Francisco, CA
Similar roles at other companies
Member of Technical Staff, Modeling
Cohere · London
Member of Technical Staff (AI Research Lead)
Perplexity AI · San Francisco
AI Scientist - Audio
Mistral AI · Paris
PhD GenAI Research Scientist Intern
Databricks · San Francisco, California
Data Scientist
Stripe · N/A

Frequently Asked Questions

What is the work-life balance like at Anthropic?
Anthropic has a work-life balance score of 3.7/5 based on employee reviews. This is about average for the AI/tech industry.
What is Anthropic’s culture like?
Anthropic’s culture is characterized by these values: ethical-ai, learning, equity, social-impact, and eng-driven. Based on employee reviews, the company has an overall rating of 4.4/5, and employees most often praise it as mission-driven to the core, with a safety focus that is genuine and deeply embedded.
How many open roles does Anthropic have?
Anthropic currently has 428 open roles across departments including engineering, product, sales, and more. Roles are refreshed daily from their careers page.
Is this role remote-friendly?
This role is remote-friendly and based in the United States. Check the job description above for specific location and remote work details.
Apply for this role at Anthropic →