San Francisco, CA
Anthropic is looking for a Research Engineer to join the Frontier Red Team to build and evaluate model organisms of autonomous systems and to develop defensive agents. The role focuses on AI safety research, security, and policy, with the goal of preparing for advanced AI systems. The Research Engineer will design autonomous AI systems, create evaluations, and collaborate with domain experts.
Skills: Python, LLM-based agents, autonomous systems, software engineering, reinforcement learning, self-play, multi-agent systems, robotics, hardware interfaces, cyber-physical systems, AI safety research
Benefits include competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.