Astra is Constellation’s flagship fellowship, built to accelerate AI safety research and bring new talent into the field.
AI is advancing faster than any technology in history, and we think its most concerning risks are worth preparing for. Astra helps close the gap between that pace and our preparedness by bringing exceptional people into the field and connecting them with leading mentors and research opportunities.
Applications close: October 10, 2025, 11:59pm Anywhere on Earth
Astra is a fully funded, 3-6 month, in-person program at Constellation’s Berkeley research center. Fellows advance frontier AI safety projects with guidance from expert mentors and dedicated research management and career support from Constellation’s team.
Over 80% of Astra’s first cohort now work full-time in AI safety roles at organizations such as Redwood Research, METR, Anthropic, OpenAI, Google DeepMind, the Center for AI Standards and Innovation, and the UK AI Security Institute. This round, we want to go further: place even more people into the highest-impact roles, and help fellows launch new initiatives that tackle urgent but neglected problems.
Advances policy research to clarify strategies for mitigating critical risks from powerful AI systems.
Additional mentors from leading AI policy think tanks.
Investigates AI-related security risks, including vulnerabilities, misuse, and failure modes.
Additional mentors from leading AI security institutions.
Focuses on machine learning research into core technical safety challenges, such as alignment, control, safeguards, dangerous capability evaluations, scalable oversight, and the study of complex system behaviors.
Explores conceptual and strategic questions related to potential catastrophic AI risks, aiming to generate practical approaches for steering toward better futures involving AI.
Helps build the AI safety field’s capacity by addressing talent and coordination gaps, launching new projects, and convening key decision-makers.
“Astra was a really important program for us. We first started working with Daniel Kokotajlo through Astra, and the scenario we started developing during the program eventually became AI 2027. After Astra, we also co-founded the AI Futures Project together.”
Eli Lifland & Romeo Dean
Authors of AI 2027 & Co-founders of the AI Futures Project
"I worked with METR during Astra, and joined METR’s policy team immediately after the fellowship. Participating in Astra directly led to my current role.”
“Astra was an incredibly important opportunity. I was able to work closely with my mentor Buck, who taught me a lot about doing good research. That eventually led me to my role at Redwood Research, where I now run an entire team.”
"I'm really glad I participated in Astra! Endless conversations (both with my mentor and everyone else at Constellation), ranging from theoretical alignment to frontier governance, were an invaluable source of learning, knowledge and opportunities."
Fellows publish research and present their results. Constellation provides dedicated support aimed at helping fellows with:
Fellows may also qualify for a program extension of up to 3 months, depending on performance.
We provide the resources and support needed for fellows to pursue full-time research:
We’re looking for talented people who are excited to pursue new ideas and projects that advance safe AI.
You may be a strong fit if you:
Prior AI safety experience is not required. Many of our most impactful fellows entered from adjacent fields and quickly made significant contributions. If you're interested but not sure you meet every qualification, we’d still encourage you to apply.
Applications for Astra are now open!
We’re excited to receive your application—and to see the impact you could make as part of Astra 2026!