Strategy Researcher

Constellation is looking for people to do semi-independent research on AI safety macrostrategy, part-time or full-time, for 1-3 months. 

Note: This position is no longer open.

  • On-site: Berkeley, CA
  • Remote
  • Part-time

The goal is to produce a broad overview of the most promising interventions for mitigating catastrophic risks from the development of transformative AI (TAI). We expect a focus on analyzing, aggregating, and synthesizing existing proposals, as opposed to developing novel interventions. We are most interested in work describing portfolios of interventions that might collectively mitigate all such risks, but would consider proposals from candidates interested in focusing on a broad subset of TAI risks (e.g. “loss of control” or “concentration of power”) or on a subset of interventions (e.g. policy and/or governance).

The output of the engagement will be a formal or informal publication (e.g. a blog post) that documents your work. Your analysis will inform Constellation’s strategic planning, including which field-building programs we prioritize.

Work will be fairly independent. We will supply a more specific prompt in line with the goals described above and offer periodic feedback and questions, but you will be expected and encouraged to pursue your work as you see fit within the prompt’s broad constraints.

Compensation and Logistics

We expect to structure this engagement as an hourly contract. Rates will depend on experience, but will likely be between $50/hr and $150/hr.

This is a temporary position for 1-3 months with a flexible schedule. Full-time or part-time options are available. We are primarily an on-site team based in Berkeley, and the Constellation workspace hosts many top safety researchers. We will, however, consider remote candidates for this particular engagement.

If there is a strong mutual fit, we may be interested in extending the engagement or converting it to an ongoing, full-time role, though this should be considered exploratory and also depends on other factors. We may be able to sponsor visas in such cases. Ongoing, full-time roles would be on-site in Berkeley, CA. Our office is less than a 20-second walk from the nearest BART (metro) stop, and on-site parking is also available.

We value diversity in all respects and base our hiring decisions on the needs of the organization and individual qualifications. We welcome applicants from all backgrounds, regardless of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age or disability.

Skills

You’re an ideal fit for this engagement if you have:

  • Extensive domain expertise in AI safety, especially experience thinking holistically about threat models and mitigations
  • Exceptional epistemic clarity, including reasoning with and communicating about uncertainty
  • The ability to identify and build from the fundamental assumptions or elements of a problem
  • Experience reasoning about practical constraints such as decision-making processes, organizational dynamics, international politics, and project planning
  • The ability to create clarity for others in complex conceptual terrain (through precise language, intuitive models and frameworks, etc.)
  • Very clear and concise writing
  • Experience successfully managing your own output on long projects with limited direction