The Constellation Visiting Researcher Program offers around 20 visitors motivated by reducing catastrophic risks from AI the opportunity to connect with leading AI safety researchers, exchange ideas, and find collaborators while continuing their own research from our offices in Berkeley, CA. The funded program will run this winter from January 8th to March 1st, 2024 (start/end dates flexible).
"Having research chats with people I met at Constellation has given rise to new research directions I hadn't previously considered, like model organisms. Talking with people at Constellation is how I decided that existential risk from AI is non-trivial, after having many back and forth conversations with people in the office. These updates have had large ramifications for how I’ve done my research, and significantly increased the impact of my research."
“Speaking with AI safety researchers in Constellation was an essential part of how I formed my views on AI threat models and AI safety research prioritization. It also gave me access to a researcher network that I've found very valuable for my career.”
“I worked from Constellation for a bit over a week this summer, and can highly recommend it! The main value add was chatting with (and listening in on lunchtime chats between) many top AI safety researchers, and learning what people are working on and thinking about in a way that isn't possible by reading their work online. I was really productive during my time there.”
Note: This list is not exhaustive, so you still may be a good fit even if you don't work on any of the following.
More speakers to come
Paul Christiano
Director, Alignment Research Center
Beth Barnes
Director, ARC Evals
Jeff Wu
Research Lead, OpenAI
Sam Bowman
Head of Alignment Science, Anthropic
Ajeya Cotra
Senior Program Officer, Open Philanthropy Project
Owain Evans
Research Lead
Constellation is a research center dedicated to safely navigating the development of transformative AI. We host a number of organizations, teams and individuals working on topics including alignment, dangerous capability evaluations, and AI governance, in addition to running field-building programs such as this one.
Applicants should be full-time researchers interested in reducing catastrophic risks from AI, working either on one of the research topics listed or on something else related to AI catastrophic risk reduction.
While we expect many participants to be postdocs or PhD students, we also welcome other applicants such as researchers in industry, senior scientists, principal investigators / professors, and independent researchers.
If you are uncertain about your fit, we encourage you to err on the side of applying. We especially encourage applications from women and members of underrepresented groups. Additionally, we welcome applications from researchers who are not US citizens or green card holders.
We plan to mostly accept researchers who are already working on projects within AI safety, but we’re open to exceptional candidates who are interested in transitioning their work towards AI safety.
The program is designed to be flexible (e.g. participants who can only work from Constellation for two days per week are still encouraged to apply) and to allow participants to continue working on their research full-time. There will not be any mandatory activities.
We will provide housing in Berkeley for the duration of the program. Participants will work from a shared office space (Constellation). Lunch and dinner will be provided each day, offering a good opportunity to discuss research with other visitors and with people in Constellation. There will be regular invited talks from senior AI safety researchers, opportunities to share your research, and other programming.
The application process for the Constellation Visiting Researcher Program requires candidates to:
Applications will be processed on a rolling basis up until the deadline, November 17th. We plan to respond to all candidates by December 1st, and expect to reach many sooner, especially those with impending deadlines. We will accept late applications, though we can't guarantee a response to them.
While we prefer that participants be present for the full duration of the program from January through February, we can accommodate variable start/end dates as long as you are able to join for the majority of the program. If none of the dates work for you, we still recommend filling out the application, since we may be able to host visiting researchers at other times of year.
We hope to help visitors make intellectual progress on AI safety and to make Constellation a vibrant & productive space to work in. We also hope to develop a large network of regular visitors to Constellation with the shared goal of reducing risks from AI.
We will cover reasonably priced travel to and from Berkeley, CA, in addition to housing for the duration of your stay. Constellation also provides lunch and dinner on weekdays. Feel free to email programs@constellation.org with any questions.
Yes. Please email programs@constellation.org with their name, their email (if you'd like us to reach out), and, optionally, a short sentence on why you think they'd be a good fit.
Applications are due by November 17th, 2023.