“Having research chats with people I met at Constellation has given rise to new research directions I hadn't previously considered, like model organisms. Many back-and-forth conversations with people in the office are how I came to believe that existential risk from AI is non-trivial. These updates have had large ramifications for how I've done my research, and significantly increased its impact.”
“Speaking with AI safety researchers in Constellation was an essential part of how I formed my views on AI threat models and AI safety research prioritization. It also gave me access to a researcher network that I've found very valuable for my career.”
“I worked from Constellation for a bit over a week this summer, and can highly recommend it! The main value add was chatting with (and listening in on lunchtime chats between) many top AI safety researchers, and learning what people are working on and thinking about in a way that isn't possible by reading their work online. I was really productive during my time there.”
Note: This list is not exhaustive, so you may still be a good fit even if you don't work on any of the following.
More speakers to come
Director, Alignment Research Center
Director, ARC Evals
Research Lead, OpenAI
Head of Alignment Science, Anthropic
Senior Program Officer, Open Philanthropy Project
If your question isn't answered here, please reach out to firstname.lastname@example.org
Applications are due by November 17th, 2023