Jonas Vollmer

7503 karma · Joined Oct 2014 · Berkeley, CA, USA



I’ve helped set up the Atlas Fellowship, a talent-search and scholarship program for exceptional students.

Previously, I ran EA Funds and the Center on Long-Term Risk. My background is in medicine (BMed) and economics (MSc). See my LinkedIn.

You can best reach me at jonas@atlasfellowship.org.

I appreciate honest and direct feedback: https://admonymous.co/vollmer

Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is generally how everyone uses the EA Forum; others who don't have such a disclaimer likely think about it similarly.)


Topic Contributions

I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.

One source told me that after a specific year (they didn't say which), the cap can increase by 20% per year, and the company can further adjust the cap as it fundraises.
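A rough sketch of what 20%-per-year growth implies, just to make the compounding concrete (the numbers here are illustrative, not OpenAI's actual cap terms):

```python
# Illustrative only: a cap that grows 20% per year compounds like any
# growth rate. The 100x starting multiple and 5-year horizon are
# hypothetical examples, not claims about OpenAI's actual agreement.

def cap_after(initial_cap: float, years: int, growth: float = 0.20) -> float:
    """Return the cap multiple after `years` of compounding at `growth` per year."""
    return initial_cap * (1 + growth) ** years

# A hypothetical 100x cap would compound to 100 * 1.2^5 ≈ 248.8x after five years.
print(round(cap_after(100, 5), 1))  # → 248.8
```

So even before any fundraising-related adjustments, a 20% annual increase roughly 2.5x's the cap within five years.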

If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://forum.effectivealtruism.org/posts/ya5Aqf4kFXLjoJeFk/atlantis-berkeley-event-venue-available-for-rent 

(FYI, Atlas won't be ending up with a budget shortfall as a result of this.)

This seems the most plausible speculation so far, though probably also wrong: https://twitter.com/dzhng/status/1725637133883547705

[This comment is no longer endorsed by its author]

(Shorting TLT seems a reasonably affordable way to implement this strategy I guess, though you're only going short nominal interest rates.)

The really important question that I suspect everyone is secretly wondering about: If you book the venue, will you be able to have the famous $2,000 coffee table as a centerpiece for your conversations? I imagine that after all the discourse about it, many readers may feel compelled to book Lighthaven to see the table in action!

I think there aren't really any attendees doing meta work for a single cause. Instead, it seems to be mostly people doing meta work across multiple areas.

(I also know of many people doing AI safety meta work who were not invited.)

Yeah, I disagree with this on my inside view—I think "come up with your own guess of how bad and how likely future pandemics could be, drawing on others' arguments" is a really useful exercise, and it seems more useful to me than having a good probability estimate of how likely a pandemic is. I know that a lot of people find the latter more helpful, though, and I can see some plausible arguments for it, so all things considered, I still think there's some merit to that.
