I’ve helped set up the Atlas Fellowship, a program that researches talent search and provides scholarships for exceptional students.
Previously, I ran EA Funds and the Center on Long-Term Risk. My background is in medicine (BMed) and economics (MSc). See my LinkedIn.
You can best reach me at jonas@atlasfellowship.org.
I appreciate honest and direct feedback: https://admonymous.co/vollmer
Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is how most people use the EA Forum; those who don't have such a disclaimer likely think about it similarly.)
I think there aren't really any attendees who are doing meta work for a single cause. Instead, it seems to be mostly people doing meta work across multiple cause areas.
(I also know of many people doing AI safety meta work who were not invited.)
Yeah, I disagree with this on my inside view: I think "come up with your own guess of how bad and how likely future pandemics could be, with the input of others' arguments" is a really useful exercise, and it seems more useful to me than having a good probability estimate of how likely it is. I know that a lot of people find the latter more helpful, though, and I can see some plausible arguments for it, so all things considered I still think there's some merit to that.
How to fix EA "community building"
Today, I mentioned to someone that I tend to disagree with others on some aspects of EA community building, and they asked me to elaborate further. Here's what I sent them, very quickly written and only lightly edited:
Hard to summarize quickly, but here's some loose gesturing in that direction:
- We should stop thinking about "community building" and instead think about "talent development". While building a community/culture is important and useful, the wording overall sounds too much like we're inward-focused as opposed to trying to get important things done in the world.
- We should focus on the object level (what's the probability of an extinction-level pandemic this century?) over social reality (what does Toby Ord think is the probability of an extinction-level pandemic this century?).
- We should talk about AI alignment, but also broaden our horizons to not-traditionally-core-EA causes to sharpen our reasoning skills and resist insularity. Example topics I think should be more present in talent development programs are optimal taxation, cybersecurity, global migration and open borders, 1DaySooner, etc.
- Useful test: Would your talent development program still make sense if EA didn't exist? (I.e., is it helping people grow and do useful things, or is it just funnelling people according to shallow metrics?)
- Based on personal experience and observations of others' development, the same person can have a much higher or much lower impact depending on the cultural environment they're embedded in and the incentives they perceive. Much of EA talent development should be about transmitting a particular culture that has produced impressive results in the past (and avoiding the cultural pitfalls responsible for some of the biggest fuckups of the last decade). Shaping culture is really important, hard to measure, and systematically neglected by talent metrics; avoiding this pitfall requires constantly reminding yourself of that.
- Much of the culture is shaped by incentives (such as funding, karma, event admissions, etc.). We should be really deliberate in how we set these incentives.
Feeling a bit tired to type a more detailed response, but I think I mostly agree with what you say here.
Hmm, I personally think "discover more skills than they knew, feel great, accomplish great things, learn a lot" applies a fair amount to my past experiences. I think aiming too low was one of the biggest issues in my past, and I think EA culture is also messing up by discouraging aiming high, or something.
I think the main thing to avoid is something like "blind ambition", where your plan involves multiple miracles and the details are all unclear. This also seems to be a fairly frequent phenomenon.
If there's no strategy to profitably bet on long-term real interest rates increasing, you can't infer timelines from real interest rates. I think the investment strategies outlined in this post don't work, and I don't know if there's a strategy that works.
I want to caution against the specific trading strategies suggested in this post:
Even without these fees, the investments would not be a slam dunk compared to buying stocks, as pointed out here. But the fees really make this look very unattractive.
Accounting for fees, it's actually a lot worse than you wrote; see here: https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or?commentId=LmJ6inhSESoJGyWjQ
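To make the fee point concrete, here's a minimal sketch of the arithmetic. All the numbers (the 4% expected gross edge, the 0.95% expense ratio, the 7% equity benchmark) are hypothetical placeholders of mine, not figures from the post or the linked comment:

```python
# A rough sketch of how fund fees erode the expected edge of a rates bet.
# All numbers below are hypothetical assumptions, not figures from the post.

def net_annualized_return(gross_return: float, expense_ratio: float,
                          borrow_fee: float = 0.0) -> float:
    """Annualized return after fees; all inputs are decimal fractions."""
    return gross_return - expense_ratio - borrow_fee

# Hypothetical: a 4% expected gross edge from betting on rising real rates,
# a 0.95% expense ratio on the instrument, and a 7% expected equity return
# as the opportunity-cost benchmark.
rates_bet = net_annualized_return(gross_return=0.04, expense_ratio=0.0095)
equity_benchmark = 0.07

print(f"Net return on the rates bet: {rates_bet:.2%}")         # 3.05%
print(f"Forgone equity return:       {equity_benchmark:.2%}")  # 7.00%
# Under these assumptions the bet trails the equity benchmark even gross of
# fees, and fees widen the gap further, which is the point about the
# strategy looking "very unattractive".
```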
I wouldn't be too surprised if someone on the GAP leadership team had indeed participated in an illegal straw donor scheme, given media reports and general impressions of how recklessly some of the SBF-adjacent politics stuff was carried out. But I do think the specific allegation in the title is worded too strongly and sweepingly given the lack of clear evidence, and it will probably turn out to be wrong.
The really important question that I suspect everyone is secretly wondering about: If you book the venue, will you be able to have the famous $2,000 coffee table as a centerpiece for your conversations? I imagine that after all the discourse about it, many readers may feel compelled to book Lighthaven to see the table in action!