Hey there~ I'm Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!
I have this impression of OpenPhil as being the Harvard of EA orgs -- that is, it's the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅
When should someone who cares a lot about GCRs decide not to work at OP?
Thanks, really appreciated this post.
In case anyone is looking for a bank recommendation, I'd recommend Mercury for their excellent UX and sensible pricing. We use them for both Manifold (the for-profit) and Manifold for Charity. They provide ~5% yield to for-profits through Mercury Treasury (we use a different interest provider, but if we could do it over again, we'd definitely choose Mercury Treasury instead). Unfortunately, they don't offer Treasury to nonprofits. Mercury can also make payments to international accounts with a 1% FX fee (worse than Wise, but Wise is kind of a PITA and kicked us off their platform :P). Referral link if interested: https://mercury.com/r/manifund
We also use Stripe Opal for banking and other kinds of money movement, though that fits Manifold & Manifund because we do a significant amount of programmatic money movement -- most EA orgs won't need that.
I'm grateful for the CEA Community Health team -- interpersonal issues can be tricky to navigate, but the Health team is consistently nice, responsive, and helpful, and has compiled many useful resources for making good decisions, whether about running an event or managing grant dynamics.
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
I've heard this argument a lot (eg in the context of impact markets), and I agree that this consideration is real, but I'm not sure it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: the size of the best positive outcomes vs the worst negative ones, their relative frequency, and how different interventions (eg adding screening steps) reduce negative projects but also discourage positive ones.
For example, if in 100 projects you have outcomes like [1x +1000, 4x -100, 95x ~0], then I think black swan farming still does a lot better than some process where you try to select the top 10 or something. Meanwhile, if your outcomes look more like [2x +1000, 3x -1000, 95x ~0], then careful filtering starts to matter a lot.
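To make the arithmetic explicit, here's a quick back-of-the-envelope sketch in Python (the payoff numbers are just the made-up ones above, not real grant data):

```python
# Expected value of funding all 100 projects under the two hypothetical
# outcome distributions above (illustrative numbers only).

def portfolio_ev(outcomes):
    """outcomes: list of (count, payoff) pairs; EV of funding everything."""
    return sum(count * payoff for count, payoff in outcomes)

swan_friendly = [(1, 1000), (4, -100), (95, 0)]   # 1x +1000, 4x -100, 95x ~0
filter_needed = [(2, 1000), (3, -1000), (95, 0)]  # 2x +1000, 3x -1000, 95x ~0

print(portfolio_ev(swan_friendly))  # +600: funding everything wins comfortably
print(portfolio_ev(filter_needed))  # -1000: indiscriminate funding loses; screening pays
```

In the first world, any filter that risks dropping the single +1000 project costs more than it saves; in the second, cutting even one -1000 project flips the sign.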
My intuition is that the best projects are much better than the worst projects are bad, and also that the best projects don't necessarily look that good at the outset. (To use the example I'm most familiar with, Manifold looked pretty sketchy when we applied for ACX Grants, and got turned down by YC and EA Bahamas; I'm still pretty impressed that Scott figured we were worth funding :P)
I really appreciated this list of examples, and it's updated me towards checking in with LTFF & others a bit more. That said, I'm not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.
One frame: is longtermist funding more like "admitting a Harvard class/YC batch" or more like "pre-seed/seed-stage funding"? In the former case, it's more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latter case, you are "black swan farming"; the important thing is to not miss out on the one Facebook that 1000xs, and you're happy to fund 99 duds in the meantime.
I currently think the latter is a better representation of longtermist impact, but 1) impact is much harder to measure than startup financial results, and 2) having high average quality/few bad grants might be better for fundraising...
I'm not sure I would be comfortable with the idea of grantmakers sharing information with each other that they weren't also willing to share with the applicants.
One of my pet ideas is to set up a grantmaker coordination channel (eg Discord) where only grantmakers may post, but anyone may read. I think siloed communication channels are important for keeping the signal-to-noise ratio high, but 97% of the time I'd be happy to share whatever thoughts we have with the applicant & the rest of the world too.
Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of "desirability among applicants" as opposed to "established bureaucracy". My outside impression is that many people I greatly respect (like you!) decided to work at OP over their many other options, and I've heard informal complaints from leaders of other EA orgs, roughly: "it's hard to find and keep good people, because our best candidates keep joining OP instead". So I was curious to learn more about OP's internal thinking about this effect.