I commented on a draft of this post. I haven't re-read it in full, so I don't know to what degree my comments were incorporated. Based on a quick glance it seems they weren't, so I thought I'd copy the main comments I left on that draft. My main point is that I think inserting regional groups into the funding landscape would likely worsen rather than improve the funding situation. I still think regional groups seem promising for other reasons.
Some of my comments (copy-paste, quickly written):
[Regarding applying for funding:] At a high level, my guess would
Some further recommendations:
In 80K's The Precipice mailing experiment, 15% of recipients reported reading the book in full after a month, and ~7% reported reading at least half.
I'm also aware of some anecdotal cases where books seemed pretty good - e.g., I know of a very promising person who got highly involved with longtermism within a few months primarily based on reading The Precipice.
The South Korea case study is pretty damning, though. I wonder if things would look better if there had been a small number of promising people who helped onboard newly interested ones (or wh... (read more)
To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.
As you said, I think books can be combined with mailing lists. (If there was a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5%(? most people don't read newsletters) probability of getting someone to engage for ~40h via a mailing list. And while I'd... (read more)
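The rough numbers in this comment can be checked with a quick expected-value calculation. This is a minimal sketch using only the probabilities and engagement hours stated above; the figures are the comment's own rough guesses, not measured data:

```python
# Expected engagement hours under the comment's rough numbers.
p_book, hours_book = 0.20, 10   # ~20% chance of ~10h of engagement via a book
p_list, hours_list = 0.05, 40   # ~5% chance of ~40h via a mailing list

ev_book = p_book * hours_book   # expected hours per book handed out
ev_list = p_list * hours_list   # expected hours per mailing-list signup

print(ev_book, ev_list)  # both 2.0 — hence "similarly good" under these guesses
```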
Some work that seems relevant:
Strictly speaking, a lot of the examples are outputs or outcomes, not impacts, and some readers may not like that. It could be good to make that more explicit at the top.
I also want to suggest using more imagery, graphs, etc. – more like visual storytelling and less like just a list of bullet points.
I think it's really cool that you're making this available publicly, thanks a lot for doing this!
Great points, thanks for raising them!
One potential takeaway could be that we may want to set up the financial products we'd like to use for hedging ourselves – e.g., by setting up prediction markets for the quantity of oil consumption. (Perhaps FTX would be up for it, though it won't be easy to get liquidity.)
I'm surprised this comment was downvoted so much. It doesn't seem very nuanced, but there's obviously a lot going wrong with modern capitalism. While free markets have historically been a key driver of the decline of global poverty (see e.g. this and this), I don't think it's wrong to say that longtermists should be thinking about a large-scale economic transition (though it should most likely still involve free markets).
I think a downvoter's view is that: It packs powerful claims that really need to be unpacked ("unsustainable... massive suffering"), a backhand against the community ("actually care... claim to"), and extraordinary, vague demands ("large economic transition") into a single sentence. It's hard to be generous, since it's so vague. If you tried to riff some "steelman" off it, you could work in almost any argument critical of capitalism or even of EA in general, which isn't a good sign.
The forum guidelines suggest I downvote comments when I dislike the effect they have on a conversation. One of the examples the guidelines give is when a comment contains an error or bad reasoning. While I think the reasoning in Ruth's comment is fine, I think the claim that capitalism is unsustainable and causes "massive suffering" is an error. Nor is the claim backed up by any links to supporting evidence that might change my mind. The most likely effect of ruth_schlenker's comment is to distract from Halstead's original comment and inflame the discussion, i.e. have a negative effect on the conversation.
A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:
If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.
In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.
I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.
I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many ... (read more)
I have edited all our fund pages to include the following sentence:
Note: We are temporarily unable to display correct fund balances. Please ignore the balance listed below while we are fixing the issue.
I strongly agree with the premise of this post and really like the analysis, but feel unhappy with the strong focus on physical products. I think we should instead think about a broader set of scalable ways to usefully spend money, including but not limited to physical products. E.g. scholarships aren't a physical product, but large scholarship programs could plausibly scale to >$100 million.
(Perhaps this has been said already; I haven't bothered reading all the comments.)
Yeah, in my model, I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.
I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)
I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so that, if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.
I'd be pretty unhappy if such a donor then felt forced to instead do their ... (read more)
I really liked this comment. Three additions:
It's worth pointing out that these questions apply specifically to global health and development, but could be very different in other cause areas.
I don't think question 1 provides evidence that money will do more good in the future. It might even suggest the opposite: As you point out, malaria prevention and deworming might run out of room for more funding, and to me this seems more likely than the discovery of a more cost-effective option that is also highly scalable (beyond $30 million per year).
I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here.
The model assumes:
In the model, the extra utility from the AI port... (read more)
Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.
I am very excited to announce that we have appointed Max Daniel as the chairperson at the EA Infrastructure Fund. We have been impressed with the high quality of his grant evaluations, public communications, and proactive thinking on the EAIF's future strategy. I look forward to having Max in this new role!
I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority).
One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).
Just wanted to flag briefly that I personally disagree with this:
(Also agree with Max. Long lead times in academia definitely qualify as a "convincing reason" in my view)
I wouldn't rule it out, but typically we might say something like: We are interested in principle, but would like to wait for another 6-12 months to see how your project/career/organization develops in the meantime before committing the funding (unless there's a convincing reason for why you need the funding immediately).
I'm excited that there's now more work happening on Effective Institutions / IIDM!
Some questions and constructive criticism that's hopefully useful:
The aim was to gauge the diversity of perspectives in the EA community on what "counts" as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.
It seems that you're starting out with the assumption that IIDM is a useful... (read more)
I actually think it would be cool to have more posts that explicitly discuss which organizations people should go work at (and what might make it a good personal fit for them).
If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
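The arithmetic behind the ">$1m/y" claim can be made concrete. This is an illustration with hypothetical numbers (the comment doesn't specify team size or current salaries): if hiring one person at $200k/y forces you to raise, say, 8 similarly skilled colleagues from $100k/y to the same level, the marginal cost of that one hire crosses $1m/y:

```python
# Hypothetical fair-pay scenario: one new hire at $200k/y forces raises
# for 8 similarly skilled colleagues currently at $100k/y.
new_salary = 200_000
colleagues = 8
current_salary = 100_000

raise_cost = colleagues * (new_salary - current_salary)  # cost of the raises
marginal_cost = new_salary + raise_cost                  # total marginal cost of the hire

print(marginal_cost)  # 1_000_000 per year
```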
FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.
I'd be pretty excited about financially incentivizing people to do more such evaluations. Not sure how to set the incentives optimally, though – I really want to avoid any incentives that make it more likely that people say what we want to hear (or that lead others to think that this is what happened, even when it didn't), but I also care a lot about such evaluations being high-quality and having sufficient depth, so I don't want to hand out money for any kind of evaluation.
Perhaps one way is to pay $2,000 for any evaluation or review that receives >120 Karma on the EA Forum (periodically adjusted for Karma inflation), regardless of what it finds? Of course, this is somewhat gameable, but perhaps it's good enough.
Yeah, I plan to keep sending the form around in the coming months. Using the EA Forum question feature is a great idea, too. Thank you!
Thanks a lot for doing this evaluation! I haven't read it in full yet, but I would like to encourage more people to review and critique EA Funds. As the EA Funds ED, I really appreciate it if others take the time to engage with our work and help us improve.
Is there a useful way to financially incentivise this sort of independent evaluation? Seems like a potentially good use of fund money.
"It's hard to find great grants" seems different than "It's hard to find grants we really like".
I would expect that most grantmakers (including ones with different perspectives) would agree with this and would find it hard to spend money in useful ways (e.g., I suspect that Nuño might say something similar if he were running the LTFF, though not sure). So while I think your framing is overall slightly more accurate, I feel like it's okay to phrase it the way I did.
that they're skeptical of funding independent researchers
I don't think this characterization ... (read more)
In a typical case, it takes a week to complete due diligence, and up to 31 days for the money to be paid out (because we currently do the payouts in monthly batches). So from decision to "money in the bank account" it takes 1–6 weeks, typically 3.5 weeks. I think the country shouldn't matter too much for this. Because most grantees care more about having a definite decision than about the money actually arriving in their bank account, this waiting time seemed fine to us (though we're also looking into ways to shorten it).
That said, if the grantseeker indicates that they need the money urgently, and they submit due diligence promptly, the payout can be expedited and should take just a few days.
Thanks, we really hope it will help people like the ones you mentioned!
I like Athena, or Athena Centre!
Here's another comment that goes into this a bit.
In my mind, a significant benefit of impact certificates is that they can feel motivating:
The huge uncertainty about the long-run effects of our actions is a common struggle for community builders and longtermists. Earning to give or working on near-term issues (e.g., corporate farm animal welfare campaigns, or AMF donations) tends to come with a much stronger sense of moral urgency, tighter feedback loops, and a much clearer sense of accomplishment if you actually managed to do something important: 1 million hens are spared from battery cages in country X!... (read more)
Certificates of impact are the best known proposal for this, although they aren't strictly necessary.
I don't understand the difference between certificates of impact and altruistic equity – they seem kind of the same thing to me. Is the main difference that certificates of impact are broader, whereas altruistic equity refers to certificates of impact of organizations (rather than individuals, etc.)? Or is the idea that certificates of impact would also come with a market to trade them, whereas altruistic equity wouldn't? Either way, I don't find it u... (read more)
I include the opportunity cost of the broader community (e.g., the project hires people from the community who'd otherwise be doing more impactful work), but not the opportunity cost of providing the funding. (This is what I meant to express with "someone giving funding to them", though I think it wasn't quite clear.)
This isn't what you asked, but out of all the applications that we receive (excluding desk rejections), 5-20% seem ex ante net-negative to me, in the sense that I expect someone giving funding to them to make the world worse. In general, worries about accidental harm do not play a major role in my decisions not to fund projects, and I don't think we're very risk-averse. Instead, a lot of rejections happen because I don't believe the project will have a major positive impact.
A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner's dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach.
You might respond that there's no easy way to ver... (read more)
Here's a toy model:
Increasing the funding from $100B to $200B would then increase utility by 15%.
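The ~15% figure is consistent with a power-law (isoelastic) utility of money. The exponent below is an assumption for illustration, since the toy model's details are truncated above, but it shows the kind of functional form that produces this result:

```python
# Assumed functional form: u(x) = x**eta (isoelastic utility of total funding).
# eta = 0.2 is a hypothetical exponent chosen so that doubling funding
# yields roughly the 15% utility gain stated in the comment.
eta = 0.2

# Relative utility gain from doubling funding; the absolute scale cancels,
# so u(200B)/u(100B) - 1 reduces to 2**eta - 1.
gain = (200e9 ** eta) / (100e9 ** eta) - 1

print(round(gain * 100, 1))  # ~14.9%, i.e. roughly the 15% stated
```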
Thanks, this is useful!
I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.
you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF
Hmm, why do you think this? I don't remember having said that.
the extent to which fund managers should be trying to instantiate donor's wishes vs fund managers allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former
This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I've talked to actually want the fund managers to spend the money that way (the EA Funds pitch is "defer to experts" and donors want to go all in on... (read more)
I think we will probably do two types of post-hoc evaluations:
#1 is somewhat high on my priority list (may happen later this year), whereas #2 is further down (probably won't happen this year... (read more)
high quality and convincing in whatever conclusions it has
Yeah, the latter is what I meant to say, thanks for clarifying.