Both posts contain a more detailed breakdown of inputs, but in short:
If you expect to take in $3-6M by the end of this year, borrowing say $300k against that already seems totally reasonable.
Not sure if this is possible, but I for one would be happy to donate to LTFF today in exchange for a 120% regrant to the Animal Welfare Fund in December.[1]
This would seem to be an abuse of the Open Phil matching, but perhaps that chunk can be exempt
the comparison-in-practice I'm imagining is (say) $100k real dollars that we're aware of now vs $140k hypothetical dollars
That is very different from the question that Caleb was answering—I can totally understand your preference for real vs hypothetical dollars.
So these are all reasons that funding upfront is strictly better than in chunks, and I certainly agree. I'm just saying that as a donor, I would have a strong preference for funding 14 researchers in this suboptimal manner vs 10 of similar value paid upfront, and I'm surprised that LTFF doesn't agree.
Perhaps there are some cases where funding in chunks would be untenable, but that doesn't seem to be true for most on the list. Again, I'm not saying there is no cost to doing this, but if the space is really as funding-constrained as you say, 40% of value is an awful lot to give up. Is there not every chance that your next batch of applicants will be just as good, and money will again be tight?
A quick scan of the marginal grants list tells me that many (most?)[1] of these take the form of a salary or stipend over the course of 6-12 months. I don't understand how the time-value of money could be so out of whack in this case—surely you could grant say half of the requested amount, then do another round in three months once the large donors come around?[2]
...IDK 160% annualized sounds a bit implausible. Surely in that world someone would be acting differently (e.g. recurring donors would roll some budget forward or take out a loan)?
I would be curious to hear from someone on the recipient side who would genuinely prefer $10k in hand to $14k in three months' time.
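For reference, here is the arithmetic behind that comparison, as a rough sketch. The 160% figure appears to come from simple (non-compounded) annualization of the implied 40% quarterly rate; compounding gives an even more implausible number:

```python
# Implied discount rate in the "$10k now vs $14k in three months" comparison.
principal = 10_000
future_value = 14_000
quarterly_return = future_value / principal - 1  # 0.40

simple_annualized = quarterly_return * 4                 # 1.60, i.e. "160%"
compounded_annualized = (1 + quarterly_return) ** 4 - 1  # ~2.84, i.e. ~284%

print(f"quarterly return:      {quarterly_return:.0%}")
print(f"simple annualized:     {simple_annualized:.0%}")
print(f"compounded annualized: {compounded_annualized:.0%}")
```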
Regarding the funding aspect:
Holden also stated in his recent 80k podcast episode that <50% of OP's grantmaking goes to longtermist areas.
IMHO it seems possible to be rigorous with imaginary money, as some are with prediction markets or fantasy football. Particularly so if the exercise feels critical to the success of the platform.
I think the site looks great btw, just pushing back on this :)
I agree in the context of what I call deciding between different "established charities with fairly smooth marginal utility curves," which I think is more analogous to prediction markets or fantasy football or (for that matter) picking fake stocks.
But as someone who in the past has applied for funding for projects (though not on Manifund), if someone said, "hey we have $50k (or $500k) to allocate and we want to ask the following questions about your project," I'd be pretty willing to either reply to their emails or go on a call.
If on the other hand the...
People are not going to get the experience of making consequential decisions with $50, particularly if they're funding individuals and small projects (as opposed to established charities with fairly smooth marginal utility curves like AMF).
That said, I'm sympathetic to the same argument for $5k or $10k.
Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]...

[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding
Looking at this table, I expect the non-FTX total is about the same[1]—I'd wager that there is more funding committed now than during the first ~70% of the second wave period.[2]
I think most people have yet to grasp the extent to which markets have bounced back:
I second these suggestions. To get more specific re cause areas:
Borrowing money if short timelines seems reasonable but, as others have said, I'm not at all convinced that betting on long-term interest rates is the right move. In part for this reason, I don't think we should read financial markets as asserting much at all about AI timelines. A couple of more specific points:
...Remember: if real interest rates are wrong, all financial assets are mispriced. If real interest rates “should” rise three percentage points or more, that is easily hundreds of billions of dollars worth of revaluations. It is unlikely that shar
I'm definitely not suggesting a 98% chance of zero, but I do expect the 98% rejected to fare much worse than the 2% accepted on average, yes. The data as well as your interpretation show steeply declining returns even within that top 2%.
I don't think I implied anything in particular about the qualification level of the average EA. I'm just noting that, given the skewness of this data, there's an important difference between just clearing the YC bar and being representative of that central estimate.
A couple of nitpicky things, which I don't think change the bottom line, and have opposing sign in any case:
I worry that this presents the case for entrepreneurship as much stronger than it is[1]
So at best, if a founder is accepted into YC, and talented enough to have the same odds of success as a random prior YC founder, $4M/yr might be a reasonable...
Yeah I think we're on the same page, my point is just that it only takes a single digit multiple to swamp that consideration, and my model is that charities aren't usually that close. For example, GiveWell thinks its top charities are ~8x GiveDirectly, so taken at face value a match that displaces 1:1 from GiveDirectly would be 88% as good as a 'pure counterfactual'
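The displacement arithmetic above can be sketched as follows, taking GiveWell's ~8x multiplier at face value (the multiplier and 1:1 displacement are the assumptions from the comment, not established facts):

```python
# Value of a match that displaces 1:1 from GiveDirectly, vs a pure counterfactual.
top_charity_multiplier = 8.0  # GiveWell top charities, in GiveDirectly units
displaced_multiplier = 1.0    # value of the displaced GiveDirectly dollar

pure_counterfactual_value = top_charity_multiplier
net_value_with_displacement = top_charity_multiplier - displaced_multiplier

fraction_as_good = net_value_with_displacement / pure_counterfactual_value
print(f"{fraction_as_good:.1%}")  # 7/8 = 87.5%, i.e. ~88% as good
```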
There are still funds remaining, but it looks like each person can only set up three matched donations
A lot of this wouldn't show up in malaria, e.g. last year 39% of GiveWell funds directed went to malaria programs. But yeah, still would be interested to see data.
It seems like it would be particularly difficult to know ahead of time whether one is well-suited to founding a charity, and I can imagine that is a major barrier to application. Do you have any suggestions for assessment of fit?
Yes - the best way to figure out if you’re a good fit is to apply.
It's low cost, and we've developed a pretty good understanding of who will do well. It's not reasonable to expect to know yourself whether you'd be a good fit for doing something that you've never done. So I'd suggest you submit an application and see how far you get.
I will add, though, that not getting through doesn't mean you're NOT a good fit; it just means we had some concerns or reservations given our particular approach. However, if you do get in, you can be confident you ARE a good fit...
Relevant excerpt from his prior 80k interview:
...Rob Wiblin: ...How have you ended up five or 10 times happier? It sounds like a large multiple.
Will MacAskill: One part of it is being still positive, but somewhat close to zero back then...There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or
Yeah it looked like grants had been announced roughly through June, so the methodology here was to divide by proportion dated Jan-Jun in prior years (0.49)
I'm not sure that inflation makes sense—this money isn't being spent on bread :) I think most of these funds would alternatively be invested, and returning above inflation on average.
FTX has so far granted 10x more to AI stuff than OPP
This is not true, sorry the Open Phil database labels are a bit misleading.
It appears that there is a nested structure to a couple of the Focus Areas, where e.g. 'Potential Risks from Advanced AI' is a subset of 'Longtermism', and when downloading the database only one tag is included. So for example, this one grant alone from March '22 was over $13M, with both tags applied, and shows up in the .csv as only 'Longtermism'. Edit: this is now flagged more prominently in the spreadsheet.
EA does seem a bit overrepresented (sort of acknowledged here).
Possible reasons: (a) sharing was encouraged post-survey, with some forewarning (b) EAs might be more likely than average to respond to 'Student Values Survey'?
I strongly agree with this comment, especially the last bit.
In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically 'switching to direct work'.
I enjoyed this post and the novel framing, but I'm confused as to why you seem to want to lock in your current set of values—why is current you morally superior to future you?
Do I want my values changed to be more aligned with what’s good for the world? This is a hard philosophical question, but my tentative answer is: not inherently – only to the extent that it lets me do better according to my current values.
Speaking for myself personally, my values have changed quite a bit in the past ten years (by choice). Ten-years-ago-me would likely be doing somethi...
Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.
Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct ...
Offsetting the carbon cost of going from an all-chicken diet to an all-beef diet would cost $22 per year, or about 5 cents per beef-based meal. Since you would be saving 60 chickens, this is three chickens saved per dollar, or one chicken per thirty cents. A factory farmed chicken lives about thirty days, usually in extreme suffering. So if you value preventing one day of suffering by one chicken at one cent, this is a good deal.
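A quick check of the quoted arithmetic, using the figures as given (the original rounds somewhat loosely: 60 chickens per $22 is nearer 2.7 per dollar and ~37 cents per chicken):

```python
# Back-of-the-envelope check of the quoted chicken-offset arithmetic.
offset_cost_per_year = 22.0  # $/year to offset the added beef carbon
chickens_saved = 60          # chickens spared per year by the diet switch
days_per_chicken = 30        # quoted lifespan of a factory-farmed chicken

chickens_per_dollar = chickens_saved / offset_cost_per_year      # ~2.7
cents_per_chicken = 100 / chickens_per_dollar                    # ~37 cents
cents_per_suffering_day = cents_per_chicken / days_per_chicken   # ~1.2 cents

print(f"{chickens_per_dollar:.1f} chickens/dollar, "
      f"{cents_per_chicken:.0f} cents/chicken, "
      f"{cents_per_suffering_day:.1f} cents per chicken-day")
```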
I didn't read the goal here as literally to score points with future people, though I agree that the post is phrased in a way that implies future ethical views will be superior.
Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.
In other words, this could alternatively be framed as 'backtesting ethics' or something, but 'future-proofing' speaks to (a) concern about repeating past mistakes (b) personal regret in future.
I was especially interested in a point/thread you mentioned about people perceiving many charities as having similar effectiveness, and that this may be an impediment to people getting interested in effective altruism.
See here
...A recent survey of Oxford students found that they believed the most effective global health charity was only ~1.5x better than the average — in line with what the average American thinks — while EAs and global health experts estimated the ratio is ~100x. This suggests that even among Oxford students, where a lot of outreach has b
Thanks for this—I have often wished I had a better elevator pitch for EA.
One thing I might add is some mention of just how wide the disparity can be amongst possible interventions, since this seems to be one of the most overlooked key ideas.
I believe both this post and Ben’s original ‘Funding Overhang’ post mentioned that this is an update towards a career with direct impact vs earning-to-give.
But earning-to-give is still very high impact in absolute terms.
Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.
I do think that more generally there is an inefficiency with so many EAs independently sinking time into investment management. I don't think that the answer is safe/passive/crowdsourcing, though.
Instead, I think what might be valuable is some sort of 'EA Mutual Funds'—a menu of investment profiles, each tied to a fund/manager. Possible value-add:
Is there any consideration for Investing-to-Give in the survey?
Perhaps it could be interesting to ask for both 'amount donated' and 'amount earmarked for donation'?
Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs.
I think the core argument here is that not enough EAs try to start a company, as opposed to try and are rejected by VCs. IMO the point of seeding would be to take more swings.
Also, presumably the bar should be lower for an EA VC, because much of the founders' stake will also go to effective charity.
Thanks! Small correction: Animal Welfare YTD is labeled as $53M, when it looks like the underlying data point is $17M (source and 2023 full-year projections here)