All of TylerMaule's Comments + Replies

Thanks! Small correction: Animal Welfare YTD is labeled as $53M, when it looks like the underlying data point is $17M (source and 2023 full-year projections here)

Both posts contain a more detailed breakdown of inputs, but in short:

  1. 80k seems to include every entry in the Open Phil grants database, whereas my sheet filters out items such as criminal justice reform that don't map to the type of funding I'm attempting to track.
  2. They also add a couple of 'best guess' terms to estimate unknown/undocumented funding sources; I do not.

If you expect to take in $3-6M by the end of this year, borrowing say $300k against that already seems totally reasonable.

Not sure if this is possible, but I for one would be happy to donate to LTFF today in exchange for a 120% regrant to the Animal Welfare Fund in December[1]

  1. ^

    This would seem to be an abuse of the Open Phil matching, but perhaps that chunk can be exempt

6
Linch
8mo
Thanks so much for your offer! That'd be a great option to have on the table! Hopefully enough donors will ameliorate our gaps in the next month, but I might check in with you again later this month if a) we have some more firm commitments for donations by end-of-year[1], b) we're still quite severely funding constrained as of Sept 20th, and c) we can't find lower bids.

  1. ^

    One issue with the straightforward "expect to get $3-6M by the end of this year" logic is that a model that spits out that sentence would also predict that this fundraising post + associated public and private comms should also work as fundraising for us; if we turn out to receive neither donations in the near future nor promises after our posts now, I should also strongly update against my original estimate of getting $3-6M by EOY.

the comparison-in-practice I'm imagining is (say) $100k real dollars that we're aware of now vs $140k hypothetical dollars

That is very different from the question that Caleb was answering—I can totally understand your preference for real vs hypothetical dollars.

So these are all reasons that funding upfront is strictly better than in chunks, and I certainly agree. I'm just saying that as a donor, I would have a strong preference for funding 14 researchers in this suboptimal manner vs 10 of similar value paid upfront, and I'm surprised that LTFF doesn't agree.

Perhaps there are some cases where funding in chunks would be untenable, but that doesn't seem to be true for most on the list. Again, I'm not saying there is no cost to doing this, but if the space is really funding-constrained as you say 40% of value is an awful lot to give up. Is there not every chance that your next batch of applicants will be just as good, and money will again be tight?

2
Linch
8mo
To be clear, I'm not sure I agree with the numbers Caleb gave, and I think they're somewhat less likely given that we live in a world where we communicated our funding needs. But I also want to emphasize that the comparison-in-practice I'm imagining is (say) $100k real dollars that we're aware of now vs $140k hypothetical dollars that a donor is thinking of giving to us later but not actually communicating to us; which means from our perspective we're planning as if that money isn't real. If people are actually faced with that choice, I encourage them to actually communicate that to us; if nothing else we can probably borrow against that promise and/or make plans as if that money is real (at some discount).

There's some chance, sure, but it's not every chance. Or at least that's my assumption. If I think averaging $100k/month (or less) is more likely than not to become the "new normal" for LTFF, I think we need to seriously think about scaling down our operations or shutting down. I don't think this is very likely given my current understanding of donor preferences and the marginal value of LTFF grants vs other longtermist donation opportunities[1], but of course it's possible.

I think there is a chicken-and-egg problem with the fund right now, where to do great work we need a) great applications, b) good grantmakers/staff/organizational capacity (especially a fund chair), and c) money. Hiring good grantmakers has never been easy, but I expect it to be much harder to find a fund chair to replace Asya if we can't promise them with moderately high probability that we are moving enough money to be worth their time working on the fund, compared to other very high value work that they could be doing (and more prosaically, many potential hires like having a guaranteed salary). I also expect great applications to start drying up eventually if there continues to be so much funding uncertainty, though there's still some goodwill we have to burn down and I think problems like t

A quick scan of the marginal grants list tells me that many (most?)[1] of these take the form of a salary or stipend over the course of 6-12 months. I don't understand how the time-value of money could be so out of whack in this case—surely you could grant, say, half of the requested amount, then do another round in three months once the large donors come around?[2]

  1. ^

    As for the rest, I don't see anything on the list that wouldn't exist in three months.

  2. ^

    Daniel's comment says "there are a whole host of issues" with this approach. I'd be curious to know wha

... (read more)
-2
Linch
8mo
GPT-4 gave some reasons here. In addition:

  • Being an independent researcher on a 12-month grant is already quite rough; moving to a 3-month system is a pretty big ask, and I expect us to lose some people to academia or corporate counterfactuals as a result
  • Most of the people we're funding have fairly valuable counterfactuals (especially monetarily); if we fund them for 3 months under high uncertainty and potential for discontinuity, I just expect many of our grantees to spend a large fraction of the time job-searching.
  • For people who are not independent, a 3-month contract makes it very hard to navigate other explicit and implicit commitments (eg project leads will find it hard to find contractors/employees, and I'm not sure it's even possible to fund a graduate student for a fraction of a semester)
  • Giving us $X now is guaranteed, and we can make grants or plan around it. Giving us $1.4X in the future is more of a hypothetical, and not something that we can by default plan around.
  • If a large donor is actually in this position, please talk to us so we can either discuss options together and/or secure an explicit commitment that is easier for us to work around.

IDK 160% annualized sounds a bit implausible. Surely in that world someone would be acting differently (e.g. recurring donors would roll some budget forward or take out a loan)?

I would be curious to hear from someone on the recipient side who would genuinely prefer $10k in hand to $14k in three months' time.
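For context on where the 160% comes from: it is the simple (non-compounded) annualization of the 40% quarterly premium implied by $10k now vs $14k in three months; compounding the same premium gives an even larger number. A quick sketch of the arithmetic, using only the figures from this thread:

```python
# $14k in three months vs $10k today implies a 40% premium per quarter.
quarterly_premium = 14_000 / 10_000 - 1                  # 0.40

simple_annualized = quarterly_premium * 4                # 1.60, the "160% annualized" above
compound_annualized = (1 + quarterly_premium) ** 4 - 1   # ~2.84 if compounded quarterly

print(f"simple:   {simple_annualized:.0%}")   # simple:   160%
print(f"compound: {compound_annualized:.0%}")  # compound: 284%
```

Either way of annualizing it, the implied discount rate is far above market borrowing costs, which is the point being made.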

3
Daniel_Eth
8mo
Presumably the first step towards someone acting differently would be the LTFF/EAIF (perhaps somewhat desperately) alerting potential donors about the situation, which is exactly what's happening now, with this post and a few others that have recently been posted.

FWIW, (with rare exceptions) it's not that more funding would allow us to give the same recipients larger grants, but instead that more funding would allow us to fund more grants, and marginal grants now are (according to Caleb's math) ~40% more valuable per dollar than what he expects from the marginal grant in a few months. In principle, grantees could be given the promise of (larger) delayed payment for grants instead of payment up front, but I think there are a whole host of problems with heading down that path.
7
calebp
8mo
Maybe it's a bit high but it doesn't seem crazy to me. We seem to have a lot of unusually good applications right now and unusually little funding. I also expect to hear back from some large donors later in the year and I expect our donations to increase around giving season (December).

Regarding the funding aspect:

  • As far as I can tell, Open Phil has always given the majority of their budget to non-longtermist focus areas.
    • This is also true of the EA portfolio more broadly.
  • GiveWell has made grants to less established orgs for several years, and that amount has increased dramatically of late.
2
Arepo
9mo
I realise I didn't make this distinction, so I'm shifting the goalposts slightly, but I think it's worth distinguishing between 'direct work' organisations and EA infrastructure. It seems pretty clear from the OP that the latter is being strongly encouraged to primarily support EA/longtermist work.

Holden also stated in his recent 80k podcast episode that <50% of OP's grantmaking goes to longtermist areas.

IMHO seems possible to be rigorous with imaginary money, as some are with prediction markets or fantasy football. Particularly so if the exercise feels critical to the success of the platform.

I think the site looks great btw, just pushing back on this :)

I agree in the context of what I call deciding between different "established charities with fairly smooth marginal utility curves," which I think is more analogous to prediction markets or fantasy football or (for that matter) picking fake stocks.

But as someone who in the past has applied for funding for projects (though not on Manifund), if someone said, "hey we have 50k (or 500k) to allocate and we want to ask the following questions about your project," I'd be pretty willing to either reply to their emails or go on a call. 

If on the other hand the... (read more)

Could you not dogfood just as easily with $50 (or fake money in a dev account)?

People are not going to get the experience of making consequential decisions with $50, particularly if they're funding individuals and small projects (as opposed to established charities with fairly smooth marginal utility curves like AMF).

That said, I'm sympathetic to the same argument for $5k or 10k.

You may find this spreadsheet useful for that type of information

1
LukeDing
10mo
This looks useful, thanks!

Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]...

[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding

Looking at this table, I expect the non-FTX total is about the same[1]—I'd wager that there is more funding committed now than during the first ~70% of the second wave period.[2]

I think most people have yet to grasp the extent to which markets have bounced back:

  • The S&P 500 Total Return Index is within
... (read more)
0
Ben_West
10mo
Thanks! This is helpful.

I second these suggestions. To get more specific re cause areas:

  • Each source uses a different naming convention (and some sources are just blank)
  • I'd suggest renaming that column 'labels' and instead mapping to just a few broadly defined buckets which add up to 100%—I've already done much of that mapping here

Borrowing money if short timelines seems reasonable but, as others have said, I'm not at all convinced that betting on long-term interest rates is the right move. In part for this reason, I don't think we should read financial markets as asserting much at all about AI timelines. A couple of more specific points:

Remember: if real interest rates are wrong, all financial assets are mispriced. If real interest rates “should” rise three percentage points or more, that is easily hundreds of billions of dollars worth of revaluations. It is unlikely that shar

... (read more)

I'm definitely not suggesting a 98% chance of zero, but I do expect the 98% rejected to fare much worse than the 2% accepted on average, yes. The data as well as your interpretation show steeply declining returns even within that top 2%.

I don't think I implied anything in particular about the qualification level of the average EA. I'm just noting that, given the skewedness of this data, there's an important difference between just clearing the YC bar and being representative of that central estimate.

A couple of nitpicky things, which I don't think change the bottom line, and have opposing sign in any case:

  1. In most cases, quite a bit of work has gone in prior to starting the YC program (perhaps about a year on average?) This might reduce the yearly value by 10-20%
  2. I think the 12% SP500 return cited is the arithmetic average of yearly returns. The geometric average, i.e. the realized rate of return should be more like 10.4%
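To make the second nitpick concrete, here's a minimal sketch with made-up yearly returns chosen so the arithmetic mean is exactly 12% (the specific numbers are illustrative, not the actual S&P 500 series):

```python
import statistics

# Hypothetical yearly returns whose arithmetic mean is 12%
returns = [0.30, -0.06, 0.30, -0.06]

# Arithmetic average: simple mean of the yearly returns
arithmetic = statistics.mean(returns)

# Geometric average: compound the returns, then annualize.
# This is the rate an investor actually realizes.
growth = 1.0
for r in returns:
    growth *= 1 + r
geometric = growth ** (1 / len(returns)) - 1

print(f"arithmetic: {arithmetic:.1%}")  # arithmetic: 12.0%
print(f"geometric:  {geometric:.1%}")   # geometric:  10.5%
```

The geometric mean is always at or below the arithmetic mean, with the gap growing with volatility (roughly half the variance), which is why a 12% arithmetic average can correspond to a ~10.4% realized rate.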
2
Jason
1y
#1 seems like a bigger deal if the optimal strategy is to do some startup work, then discontinue if you're not in the top 2 percent as evaluated by YC (because that assessment heavily updates your EV). Presumably there is some cost there -- at a minimum, the discontinuers could have been earning-to-give at a higher-paying job during that time. So I think the analysis could critically hinge on how accurately one can gauge their odds of being in the top 2 percent in a low-cost manner.

I worry that this presents the case for entrepreneurship as much stronger than it is[1]

  1. The sample here is companies that went through Y-Combinator, which has a 2% acceptance rate[2]
  2. As stated in the post, roughly all of the value comes from the top 8% of these companies
  3. To take it one step further, 25% of the total valuation comes from the top 0.1%, i.e. the top 5 companies (incl. Stripe & Instacart)

So at best, if a founder is accepted into YC, and talented enough to have the same odds of success as a random prior YC founder, $4M/yr might be a reasonable... (read more)

3
Ben_West
1y
Thanks! I don't want to put words in your mouth, but I think you might be modeling this as something like "2.5% chance of $4M, 97.5% chance of zero, therefore all numbers should be multiplied by 0.025", and that's not correct. E.g. I was rejected from YCombinator, but still had returns roughly similar to what's estimated here.

I think you might also be implying that the average EA is less qualified than the average YCombinator participant, even conditional on them being accepted to YCombinator. I have less data here, but of the two EA-ish companies I know that went through YCombinator, one had a ~$0 exit, and the other $500 million. At least within this (admittedly tiny) data set, the returns look pretty good.[1]

  1. ^

    You list Stripe's founders as being exceptional, which they surely are, but I could imagine Patrick explicitly earning to give if he had been born 10 years later.

Yeah, I think we're on the same page; my point is just that it only takes a single-digit multiple to swamp that consideration, and my model is that charities aren't usually that close. For example, GiveWell thinks its top charities are ~8x GiveDirectly, so taken at face value a match that displaces 1:1 from GiveDirectly would be 88% as good as a 'pure counterfactual'.
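That ~88% figure can be spelled out as a two-line calculation. The sketch below measures the value of one matched dollar in "GiveDirectly units" (an illustrative normalization, not something from the thread):

```python
# GiveWell's estimate quoted above: top charities ~8x GiveDirectly per dollar.
top_charity = 8.0     # matched dollar directed to a GiveWell top charity
give_directly = 1.0   # where the matched dollar would otherwise have gone

pure_counterfactual = top_charity               # match money wouldn't exist otherwise
displacing_match = top_charity - give_directly  # match displaced 1:1 from GiveDirectly

print(f"{displacing_match / pure_counterfactual:.1%}")  # 87.5%
```

So even a fully non-counterfactual match retains 7/8 of the value of a pure one, given that 8x effectiveness gap.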

  1. Most matches are of the free-for-all variety, meaning the funds will definitely go to some charity; it's just a question of who gets there first (e.g. Facebook & Every.org). While this might sound like a significant qualifier, it's almost as good as a pure counterfactual unless you believe that all nonprofits are ~equally effective.
    1. The 'worst case' is a matching pool restricted to one specific org, where presumably the funds will go there regardless, and doesn't really add anything to your donation.
  2. Conversely, as Lizka noted, even the best counterfactual on
... (read more)
2
david_reinstein
1y
Mostly agree, but I think this overstates it a bit: it would only be 'almost as good as a pure counterfactual' if you think the charity the other people (other than you) would be choosing is likely to be far less effective than the one you are choosing.

My rough belief is that this is usually the case when the other people exploiting this match would be donating to a 'wealthy country' charity (e.g., US poverty), to a 'pets' charity (cat rescue etc.), or (especially) to a 'luxury/cultural' charity (like a university, opera house, etc.).

If this is in a context in which the other people are likely to donate to a mainstream global health and development charity like UNICEF, this case is less clear. We really don't have good metrics to judge the effectiveness of charities like that. For mainstream research charities (cancer research, etc.), I'm even less sure, but I would lean towards 'these charities are probably far less effective than a GiveWell/ACE/etc. charity'.

There are still funds remaining, but it looks like each person can only set up three matched donations

A lot of this wouldn't show up in malaria, e.g. last year 39% of GiveWell funds directed went to malaria programs. But yeah, still would be interested to see data.

Naively, $1.6B/$5k ≈ 320k deaths averted[1]? Adjust down because some spending is less effective than AMF. Adjust up because of AMF cost/life inflation.

  1. ^

    (Or equivalent)

2
Linch
2y
Is there solid on-the-ground evidence of this effect? The annual worldwide malaria burden is on the order of 500k deaths/year, so at least in theory 330k total (reframed, 33k/year spread out across 10 years) should maybe be large enough to show up in the summary statistics if you do diff-in-diff studies etc.

It seems like it would be particularly difficult to know ahead of time whether one is well-suited to founding a charity, and I can imagine that is a major barrier to application. Do you have any suggestions for assessment of fit?

Yes - the best way to figure out if you’re a good fit is to apply. 

It's low cost, and we've developed a pretty good understanding of who will do well. It's not reasonable to expect to know yourself whether you'd be a good fit for doing something that you've never done. So I'd suggest you submit an application and see how far you get.

I will add, though, that not getting through doesn't mean you're NOT a good fit; it just means we had some concerns or reservations given our particular approach. However, if you do get in, you can be confident you ARE a good fit... (read more)

The biggest factor is the arrival of FTX, which has given more to infrastructure YTD than all others combined the prior two years

1
Miguel Lima Medín
2y
Thanks for your response, Tyler! Shouldn't these FTX donations be included under "Longtermism and Catastrophic Risk Prevention" instead of under "EA infrastructure"? Maybe I'm misinterpreting the Cause Areas.

Relevant excerpt from his prior 80k interview:

Rob Wiblin: ...How have you ended up five or 10 times happier? It sounds like a large multiple.

Will MacAskill: One part of it is being still positive, but somewhat close to zero back then...There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or

... (read more)
8
Andrew Gimber
2y
Aside from starting from a low baseline and adopting good mental health habits, I'd be interested to know how much of the 5–10x happiness multiplier Will would attribute to his professional success and the growth of the EA movement. Is that stuff all counteracted by the hedonic treadmill?

Yes, sorry, on reflection that seems totally reasonable

Yeah it looked like grants had been announced roughly through June, so the methodology here was to divide by proportion dated Jan-Jun in prior years (0.49)
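That extrapolation method can be sketched in a couple of lines. The year-to-date figure below is a placeholder for illustration, not a number from this thread:

```python
# Simple seasonal extrapolation: in prior years, 49% of grant dollars
# were dated Jan-Jun, so scale the half-year total up accordingly.
jan_jun_share = 0.49          # proportion of grants dated Jan-Jun in prior years
ytd_granted = 10_000_000      # hypothetical year-to-date total (placeholder)

full_year_projection = ytd_granted / jan_jun_share
print(f"${full_year_projection:,.0f}")  # ~$20.4M for this hypothetical input
```

This assumes the seasonal split of announcements is stable across years, which is the implicit assumption in the methodology described above.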

I'm not sure that inflation makes sense—this money isn't being spent on bread :) I think most of these funds would alternatively be invested, and returning above inflation on average.

[This comment is no longer endorsed by its author]
3
Vasco Grilo
2y
From Investopedia, "inflation is the rate at which prices for goods and services rise". So my understanding is that it is a broad measure of the purchasing power of money, and matters even if the money is not (directly) going towards buying food.

2012–present (the first longtermist grant was in 2015); no projection

2
Peter Wildeford
2y
Thanks! This is for 2014-2022? If so, does it include 2022 projection?

FTX has so far granted 10x more to AI stuff than OPP

This is not true; sorry, the Open Phil database labels are a bit misleading.

It appears that there is a nested structure to a couple of the Focus Areas, where e.g. 'Potential Risks from Advanced AI' is a subset of 'Longtermism', and when downloading the database only one tag is included. So for example, this one grant alone from March '22 was over $13M, with both tags applied, and shows up in the .csv as only 'Longtermism'. Edit: this is now flagged more prominently in the spreadsheet.

Many of the sources used here can't be automated, but the spreadsheet is simple to update

2
Adam Binks
2y
A data-point on this - today I was looking for and couldn't find this graph. I found effectivealtruismdata.com but sadly it didn't have these graphs on it. So would be cool to have it on there, or at least link to this post from there!
3
david_reinstein
2y
Fair point, but still may be worth joining forces or coordinating with Hamish
3
Peter Wildeford
2y
Thanks!!

EA does seem a bit overrepresented (sort of acknowledged here).

Possible reasons: (a) sharing was encouraged post-survey, with some forewarning (b) EAs might be more likely than average to respond to 'Student Values Survey'?

I strongly agree with this comment, especially the last bit.

In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically 'switching to direct work'.

2
lexande
2y
Maybe, though given the unilateralist's curse and other issues of the sort discussed by 80k here I think it might not be good for many people currently on the fence about whether to found EA orgs/megaprojects to do so. There might be a shortage of "good" orgs but that's not necessarily a problem you can solve by throwing founders at it. It also often seems to me that orgs with the right focus already exist (and founding additional ones with the same focus would just duplicate effort) but are unable to scale up well, and so I suspect "management capacity" is a significant bottleneck for EA. But scaling up organizations is a fundamentally hard problem, and it's entirely normal for companies doing so to see huge decreases in efficiency (which if they're lucky are compensated for by economies of scale elsewhere).

Re footnote, the only public estimate I've seen is $400k-$4M here, so you're in the same ballpark.

Personally I think $3M/y is too high, though I too would like to see more opinions and discussion on this topic.

3
Jeff Kaufman
2y
Thanks! I had missed that part of the article when skimming it again in writing this.  Note that a bit earlier, in discussing the highest priority roles, they give "typically" over $3M and "often" over $10M.

I enjoyed this post and the novel framing, but I'm confused as to why you seem to want to lock in your current set of values—why is current you morally superior to future you?

Do I want my values changed to be more aligned with what’s good for the world? This is a hard philosophical question, but my tentative answer is: not inherently – only to the extent that it lets me do better according to my current values.

Speaking for myself personally, my values have changed quite a bit in the past ten years (by choice). Ten-years-ago-me would likely be doing somethi... (read more)

Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.

Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct ... (read more)

See also

Offsetting the carbon cost of going from an all-chicken diet to an all-beef diet would cost $22 per year, or about 5 cents per beef-based meal. Since you would be saving 60 chickens, this is three chickens saved per dollar, or one chicken per thirty cents. A factory farmed chicken lives about thirty days, usually in extreme suffering. So if you value preventing one day of suffering by one chicken at one cent, this is a good deal.

I didn't read the goal here as literally to score points with future people, though I agree that the post is phrased such that it is implied that future ethical views will be superior.

Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.

In other words, this could alternatively be framed as 'backtesting ethics' or something, but 'future-proofing' speaks to (a) concern about repeating past mistakes (b) personal regret in future.

3
Holden Karnofsky
2y
I think I agree with Tyler. Also see this follow-up piece - "future-proof" is supposed to mean "would still look good if we made progress, whatever that is." This is largely supposed to be a somewhat moral-realism-agnostic operationalization of what it means for object-level arguments to be right.

I was especially interested in a point/thread you mentioned about people perceiving many charities as having similar effectiveness and that this may be an impediment to people getting interested in effective altruism

 

See here

A recent survey of Oxford students found that they believed the most effective global health charity was only ~1.5x better than the average — in line with what the average American thinks — while EAs and global health experts estimated the ratio is ~100x. This suggests that even among Oxford students, where a lot of outreach has b

... (read more)
  1. As Jackson points out, those willing to go the 'high uncertainty/high upside' route tend to favor far future or animal welfare causes. Even if we think these folks should consider more medium-term causes, comparing cost-effectiveness to GiveWell top charities may be inapposite.
  2. It seems like there is support for hits-based policy interventions in general, and Open Phil has funded at least some of this.
  3. The case for growth was based on historical success of pro-growth policy. Not only is this now less neglected, but much of the low-hanging fruit has been take
... (read more)

Thanks for this—I have often wished I had a better elevator pitch for EA.

One thing I might add is some mention of just how wide the disparity can be amongst possible interventions, since this seems to be one of the most overlooked key ideas.

I believe both this post and Ben’s original ‘Funding Overhang’ post mentioned that this is an update towards a career with direct impact vs earning-to-give.

But earning-to-give is still very high impact in absolute terms.

Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.

Thanks for writing; I too have worried that many folks got the wrong impression here.

I do think that more generally there is an inefficiency with so many EAs independently sinking time into investment management. I don't think that the answer is safe/passive/crowdsourcing, though.

Instead, I think what might be valuable is some sort of 'EA Mutual Funds'—a menu of investment profiles, each tied to a fund/manager. Possible value-add:

  1. Consolidation of labor to fund manager (research, tax planning)
  2. Access to leverage/accreditation
  3. Save fees vs using a DAF

Anyone know where the $250k is coming from? This is all I could find:

Matching funds are provided by generous donors who contribute to help amplify grassroots giving

6
MichaelStJules
2y
I'd guess the Gates Foundation, Camp.org or associated funders, since those are listed as supporting funders. https://www.every.org/about-us

Is there any consideration for Investing-to-Give in the survey?

  1. Is contributing to a DAF meant to count as a 'donation'? I would think yes, though not all I2G is done via DAF
  2. According to this post, many in the community think deploying <5% of available capital/yr is currently optimal

Perhaps it could be interesting to ask for both 'amount donated' and 'amount earmarked for donation'?

4
david_reinstein
2y
Thanks for raising this. In response to your comment, we revisit this in the appendix hosted HERE (give it a moment to auto-jump to the relevant section). This includes a discussion, some figures, and a bar chart. We hope to explore this more in future work. Some key points:

In 2020 we asked:

  • 17.9% of responses report saving to donate later (this represents 24.6% of those who answered this question).
  • Among those who report saving to donate, median donations are 5,000 USD and mean donations are 134,267 USD.

We present a histogram of donations in the linked section.

Depends immensely on if you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs.

 

I think the core argument here is that not enough EAs try to start a company, as opposed to trying and being rejected by VCs. IMO the point of seeding would be to take more swings.

Also, presumably the bar should be lower for an EA VC, because much of the founders' stake will also go to effective charity.
