All of Charles Dillon's Comments + Replies

Well done for doing this! I think attempted replications or re-examinations of existing work are under-done in EA and wish more were conducted.

Can you give an example of a point or points in there you found compelling?

That article looks like the usual "utilitarianism is bad" stuff (an argument which predates EA by a long time and has seen little progress in recent times) combined with some strong mood affiliation and straightforward misunderstandings of economic thinking to me.

I've edited it slightly to work on this, though it is not easy to make this point without appearing slightly callous, I think.

What was this distinct reason? If this was mentioned in the post, I didn't see it.

If it wasn't mentioned in the post, it feels disingenuous of you to not mention it and give the impression that you were left in the dark and had to come up with your own list of hypotheses. It's quite difficult for a third party to come to any conclusions without this piece of information.

This comment feels unnecessarily combative, even though I agree with the practical point that without this piece of information, third-party observers can't really get an accurate picture of the situation. So I agreed with but downvoted the comment.

I'm a bit confused by all the drive-by downvotes of someone sharing a quickly sketched-out, plausible-sounding idea.

I think we'd be better off if we encouraged this sort of thing rather than discouraged it, at least until there actually seems to be a problem with too many half-baked novel ideas being posted - if people disagree I'd like to know why.

Very few. The 2020 EA survey had only 6 people earning $1m+, which doesn't necessarily equate to donating as much. It's unlikely, I think, that fewer than 5% of such people took the survey given it had 2,000 responses, and I doubt there are more than 10,000 committed EAs, so I think there are likely under 100 such people.
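The Fermi estimate here can be written out explicitly. The 5% sampling floor is the commenter's stated assumption, not measured data:

```python
# Back-of-the-envelope bound on the number of EAs earning/donating at $1m+.
# Assumptions (per the comment, not established facts): the survey reached
# at least 5% (1 in 20) of committed EAs, and its high earners are roughly
# representative of high earners in the community overall.
survey_high_earners = 6
max_population_multiplier = 20  # assume at least 1 in 20 such people responded

upper_bound = survey_high_earners * max_population_multiplier
print(upper_bound)  # 120
```

An upper bound of 120 is consistent with the comment's "likely under 100" once you discount for the survey plausibly oversampling committed (and hence higher-earning) EAs.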

I expect much of the data is out there, because the majority of billionaires either want to give publicly, or they need to disclose when they change their shareholdings in their main source of wealth (in the case of the typical company founder) due to regulations, and donating to charity is seen as a good excuse to do this.

It may be rather difficult to gather though, as I don't expect there to be a nice centralised source.

Rob Percival (1mo):
I guess the harder the data is to gather the more valuable the resource would be! If it is actually something people are interested in that is...

62 pages is quite long - I understand then why you wouldn't put it on the forum.

I really dislike reading PDFs, as I read most non-work things on mobile, and on Chrome-based web browsers they don't open in browser tabs, which is where I store everything else I want to read.

I think I'd prefer some web based presentation, ideally with something like one web page per chapter/ large section. I don't know if this is representative of others though.

James Ozden (1mo):
I've made a Google Doc version [] if this is better - thanks for the feedback, it's been very useful!

I'm glad you produced this. One thing I found annoying, though, was that you said:

"The evidence related to each outcome, and how we arrived at these values, are explained further in the respective sections below."

But, they weren't? The report was just partially summarised here, with a link to your website. Why did you choose to do this?

James Ozden (1mo):
Ah yes, thanks for pointing that out! That sentence is there because we literally copied it from the report executive summary (so in that case it is below). It's a fair point though, so I will change that to reflect it not actually being below. I chose not to put the whole report [] on the forum just because it's so long (62 pages) and I was worried it would (a) put people off reading anything at all, (b) take me longer than what was reasonable to get all the formatting right, and (c) provide a worse reading experience given it was all formatted for PDF/Google Docs (and that I wasn't going to spend loads of time formatting it perfectly). Curious though - would you find it easier to read if it was all on the Forum, or what would be your preferred reading location for longer reports?

Do you have a back of the envelope calculation for the expected impact of e.g. a marginal USD 10,000?

Silvano Lieger (1mo):
Thank you for your question. I checked with our media planning team and an additional USD 10,000 could result in any of the following:

1) Reaching an additional 1.9 million gross contacts through PassengerTV (videos shown on Swiss public transport, mainly in buses). Those would be 10-second clips, shown every 8 minutes for 21 days. The screens look like this [].

2) An additional week of digital presence on so-called Rail eBoards at main train stations in the five largest cities of the country. This donation would likely allow us to book slots on 8-10 screens in total. In Switzerland, train stations are usually very busy and we are guaranteed to be visible with these large-format advertisements. The screens look like this [] and we've already been using them since Monday [].

If you need additional information, please let me know.

"""Poor nations would suffer unintended consequences because they rely heavily on exports to rich countries → In fact, poor nations will benefit from it because as we internalize costs, they’ll get a fair salary and get compensated for environmental costs."""

This is, to put it mildly, implausible, and requires strong evidence IMO.

That you dismissed the most important issue with your claim so tersely without really engaging with it suggests you simply do not care very much about the effects on the global poor, which in a scenario without economic growth wou... (read more)

You may be right. I admit I lack the knowledge to answer that, and I also see some potential problems for the global poor (in the post itself I already mention unemployment), about which of course I care, but I wonder whether they would be easily solvable or if they could be so big that it makes degrowth in rich countries unethical.

Hi Sofia. I agree that orgs should try to avoid relying on volunteer labor if they can, for the reasons you outline. I don't agree with your explanation for why the status quo is what it is.

I don't agree that "EA community's high use of volunteer labor shows that a lot of EAs don't relate to the average person in the world who is a couple of paychecks away from being homeless" first of all because I'm not clear on how high that use is, and secondly because the orgs who happen to be using volunteer labor may just be financially constrained. Just because the... (read more)

"The EA community has little awareness of their privilege."

This strikes me as straightforwardly untrue, unless you are holding the community to a standard which nobody anywhere meets. The EA community exists largely because individuals recognised their outsized (i.e. privileged) position to do good in the world, given their relative access to resources compared to e.g. those in poverty and non-human animals, and strove to use that privilege for good.

That EA doesn't, e.g. make it as easy for you to go to EA conferences as it is for a Western citizen is not ... (read more)

Hey Charles, thanks for reading and for your comment - I appreciate it. I totally agree with you that EA is based on people realising their privilege. This is why I'm in EA. Perhaps I should have been clearer when I argued this. This conclusion comes from the previous two premises:

P1: It's hard to get EA jobs, most EA jobs accept unpaid volunteers and some don't pay a living wage.
P2: It's hard to get involved in the movement generally due to locations of EA Hubs and conferences mostly being in Western countries.

and (added):

P3: Multiple EA organisations and individuals offer low wages, hire without a diverse pool of candidates, and don't pay for trial tasks.
P4: In the world, the majority of people cannot afford to work for free and their job is their main source of income.

To elaborate even further, the EA community's high use of volunteer labor shows that a lot of EAs don't relate to the average person in the world who is a couple of paychecks away from being homeless (doesn't have a large security net). Because EA was founded by people in Western countries, most people can't relate to what it's like not to be based in these countries - not being able to participate fully in these events. For example, most people in EA that I spoke to about me not being able to get a visa were surprised that this is even an issue, and many people who organise EA-related events have made plans to make them more accessible to people from more countries.

I also agree with the opportunity cost argument, and it's worth calculating. That's why I propose that more research be done on the value of providing equal opportunity to all community members.

There seems to me to be a fallacy here that assumes every action SBF takes needs to be justifiable on its first order EA merits.

The various stakes FTX have taken in crypto companies during this downturn are obviously not done in lieu of donations - they are business decisions, presumably done with the intention of making more money, as part of the process of making FTX a success. Whether they are good decisions in this light is hard for me to say, but I'd be inclined to defer to FTX here.

I was thinking through such a possibility descriptively, and how the EA community might respond, without trying to prescribe the EA community in a real-world scenario. I didn't indicate that well, though, so please pardon me for the error.

To clarify, given the assumptions that criticisms of SBF's or FTX's investments or donations might be used to attack EA as a movement by association, and that the EA community also had some responsibility to distance itself from those efforts, it wouldn't be that hard to do so. I personally disagree with the second assumption. I'm of the opinion the EA community has no such responsibility, but it seems at least some others do. SBF seems to have made some mistakes with his recent forays into politics, but they don't strike me as having been as bad as at least a significant minority of the EA community believes. My opinion is that the need some felt for the EA community to distance itself from SBF's political activities was excessive.

I agree with all of this. There are plenty of companies that have taken long(er)-term bets like the one FTX is making that have turned out to be among the best business decisions of the 21st century. Facebook, Amazon and companies Elon Musk has bought were not profitable for almost a decade. They were marred by criticisms and predictions of how they were always on the brink of imminent collapse. That was all bogus.

It's worth keeping survivorship bias in mind, and the fact that some bets made like this wound up as catastrophic business decisions. Yet it's not justified to assume by default that FTX's investments in this way will end up as bad rather than good decisions. That's especially true in the absence of more information. The author hasn't provided any such information and is not likely to have access to such information either. It seems like more pandering. I'm guessing the author is the kind who would've maligned Musk when he was a Democrat but now, because Musk is a Republican, would defend decisions he might...

No, I don't want to bet at this point - I'm not interested in betting such a small amount, and don't want to take the credit risk inherent in betting a larger amount given the limited evidence I've got about your reliability.

Dwarkesh Patel (3mo):
Really sorry man, unfortunately I forgot about it. I'm happy to accept that bet in public. How do you propose we make it official? Let's do $10 to $40?

I am skeptical of attempts to gatekeep here. E.g. I found Scoblic's response to Samotsvety's forecast less persuasive than their post, and I am concerned here that "amateurish" might just be being used as a scold because the numbers someone came up with are too low for someone else's liking, or they don't like putting numbers on things at all and feel it gives a false sense of precision.

That isn't to say this is the only criticism that has been made, but just to highlight one I found unpersuasive.

I am not an expert, but personally I see the current crop of nuke experts as primarily "evangelizers of the wisdom of the past". The nuke experts of the past, such as Tom Schelling, are more impressive (and more mathematical). If a better approach to nuke risk was easy to find, it would have probably already been found by one of the many geniuses of the 20th century who looked at nuke risk. If so, the best place to make a marginal contribution to nuke risk is by evangelizing the wisdom of the past: this can help avoid backsliding on things like arms control treaties (this also raises the question of the tractability of a geopolitical approach to reducing risk versus preparation/adaptation to nuclear war's environmental damage and versus other non-nuke cause areas).

That seems like quite the bold prediction, depending on the operationalization of "new" and "effective altruist".

I would give you 4-1 odds on this if we took "new" to mean folks not currently giving at scale using an EA framework and not deriving their wealth from FTX/Alameda or Dustin Moskovitz, and require the donors to be (i) billionaires per Bloomberg/Forbes and (ii) giving >50m each to Effective Altruist aligned causes in the year 2027.

Dwarkesh Patel (3mo):
I would be happy to take it at those odds! I'll DM you later about the bet!

I think the thesis is plausible here, but it would be more credible and easier to discuss and act upon if you gave more precise predictions or confidence intervals (e.g. "I think with X% confidence there will be Y billionaires with an aggregate net worth of >Z, excluding Dustin Moskovitz and the FTX/ Alameda crew, in EA by 2027").

I made a bet with a fellow blogger!

$250, even odds: 10 new EA billionaires in 5 years

Also, I made a manifold market on this:

And maybe even more if you open Metaculus questions on those events.

Using the code I linked above, it should require only minor changes if the Metaculus prediction is in one of the time series in the data, which I guess it is? Probably for someone with good familiarity with the API it would be a matter of an hour or two, otherwise it might take a bit longer.

I unfortunately will not have time to do this anytime soon.

"Perfectly calibrated", not "perfect". So if all of their predictions were correct, I.e. 20% of their 20% predictions came true etc.

So in this case, someone making all 90% predictions will have an expected score of 0.9×0.1^2 + 0.1×0.9^2 =0.09, while someone making all 80% predictions will have an expected score of 0.8×0.2^2 + 0.2×0.8^2=0.16

In general a lower expected score means your typical prediction was more confident.
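The arithmetic above can be checked with a short sketch. The function name is mine, not from the discussion; it computes the expected Brier score of a perfectly calibrated forecaster who always states the same probability:

```python
def expected_brier(p: float) -> float:
    """Expected Brier score of a perfectly calibrated forecaster who always
    states probability p: the event occurs with probability p (squared error
    (1 - p)**2) and fails to occur with probability 1 - p (squared error p**2)."""
    return p * (1 - p) ** 2 + (1 - p) * p ** 2  # simplifies algebraically to p * (1 - p)

print(round(expected_brier(0.9), 6))  # 0.09
print(round(expected_brier(0.8), 6))  # 0.16
```

The simplification to p × (1 − p) makes the general point visible: the expected score is maximised at p = 0.5 and shrinks toward 0 as confidence approaches either extreme, which is why a lower expected score indicates more confident typical predictions.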

One thing to note here is it is plausible that your errors are not symmetric in expectation, if there's some bias towards phrasing questions one way or another (this could be something like frequently asking "will [event] happen" where optimism might cause you to be too high in general, for example). This might mean assuming linearity could be wrong.

This is probably easier for you to tell since you can see the underlying data.

I'm using overconfident here to mean closer to extreme confidence (0 or 100, depending on whether they are below or above 50%, respectively) than they should be.

Minor point, but I disagree, at least a little, with the unqualified claim of being well calibrated everywhere except the 90% bucket.

Weak evidence that you are overconfident in each of the 0-10, 10-20, 70-80, 80-90 and 90%+ buckets is decent evidence of an overconfidence bias overall, even if those errors are mostly individually within the margin of error.

James Ozden (4mo):
I'm probably missing something but doesn't the graph show OP is under-confident in the 0-10 and 10-20 bins? e.g. those data points are above the dotted grey line of perfect calibration where the 90%+ bin is far below?

Very good point!

I see a few ways of assessing "global overconfidence":

  1. Lump all predictions into two bins (under and over 50%) and check that the lower point is above the diagonal and the upper one is below the diagonal. I just did this and the points are where you'd expect if we were overconfident, but the 90% credible intervals still overlap with the diagonal, so pooling all the bins in this way still provides weak evidence of overconfidence.
  2. Calculating the OC score as defined by Metaculus (scroll down to the bottom of the page and click the (+) sign next
... (read more)
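The pooled two-bin check in point 1 could be sketched as follows. The data and function name here are made up for illustration; the idea is just to compare the mean stated probability with the observed frequency in each half:

```python
# Sketch of the pooled two-bin overconfidence check: lump predictions into
# under-50% and 50%+ bins, then compare the mean stated probability against
# the observed base rate in each bin. Example data is invented.
def pooled_calibration(predictions):
    """predictions: list of (stated_probability, outcome) pairs, outcome 0 or 1.
    Returns {bin_name: (mean stated probability, observed frequency)}."""
    bins = {"under_50": [], "over_50": []}
    for p, outcome in predictions:
        key = "under_50" if p < 0.5 else "over_50"
        bins[key].append((p, outcome))
    result = {}
    for key, items in bins.items():
        if items:
            mean_p = sum(p for p, _ in items) / len(items)
            freq = sum(o for _, o in items) / len(items)
            result[key] = (mean_p, freq)
    return result

# Overconfidence shows up as freq > mean_p in the under-50 bin and
# freq < mean_p in the over-50 bin (both points "inside" the diagonal).
example = [(0.2, 1), (0.3, 0), (0.8, 1), (0.9, 0), (0.7, 1)]
print(pooled_calibration(example))
```

Pooling all bins this way trades resolution for statistical power, which is why it can surface a weak global overconfidence signal even when each individual bin's deviation sits within its own credible interval.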

It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto, and lobbying for a particular regulatory regime. Gates managed this by not running Microsoft any more, it might take a similarly big change in circumstances to get there for SBF.

I don't think the actual dollar number he spends is that important here. Media coverage can be very scope insensitive, so it isn't obvious to me that $100m would be meaningfully different to $50m or $25m here.

I agree more legible altruistic acts would be good for PR, and contra Stefan I do think there's a case for focusing on this to an extent, but that doesn't mean just picking a big number out of a hat and spending it.

Fair enough; I didn't mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near a lower bound on what he would have to spend (on projects with clear, measurable results) if he wants to become known as "that effective altruism guy" rather than "that cryptocurrency guy".

"generally really shitty salaries for researchers in the UK" as a downside for Oxbridge - this seems like something any org hiring researchers can unilaterally fix, at least for their researchers?

I think this is true for EA orgs but a) Some people want to contribute within the academic system b) Even EA orgs can be constrained by weird academic legal constraints. I think FHI is currently facing some problems along these lines (low confidence, better ask them).

I think it would not have been difficult for you to do a back of the envelope calculation for how many net makers would be out of business for each amount of nets distributed (a net maker can make X nets, coverage was Y% before AMF arrived). The lack of even a bare bones quantitative case reinforces my prior that this is very unlikely to be a significant issue.

Agree a simple calculation as outlined wouldn't be hard. That would effectively increase the cost-per-life-saved by 20%, say, which is noteworthy but not fundamentally changing things. The real risk is the longer-term, hard-to-measure impacts which may hold back economic progress generally. These are by definition hard to fit in to a cost-per-life saved calculation but that doesn't mean the impacts don't exist. Knowing these risks exist and intervening anyway is a choice some donors will be comfortable with but others will not.
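A minimal version of the suggested calculation might look like this. Every input below is a placeholder I've invented to show the shape of the estimate, not a figure from the discussion:

```python
# Hypothetical BOTE: how much local net-maker output might a distribution
# campaign displace? All inputs are illustrative assumptions.
nets_distributed = 1_000_000        # assumed campaign size
prior_market_coverage = 0.10        # assume 10% of prior demand was met locally
nets_per_maker_per_year = 2_000     # assumed annual output of one local maker

locally_made_nets_displaced = nets_distributed * prior_market_coverage
makers_affected = locally_made_nets_displaced / nets_per_maker_per_year
print(makers_affected)  # roughly 50 makers' annual output, on these inputs
```

Even as a crude sketch, this forces the key empirical questions into the open - what fraction of nets was locally made before, and how concentrated that production is - which is the point of asking for the calculation in the first place.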

To break it down into proportions:

About ⅖ of the charities each year exceed the cost-effectiveness of the strongest charities in their fields and have been supported by multiple independent funding bodies (Open Philanthropy, GiveWell, EA funds, etc.).
About ⅖ make progress, but remain small-scale or have an unclear cost-effectiveness.
About ⅕ shut down in their first 24 months without having a significant impact.

Minor note but these fractions aren't rendering correctly for me on mobile (they're showing up as a little black X), so I would suggest replacing them with percentages or something.

Thanks for the responses, it's been very helpful! I still do not agree that this is a productive step but I feel I have a better understanding of your approach than I did.

That deals with the venue problem, but not with the group dynamics one. If my social group is eating together, I do not want to be the one insisting that my presence requires everyone else to eat only vegan options. It's just going to annoy people and make them think I'm difficult to be around.

This is different to meeting one friend for food or something where the ask is smaller, but if there's a group of six friends, say, and only one person is vegan, the ask that everyone only eat vegan options every time the group meets is not going to engender goodwill for the vegan at the table, I think.

nico stubler (8mo):
if i really wanted to be in that environment (i.e., feigning normalcy and pleasure while those in my company eat animal bodies [which, as argued in the article, i generally view as problematic]), i would attend without eating. in fact, i've done so myself on two occasions. even so, i think if one was open to practicing the pledge in some circumstances but not all, they should still practice the pledge in those limited circumstances! we are all imperfect, and i don't think we should allow a commitment to purity to prevent us from making positive progress. (in my eyes, i just don't see the "sacrifices" that come with the Pledge to outweigh the benefits, but can understand that many don't yet agree).

I don't believe this policy is viable for most people without suffering meaningful social isolation as a result, with limited benefits.

In particular, if one has few vegan friends, then this precludes participating in group dinners unless the venue is vegan, which may be a tough sell to insist upon every time there's a group dinner. If one is not in a major metro area with lots of vegan options, it may preclude eating out at all, as there may not be any exclusively vegan dining options.

nico stubler (8mo):
hey Charles, as i clarify in section I, the Pledge does not require we only attend vegan venues. it simply requires that we only eat at vegan tables. this essentially leaves all the same venues friendly to vegan consumption friendly to Pledge practitioners as well.

Thanks for sharing this!

I don't think that I would have included this statement though:

I am sharing this in good faith that EAs who participate will donate whatever they earn beyond a reasonable value of their time to effective causes

For many EAs for whom this might be a good use of their time, especially those trying to position themselves for direct work, I would think donating this money will be a worse decision than using it to help themselves in their own efforts to do so.

Thank you for that Charles, I completely agree. By definition, EAs are aligned with the mission of doing good with the resources they have available. Investing in oneself can be one of the most high-leverage channels for increasing productivity and impact - I'll be more mindful about statements like this going forward :)

That's interesting, and if true a very disappointing and convenient delusion. Thanks!

I would think that for energy supply reasons Russia is a much more important partner for Germany than Ukraine, and that entirely explains German reluctance to help Ukraine, do you think this is incorrect?

I agree in general that depending on Russia for your energy is concerning. However, two points: (1) Given that it is possible to import LNG from the US (although more expensive), energy dependence on Russia is always in a sense chosen and needs itself to be explained. (2) This is just one data point, but at least in 2017 German dependence on gas was not higher than that of neighbouring countries. []
As weird as this sounds, I would hope that is the reason, because it would mean Germany acts for understandable reasons. However, my discussions with other Germans and broader public sentiment suggest to me that Germans are insanely pacifistic. Even things like sending troops to stabilize a region when asked by the respective country are seen as critical by many. [] A German IR researcher/pundit seems to share my belief. Maybe you should check out her Twitter.

Worth noting that Metaculus has the ability to record continuous distribution predictions, including both normal predictions and much more complicated distributions. E.g.

If you want to record your predictions for your own questions on Metaculus you can also create private questions here.

Understood, thanks. Yeah, this seems like a bit of an implausible just-so story to me.

This seems implausible to me, unless I'm misunderstanding something.

Are all such geniuses pre-1900 assumed to come from the aristocratic classes? Why?

If no, are there many counterexamples of geniuses in the lower classes being discovered in that time by existing talent spotting mechanisms?

If yes, why would this not be the case any more post-1900, or is the claim that it is still the case?

It's not exactly a nice conclusion. You'd need to think something like geniuses tend to come from families with genius potential, and these families also tend to be in the top couple of percent by income. It would line up with claims made by Gregory Clark in The Son Also Rises. To be clear, I'm not saying I agree with these claims or think this model is the most plausible one.

Despite your clarifications within the post here to say that we should grow the pie, and that CWRs are still underfunded, I find the zero-sum tone of much of the post (i.e. saying that we should do less CWR work and more other stuff) off-putting and poorly supported.

It is not obvious to me that other areas such as those you mention can readily absorb that much extra funding that quickly, or that anyone who fails to find a particular intervention and funds CWRs instead is currently erring in their approach.

I would guess that e.g. Open Phil are eager to fin... (read more)

Steven Rouk (7mo):
I'm late to the discussion, but I might add that I have a hypothesis that we have heavily underinvested in finding, connecting, and supporting existing supporters of farmed animal welfare. One symptom of this would be a seeming lack of diversity in the funding opportunities. Another symptom might be difficulty finding these opportunities, even if they do exist, due to lack of social network connectivity (i.e. there are no easy ways to find opportunities outside of our well-connected local social networks). Thus, perhaps one of the first things we should invest more heavily in is building up this connective infrastructure for the movement.

Lastly, I think the definition of "good opportunity" varies wildly, and a more holistic understanding of risk and uncertainty would nudge us in the direction of valuing strategic and tactical diversity as an inherent good, above and beyond any kind of impact evaluation or estimation. Thus, at an extreme, if you had 100% of funding invested in CWRs, then nearly any non-CWR opportunity would be seen as a good opportunity due to increasing the diversity of approaches. Of course, we don't have that extreme case of 100% investment in CWRs, but I think Kato's point is that a more pluralistic movement (i.e. a more diversified one than we currently have) does probably lead to higher impact, which would expand our definition of good opportunities to include things we might otherwise pass on. I believe Harish Sethu gave an excellent talk at the AR Conference a few years back using an apples and oranges market analogy to demonstrate this same kind of idea.
Nope, I think that is mostly (though not 100%) correct. My impression is that OpenPhil in particular is both more opportunity- and operationally-constrained than it is by funding. I do think though that they (and other funders) ought to do more active grant-making to try to identify non-CWR opportunities to fund (though they could very well already be doing this).

I also agree with your point that few if any other approaches could absorb significant amounts of money currently (though I also expect that there's many orgs you could talk with trying more novel approaches who would disagree with us here, so perhaps I'm just not sufficiently aware of them).

My point is more that many of the EA funders seem to have found a local optimum with CWRs, and if we put more efforts into exploring we would find other approaches that also look very promising. What I'd like to see is more work from EAA and funders to incubate and help build new approaches. I realize that that can be a difficult role for these organizations to play though.

I think there are several questions here, many of them vague, and if you sincerely want answers to them, you'd more usefully get answers if you wrote them separately and with a bit more context about what exactly you mean by the question.

[comment deleted] (10mo)

I agree with the main point here, and I think it's a good one, but the headline's use of present tense is confusing, and implies to me that they are currently doing a good job in their capacity as a donor.

Holden Karnofsky and Elie Hassenfeld founded GiveWell as a charity club at Bridgewater while they were working there, so Dalio definitely knows about EA.

Thanks for posting! I think sharing ideas like this is very valuable, and you give what looks to me like a good overview.

I think the "Gates and Wellcome already fund this" point is worth expanding on significantly before going any further.

How much do they fund it? What haven't they tried? These seem like important questions for gauging whether we'd expect an extra $X billion to be very useful here.

The Gates Foundation has donated a total of $4.1 billion to Gavi to-date, including $1.6 billion in 2020 for Gavi’s latest 2021-2025 strategic period. Through Gavi, the Gates Foundation has also funded AMCs, for example to expedite the development and availability of pneumococcal vaccines. In addition, the Foundation also funds vaccine development directly through its Global Health division, at an increasing rate. According to the latest published figures, the Gates Foundation donated $220 million to vaccine development in 2020, up from roughly $133 millio... (read more)

It makes sense, but it feels like a very narrow conception of what morality ought to concern itself with.

In your simulation example, I think it depends on whether we can be fully confident that simulated entities cannot suffer, which seems unlikely to me.

I thought the "why a Virtue Ethicist might care about consequence" section didn't really make a convincing argument. E.g.

"For them, the issue is not that the world is the kind of place where children drown, it is that the people in it are not the kinds of people who would save a drowning child. But it's still an issue! "

But what if the child is on its own and nobody has the opportunity to save the child, regardless of what kind of people they are? Is it OK for children to drown then?

Raymond D (1y):
Kind of. From a virtue ethicist standpoint, things that happen aren't really good or bad in and of themselves. It's not bad for a child to drown, and it's not good for a child to be saved, because those aren't the sorts of things that can be good or bad. It seems very unintuitive if you look at it from a consequentialist standpoint, but it is consistent and coherent, and people who are committed to it find it intuitive. I guess an equivalent argument from the other side would be something like "Consequentialists think that virtues only matter in terms of their consequences. But if someone were unknowingly in a simulation, and they were really evil, and spent all their time drowning simulated children, would they not be a bad person?" Does that make sense?

Do you have an opinion on whether that bar that funders are currently holding opportunities to should be lowered?

What are the most relevant considerations here in your opinion?

I'm not a funder myself, so I don't have a strong take on this question. I think the biggest consideration might just be how quickly they expect to find opportunities that are above the bar. This depends on research progress, plus how quickly the community is able to create new opportunities, plus how quickly they're able to grow their grantmaking capacity. All the normal optimal timing questions [] are also relevant (e.g. is now an unusually hingey time or not; the expected rate of investment returns). The idea of waiting 10 years while you gradually build a team & do more research, and maybe double your money via investing, seems like a pretty reasonable strategy, unless you think now is very hingey. This is basically the strategy that Open Phil has taken so far. Though you could also argue both that now *is* hingey, and that we're only deploying 1% of capital per year, which seems too low, which would both be reasons to deploy more rapidly.

If you can point out where I asked for "a Givewell style CEA" I might agree that it was an isolated demand for rigor.

I didn't do that, however. Instead, I asked for an attempt to make the case that it could be better than GiveDirectly - I didn't specify how one might make the case or any level of rigor at all.

What I was imagining was a basic back of the envelope sketch of how this intervention might be cost effective, which I don't think OP provided.

The supposed motivation for the post was EA having a funding overhang - in that context asking how it compares to another intervention which can potentially absorb near limitless amounts of money without diminishing returns seems totally reasonable to me.
