A donor-pays, philanthropy-advice-first model solves several of these problems; this is why SoGive adopted it.
Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.
> Would such a game "positively influence the long-term trajectory of civilization," as described by the Long-Term Future Fund? For context, Rob Miles's videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?
This struck me as strawmanning.
> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children's lives or provide cataract surgery to around 4000 people?
These are totally different modes of impact. I assume you could make this argument against any speculative work.
I'm more sympathetic to this, but I still didn't find your comment to be helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept "funds have an opportunity cost" (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn't a helpful update for me.
On the other hand, I appreciated this comment, which I thought to be valuable:
> I also like grant evaluation, but I would flag that it's expensive, and often, funders don't seem very interested in spending much money on it.
> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell's standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.
I think it's important that the author had this expectation. Many people initially got excited about EA because of the careful, thoughtful analysis of GiveWell. Those who are not deep in the community might reasonably see the branding "EA Funds" and have exactly the expectations set out in this quote.
I'm working from brief conversations with the relevant experts, rather than having conducted in-depth research on this topic. My understanding is:
I guess in either case it's possible for the food/agriculture lobby to nonetheless recognise that alt proteins could be a threat to them and object. I don't know how common it is for this to actually happen.
When advocating that governments invest more in alt proteins, the following angles are typically used:
I understand the latter two are generally popular with right-wing governments; either of these two positions can be advanced without referencing climate at all (which may be preferable in some cases, for the reasons Ben outlines).
I can confirm that there exists at least one NGO which has this type of risk on its radar. I don't want to say too much until we have gone through the appropriate processes for publishing our notes from speaking with them.
If any donors want to know more, feel free to reach out directly and I can tell you more.
An application I was expecting you to mention was longer-term forecasts. E.g. if there were a market about something in 2050, the incentives for forecasters are perhaps less good, because the time until resolution is so long. But a "chained" forecast could capture something like "what will next year's forecast say", where next year's forecast is about the following year's forecast, and so on until you hit 2050, when the market resolves to the ground truth.
This assumes that forecasters are less effective when it comes to markets which don't resolve for a long time.
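As a toy sketch of the chaining idea (the random-walk model of evidence, the year range, and all variable names are my own illustration, not taken from any real market design): each year's market resolves within a year, to the next year's forecast, and only the final market resolves to ground truth.

```python
import random

random.seed(0)

START, HORIZON = 2025, 2050

# Hypothetical toy model: the quantity of interest is a probability that
# drifts as a bounded random walk; each year's market price tracks the
# evolving evidence (a rational-expectations assumption).
truth = 0.5
forecasts = {}
for year in range(START, HORIZON + 1):
    truth = min(1.0, max(0.0, truth + random.uniform(-0.05, 0.05)))
    forecasts[year] = truth

def resolve(year):
    """Chained resolution: the market for `year` pays out based on the
    market for `year + 1`; only the final market uses ground truth."""
    if year == HORIZON:
        return forecasts[HORIZON]  # ground truth arrives at the horizon
    return forecasts[year + 1]     # otherwise, next year's forecast

payouts = {y: resolve(y) for y in range(START, HORIZON + 1)}
```

The point of the construction is that every market except the last one resolves after only one year, so forecasters get feedback and payment quickly even though the underlying question is about 2050.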
In 2020, we at SoGive were excited about funding nuclear work for similar reasons. We thought that the departure of the MacArthur foundation might have destructive effects which could potentially be countered with an injection of fresh philanthropy.
We spoke to several relevant experts. Several of these were (unsurprisingly) with philanthropically funded organisations tackling the risks of nuclear weapons. Also unsurprisingly, they tended to agree that donors could have a great opportunity to do good by stepping in to fill gaps left by MacArthur.
There was a minority view that this was not as good an idea as it seemed. The counterargument was that MacArthur had left for (arguably) good reasons: namely, after throwing a lot of good money after bad, they had not seen strong enough impact for the money invested. I understood these comments to be the perspectives of commentators external to MacArthur (i.e. I don't think anyone was saying that MacArthur themselves believed this, and we didn't try to work out whether they did).
Under this line of thinking, some "creative destruction" might be a positive. On the one hand, we risk losing some valuable institutional momentum, and perhaps some talented people. On the other hand, it allows for fresh ideas and approaches.
I don't think the case for bringing the ISS down in a controlled way rests on the risk that it might hit someone on earth, or on "the PR disaster" of us "irrationally worrying more about the ISS hitting our home than we are getting in their car the next day".
Space debris is a potentially material issue.
The geopolitics of space debris gets complicated.
I haven't done a cost-effectiveness analysis to justify whether $1bn is a good use of that money, but I think it's more valuable than this article seems to suggest.