EA Creatives and Communicators Slack

Hi, I'm Leo, and I am running an EA-inspired charity evaluator in China, helping Chinese donors give to the most effective charities. We have a Chinese-language blog on effective giving, where our team posts effective giving-related content. May I ask to join the channel?

Despite billions of extra funding, small donors can still have a significant impact

Unfortunately, this post doesn't quite persuade me that small donors can be impactful compared to large donors. The gist of the post seems to be that, as long as there are professional EA fund managers, small donors can achieve a similar level of marginal impact. That much seems clear: since EA grant evaluators typically regrant unrestricted funding, they will treat any dollar, whether from a large or a small donor, the same. Everyone is allowed to save a life at $3,000.

However, if the EA movement is asking 'if we needed X amount of dollars, who should we approach?', would small donors still be the answer? I think this is the sort of 'impact' people are questioning, i.e. where we expect impact to predominantly come from. Within EA, small donors make up about 1/10 of Good Ventures + FTX funding. To have comparable total impact, small donors would need to be 10x more effective per dollar.
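The ratio argument above can be made explicit with a quick sketch. The funding figures here are purely illustrative assumptions, not actual data; only the ~1/10 ratio comes from the comment itself.

```python
# Illustrative back-of-envelope arithmetic; the dollar amounts are
# hypothetical placeholders, not real funding figures.
large_donor_funding = 10_000_000_000          # assumed large-donor pool, in $
small_donor_funding = large_donor_funding / 10  # the ~1/10 ratio from the comment

# For small donors' total impact to match large donors', their
# per-dollar effectiveness must make up the funding gap:
required_effectiveness_multiplier = large_donor_funding / small_donor_funding
print(required_effectiveness_multiplier)  # → 10.0
```

With a 1/10 funding share, the required multiplier is 10x regardless of the absolute amounts assumed.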

Of course, we also need to consider where the baseline is. Would 1/10 of the impact of large donors be decent enough for small donors collectively? As a movement that aims to do the most good, should we see small donors who give to AMF as impactful, because they save a life per $3,000 and that is very good for the world, or as unimpactful, because the number of people saved through such giving is expected to be much lower than what large donors achieve? These are probably the key considerations for how the impact of small donors should be viewed. Discussing only the marginal impact of small donors doesn't quite settle it for me.

Cullen O'Keefe: The Windfall Clause — sharing the benefits of advanced AI

I thought this was an informative talk. I especially enjoyed the exposition of the unequal distribution of gains from AI. However, I am not quite convinced that a voluntary Windfall Clause for companies to sign up to would be effective. The examples you gave in the talk aren't quite cases where voluntary reparation by companies comes close to the level of contribution one would reasonably expect of them, given the damage and inequality those companies caused. I am curious: if the windfall issue is essentially one of regulating an oligopoly, since only a small number of companies are involved, would it be more effective to simply tax those few firms, instead of relying on voluntary sign-ups? Perhaps what we need is not a voluntary legally binding contract, but a legally binding contract, period, regardless of whether the companies volunteer.

What if you’re working on the wrong cause? Preliminary thoughts on how long to spend exploring vs exploiting.

Thank you for the post; it was certainly very interesting to read. I learned a lot.