Are there any sites set up to gamify your donations? I rather liked how the old GWWC site had little token pictures next to the organizations you donated to (it vaguely felt like a "collect them all" game), along with the pie-chart breakdown and other nifty visualizations. The new pledge dashboard lacks all that, and for me it has reduced the pleasure I took in organizing, tracking, and thinking about my donation strategies. I can understand that some people prefer the simplification, but I don't. Are there any alternatives for people like me who prefer a more gamified, visualization-rich approach?

Worth pointing out that some academics think the parameters used in the Imperial model were too negative, based on the real-world data we have. See Bill Gates's take on it:

Fortunately it appears the parameters used in that model were too negative. The experience in China is the most critical data we have. They did their "shut down" and were able to reduce the number of cases. They are testing widely so they see rebounds immediately and so far there have not been a lot. They avoided widespread infection. The Imperial model does not match this experience. Models are only as good as the assumptions put into them. People are working on models that match what we are seeing more closely and they will become a key tool. A group called Institute for Disease Modelling that I fund is one of the groups working with others on this. ~ Bill Gates from his Reddit AMA

that's a tribal war between economists and epidemiologists?


I guess you aren't up to speed with the worm wars. Things have gotten pretty tribal there, with Twitter wars between respected academics (made worse by a viral BuzzFeed article that arguably politicized the issue...), but nobody (to date) would argue that EAs should stay out of deworming altogether because of that.

On the contrary: precisely because of all this shit, I'd think we need more EAs working on deworming.

Of course, in the case of deworming it seems clearer that throwing in EAs will lead to a better outcome. This isn't nearly as clear when it comes to politics, so I'm with you that EAs should be more wary about recommending political/politicized work. Either way, I think ozymandias's point was that just as we don't tell EAs in deworming to leave the sinking ship, it also seems absurd to have a blanket ban on EA political/politicized recommendations. You don't want a blanket ban and don't mind EA endorsing political charities, since, as you've said, you don't mind your favourite immigration charity being recommended. So the argument between you and ozymandias seems to mostly be about "to what degree."

And neither of you has actually operationalized your stance on "to what degree", which, in my view, is why the argument between the two of you dwindled into the void.

I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities.

Then make a series of videos about that instead, if it's so prevalent. It would do far more to undermine GiveWell and to strengthen your credibility.

Your video against GiveWell does not address or debunk any of GiveWell's evidence. It's a philosophical treatise on GiveWell's methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I've been robbed three times living in Vancouver and zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous; in fact, I have near-zero evidence to back up the claim that Vancouver is more dangerous.

All of your methodological objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel's piece on criticisms of effective altruism, and all of those criticisms were systematically responded to, and found lacking, in Halstead et al.'s defense paper.

I'd highly recommend reading both of these. They are both pretty bad ass.

I've for a long time seen things this way:

  • GiveWell: emphasizes effectiveness: the logic pull
  • TLYCS: emphasizes altruism: the emotion pull
  • GWWC: emphasizes the pledge: the act that unifies us as a common movement (or I think+feel it does)

One cute EA family.

We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I'm not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you look only at their disaster-management programs (where they've done RCTs to showcase effectiveness), we suddenly have far better cost-effectiveness assurance. This approach wouldn't yield a cost-effectiveness figure for all of GFI, but it would for at least one of their initiatives, and it should also drastically simplify your counterfactuals.

I've read ACE's full report on GFI. Both it and this post suggest to me that a broad, capture-everything approach is being taken by both ACE and OPP. I don't understand. Why do I not see, both on ACE's website and here, a systematic list of all of GFI's projects and activities, followed by an incremental systematic review of each one in isolation? I realize I probably sound like an obnoxious physicist encountering a new subject, so do note that I'm just confused; this is far from my area of expertise.

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me, please? An example with some numbers would likely make it much clearer. The development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is used to model a donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly just because $10,000 wouldn't fund the entire effort, or how that is tied to the acceleration of research.

Regarding acceleration dynamics, then: isn't it best to just model based on the most pessimistic, conservative curve? It makes sense to me that this would be the diminishing-returns one. This also fits what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we'll have to go beyond the scaffolding and growth-medium problems and also build an artificial blood-circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that the more precisely we want to simulate meat, the faster the scientific problems multiply. So a diminishing-returns curve is what I'd expect for GFI's impact, at least insofar as its work on clean meat is concerned.
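To make the diminishing-returns point concrete, here is a toy sketch (every number and function here is invented for illustration, not an actual estimate of GFI's impact): under a linear returns model a marginal $10,000 buys the same impact regardless of how much funding is already in the field, while under a conservative logarithmic curve the same $10,000, arriving on top of millions already committed, buys much less.

```python
import math

# Hypothetical impact curves; all parameters are made up for illustration.
def impact_linear(funding, value_per_dollar=0.001):
    # Impact scales proportionally with total funding.
    return funding * value_per_dollar

def impact_diminishing(funding, scale=1000.0):
    # Logarithmic curve: the earliest dollars buy the cheapest progress.
    return scale * math.log1p(funding / 100_000)

existing = 5_000_000   # assumed funding already in the field (invented)
donation = 10_000

marginal_linear = impact_linear(existing + donation) - impact_linear(existing)
marginal_dim = impact_diminishing(existing + donation) - impact_diminishing(existing)

print(f"linear marginal impact:      {marginal_linear:.2f}")
print(f"diminishing marginal impact: {marginal_dim:.2f}")
```

On these made-up numbers the diminishing-returns curve attributes only a fraction of the linear model's marginal impact to the same donation, which is the sense in which it is the more conservative assumption.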

It's pretty much like you said in this comment, and I completely agree with you; I'm quoting it here because of how well I think you've driven home the point:

...I myself once mocked a co-worker for taking an effort to recycle when the same effort could do so much more impact for people in Africa. That's wrong in any case, but I was probably wrong in my reasoning too because of numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you're talking about. Firstly, it's impossible to cure trachoma-induced blindness. Secondly [...] You should go back to playing in your sandboxes instead of preaching to adults about how to solve real-world problems

Also, I'm afraid that the doctor might be partially right.

Also, my experience has persistently been that the blindness-vs-trachoma example is quite off-putting, in a "now this person who might have gotten into EA is going to avoid it" kind of way. So if we want more EAs, this example seems miserably inept at getting people into EA. I myself have stopped using it in introductory EA talks altogether. I might be an outlier, though, and will start using it again if given a good argument that it works well, but I suspect I'm not the only one who has seen better results introducing EA without bringing up this example at all. Given all the uncertainty around it, it would seem that both emotions and numbers argue against the EA community using this example in introductory talks. Save it for the in-depth discussions that happen after an intro instead?

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered all the philosophical objections to effective altruism. All objections were fairly straightforward to address except one, which, in addressing it, seemed to upend how many participants viewed EA, given the image of EA they had held up to that point. That objection is: effective altruism is not that effective.

There is a lot to be said for this objection, and I highly recommend anyone who calls themselves an EA read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one: if we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the worm wars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less-certain charities like ACE's recommendations, so I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might go about it effectively.

The take-home of this, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms determine how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide-dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The phrase "slight disillusionment" comes to mind, given that these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the "scope insensitive" link does not seem to work, I'm afraid. (Update: thanks for fixing!)

Everyone is warm (±37°C, ideally), open-minded, reasonable and curious.

You, sir, will be thoroughly quoted and requoted on this gem, lol. I commend this heartfelt post.
