This project seems relevant - an app to track COVID-19. Given the lack of testing in e.g. the US (and anecdotal evidence from my own social circle suggesting the disease is already more prevalent than official statistics indicate), simple data-gathering seems especially valuable.
It might be helpful if you linked to specific parts of the longer series which addressed this argument, or summarized the argument. Even if it would be good for people to read the entire thing it hardly seems like something we can expect as a precondition.
Whether you think it's a rationalization or not, the claim in the OP is misleading at best. It sounds like you're paraphrasing them as saying that they don't recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn't backed up by anything in the OP.
The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.
Your claim that the EA community profits from its association with utilitarianism is the opposite of the reality; utilitarianism has, if anything, a negative reputation in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.
I've talked with a few people who seemed under the impression that the EA orgs making...
Academia has influence on policymakers when it can help them achieve their goals, that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of po...
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it...
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does...
Second, as long as your actions impact everything, a totalizing metric might be useful.
Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?
Actual totalitarian governments have existed, and as far as I know they have not used such a metric.
Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
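To make that concrete, here's a toy sketch of the kind of central-planning problem linear programming solves. All coefficients are invented for illustration, and the solver just brute-forces the vertices of a two-variable feasible region rather than using the simplex method:

```python
from itertools import combinations

# Toy production-planning LP (all numbers invented):
# maximize 3x + 5y (planned value of goods x and y)
# subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
# A 2-variable LP's optimum lies at a vertex of the feasible polygon,
# so we can enumerate intersections of the constraint lines.

constraints = [  # each (a, b, c) means a*x + b*y <= c
    (1, 0, 4),
    (0, 2, 12),
    (3, 2, 18),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def solve_lp(objective, constraints):
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraint lines: no vertex here
        # Cramer's rule for the intersection of the two boundary lines
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            value = objective[0] * x + objective[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

print(solve_lp((3, 5), constraints))  # optimum: value 36 at (x, y) = (2, 6)
```

Real Soviet-era planning problems had thousands of variables, which is exactly why a single computational optimization was attractive there.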
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and the amateur stuff that gets posted around here.
Some claim to, others don't.
I worked at GiveWell / Open Philanthropy Project for a year. I wrote up some of those reports. It's explicitly not scoring all recommendations on a unified metric - I linked to the "Sequence vs Cluster Thinking" post which makes this quite clear - but at the ti...
"Compared to a Ponzi scheme" seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.
Maybe my criticism would have been better received if I'd left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.
retry the original case with double jeopardy
This sort of framing leads to publication bias. We want double jeopardy! This isn't a criminal trial, where the coercive power of a massive state is being pitted against an individual's limited ability to defend themselves. This is an intervention people are spending loads of money on, and it's entirely appropriate to continue checking whether the intervention works as well as we thought.
As I understand the linked page, it's mostly about retroactive rather than prospective observational studies, and usually for individual rather than population-level interventions. A plan to initiate mass bednet distribution on a national scale is pretty substantially different from that, and doesn't suffer from the same kind of confounding.
Of course it's mathematically possible that the data is so noisy relative to the effect size of the supposedly most cost-effective global health intervention out there, that we shouldn't expect the impact of the interve...
If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets don't work. If it takes randomized trials to justify spending on bednets, it cannot take only surveys to justify not spending on them, since the causal question is identical.
It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets. If absence of this evidence really isn't strong evidence of no effect...
One simple example: https://en.wikipedia.org/wiki/Grade_inflation
More generally, things like the profusion of makework designed to facially resemble teaching, instead of optimizing for outcomes.
We should also expect this to mean that countries such as Australia and China that heavily weight a national exam system when advancing students at crucial stages will have less corrupt educational systems than countries like the US which weight locally assessed factors like grades heavily.
(Of course, there can be massive downsides to standardization as well.)
I think the thing to do is try to avoid thinking of "bureaucracy" as a homogeneous quantity, and instead attend to the details of institutions involved. Of course, as a foreigner with respect to every country but one's own, this is going to be difficult to evaluate when giving abroad. This is one of the many reasons why giving effectively on a global scale is hard, and why it's so important to have information feedback of the kind GiveDirectly is working on. Long-term follow-up seems really important too, and even then there's going to be some substantial justified uncertainty.
There's an implied heuristic that if someone makes an investment that gives them an income stream worth $X, net of costs, then the real wealth of their society increases by at least $X. On this basis, you might assume that if you give a poor person cash, and they use it to buy education, which increases the present value of their children's earnings by $X, then you've thereby added $X of real wealth to their country.
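The implied arithmetic can be sketched as a simple discounted sum; the uplift, horizon, and discount rate below are invented purely for illustration:

```python
# Sketch of the implied heuristic (all numbers invented): if education
# raises a child's earnings by `annual_uplift` per year over a working
# life, its present value at discount rate r is the discounted sum.

def present_value(annual_uplift, years, discount_rate):
    return sum(annual_uplift / (1 + discount_rate) ** t
               for t in range(1, years + 1))

pv = present_value(annual_uplift=100, years=40, discount_rate=0.05)
# The heuristic reads this pv as real wealth added to the country.
print(round(pv, 2))
```

Whether that final inference is valid is exactly what's at issue.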
I am saying that we should doubt the premise at least somewhat.
To support a claim that this applies in "virtually all" cases, I'd want to see more engagement with pragmatic problems applying modesty, including:
Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to larger than about 1.09, the posterior estimate for MIRI is greater than 1038 HEWALYs/$10,000, thus beating 80,000 Hours.
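The quoted numbers come from a specific model in the post; as a generic illustration of how a strong prior shrinks an extreme evidence estimate, here's a minimal normal-normal conjugate update on a log scale. This is not the post's actual model, and every parameter besides the quoted evidence mean is an assumption:

```python
import math

def posterior_mean_normal(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate normal-normal update: precision-weighted average."""
    prior_prec = 1.0 / prior_sd ** 2
    obs_prec = 1.0 / obs_sd ** 2
    return (prior_mean * prior_prec + obs * obs_prec) / (prior_prec + obs_prec)

# Work in log10(HEWALYs/$10,000); all parameters below are assumptions.
prior_mean = 0.0              # prior centered on ~1 HEWALY/$10k (assumption)
prior_sd = 1.09               # playing the role of the "scale parameter" (assumption)
obs = math.log10(2.05369e9)   # evidence-distribution mean from the quote
obs_sd = 3.0                  # very noisy evidence (assumption)

post = posterior_mean_normal(prior_mean, prior_sd, obs, obs_sd)
print(round(post, 2), round(10 ** post, 1))
```

Even this crude version reproduces the qualitative behavior: a nine-order-of-magnitude evidence estimate gets pulled down to within a couple of orders of magnitude of the prior, and the answer is extremely sensitive to the prior's scale.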
This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.
[...]
...Later, we realised that understanding...
On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.
If I try to steelman the argument, it comes out something like:
Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.
I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.
Thanks for writing this! It's really helpful to have the basics of what the medical community knows.
I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.
So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right....
Kerry,
I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.
My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.
I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around...
I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help Open AI correct for aspects they've overlooked.
I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.
Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.
There was a recent post by 80,000 hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?
Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.
But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.
It's not obvious to me that the "near" bias about one's own health is generically worse than our "far" bias about what to do about the health of people far away. For instance, we might have a bias towards action that's not shared by, e.g., the children who feel sick after their worm chemo, or who get bitten by mosquitos through their supposedly mosquito-proof...
It sounds like we might be coming close to agreement. The main thing I think is important here, is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.
I see people with a broadly utilitarian outlook doing this a lot, perhaps because people...
You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification) my sense is that in most cases the benefits have been very strong, strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like A State documents some failures; we should think of those too.
Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things people do for themselves.
Deworming is an especially stark example. The mass drug administration program is to go to schools and force all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, because we hope killing the worms in this way buys big life outcome improvements. GiveWell estimates the ef...
EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
I...
SlateStarScratchpad claims (with more engagement here) that the literature mainly shows that parents who like hitting their kids or beat them severely do poorly, and that if you control for things like heredity or harsh beatings it’s not obvious that mild corporal punishment is more harmful than other common punishments.
My best guess is that children are very commonly abused (and not just by parents - also by schools), but I don't think the line between physical and nonphysical punishments is all that helpful for understanding the true extent of this.
I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.
The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs ...
The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.
I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via GoodVentures and even GiveWell direct and directed donations.
This is consistent with the optionality story in the beta launch post:
...If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that t
On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best...
Or to simply say "for global poverty, we can't do better than GiveWell so we recommend you just give them the money".
I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds and implies that critics need to figure something out that others have already settled (which itself socially implies they're less competent than those who aren't confused).
I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradict...
Tell me about Nick's track record? I like Nick and I approve of his granting so far but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.
Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum.
Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!
Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems that are pointed out e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.
I tried to write more directly about the mindset problem here:
http://benjaminrosshoffman.com/humility-argument-honesty/
http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/
If someone thinks concentrated decisionmaking is better, they should be overtly making the case for concentrated decisionmaking. When I talk with EA leaders about this they generally do not try to sell me on concentrated decisionmaking, they just note that everyone seems eager to trust them so they may as well try to put that resource to good use. Often they say they'd be happy if alternatives emerged.
It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it'll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.
I think sufficient evidence hasn't been presented, in large part because the argument has been tacit rather than overt.
I know of one related effort: https://twitter.com/webdevMason/status/1234216664113135616
http://www.coepi.org/