All of BenHoffman's Comments + Replies

1
tina
4y
CoEpi is a great team too! We're helping each other out now with a lot of cross-communication between our Slack channels.

This project seems relevant - an app to track COVID-19. Given the lack of testing in e.g. the US (and anecdotal evidence from my own social circle suggesting it's already more prevalent than official statistics indicate), simple data-gathering seems especially valuable.

2
Linch
4y
Are you referring to https://www.covid19risk.com/ , or something else?

It might be helpful if you linked to specific parts of the longer series which addressed this argument, or summarized the argument. Even if it would be good for people to read the entire thing, it hardly seems like something we can expect as a precondition.

14
[anonymous]
5y

Whether you think it's a rationalization or not, the claim in the OP is misleading at best. It sounds like you're paraphrasing them as saying that they don't recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn't backed up by anything in the OP.

The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.

8
Milan_Griffes
5y
For reference, the landing page for Ben's series.
Your perception that the EA community profits from being perceived as utilitarian is the opposite of the reality; utilitarianism is more likely to carry a negative reputation in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.

I've talked with a few people who seemed under the impression that the EA orgs making... (read more)

5
kbog
5y
It is perceived that way, but that doesn't mean the perception is beneficial. It's better if people perceive EA as making weaker philosophical claims, like maximizing welfare in the context of charity, as opposed to taking on the full utilitarian theory and all it says about trolleys and torturing terrorists and so on. Quantitative optimization should be perceived as a contextual tool that works bottom-up to answer practical questions, not one tied to a whole moral theory. That's really how cost-benefit analysis has already been used.
Academia has influence on policymakers when it can help them achieve their goals; that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory-tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of po
... (read more)
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it
... (read more)
5
kbog
5y
It's different because they have the right approach on how to compromise. They work on compromises that are grounded in political interests rather than moral values, and they work on compromises that solve the task at hand rather than setting the record straight on everything. And while they have failures, the reasons for those failures are structural (problems of commitment, honesty, political constraints, uncertainty) so you cannot avoid them just by changing up the ideologies.
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does
... (read more)
6
kbog
5y
It wasn't obvious to make GiveWell, until people noticed a systematic flaw (lack of serious impact analysis) that warranted a new approach. In this case, we would need to identify a systematic flaw in the way that regular diplomacy and deterrence efforts are approaching things. Professionals do regard North Korea as a threat, but not in a naive "oh they're just evil and crazy aggressors" sort of sense; they already know that deterrence is a mutual problem. I can see why one might be cynical about US government efforts, but there are more players besides the US government.

The Logan Act doesn't present an obstacle to aid efforts. You're not intervening in a dispute with the US government, you're just supporting the foreign country's local programs.

EAs have a perfectly good working understanding of the microeconomic impacts of aid. At least, GiveWell etc. do. Regarding macroeconomic and institutional effects, OK, not as much, but I still feel more confident there than I do when it comes to international relations and strategic policy. We have lots of economists, very few international relations people. And I think EAs show more overconfidence when they talk about nuclear security and foreign policy.
Second, as long as your actions impact everything, a totalizing metric might be useful.

Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?

2
kbog
5y
It's conceptually sensible, but not practically sensible given the level of effort that EAs typically put into cause prioritization. Actually measuring Total Utils would require a lot more work.
actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
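
As a toy illustration of that kind of computation (invented numbers, nothing to do with actual Soviet planning data), a production plan chosen with a single linear program, in the spirit of Kantorovich's original plywood-trust problem, might look like this:

```python
# Toy central-planning LP: choose production quantities x to maximize total
# output value subject to shared resource constraints. All numbers invented.
from scipy.optimize import linprog

value = [3, 5]       # value per unit of goods A and B (made up)
labor = [2, 4]       # labor-hours needed per unit of A and B
machines = [3, 2]    # machine-hours needed per unit of A and B

result = linprog(
    c=[-v for v in value],          # linprog minimizes, so negate to maximize
    A_ub=[labor, machines],         # one row per scarce resource
    b_ub=[100, 90],                 # available labor-hours and machine-hours
    bounds=[(0, None), (0, None)],  # quantities can't be negative
)
print(result.x, -result.fun)        # optimal plan (20, 15), total value 135
```

The metric optimized is a single scalar (output value), which is what makes a one-shot computational optimization possible at all.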

2
kbog
5y
Still sounds like their metric was just economic utility from production, which does not encompass many other policy goals (like security, criminal justice, etc.).
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and the amateur stuff that gets posted around here.

Some claim to, others don't.

I worked at GiveWell / Open Philanthropy Project for a year. I wrote up some of those reports. It's explicitly not scoring all recommendations on a unified metric - I linked to the "Sequence vs Cluster Thinking" post which makes this quite clear - but at the ti... (read more)

2
kbog
5y
OK, the issue here is you are assuming that metrics have to be the same in moral philosophy and in cause prioritization. But there's just no need for that. Cause prioritization metrics need to have validity with respect to moral philosophy, but that doesn't mean they need to be identical.

"Compared to a Ponzi scheme" seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.

Maybe my criticism would have been better received if I'd left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.

-1
Evan_Gaensbauer
6y
[epistemic status: meta] Summary: Reading comments in this thread which are similar to reactions I've seen you or other rationality bloggers receive from effective altruists on critical posts regarding EA, I think there is a pattern to how rationalists may tend to write on important topics that doesn't gel with the typical EA mindset. Consequently, it seems the pragmatic thing for us to do would be to figure out how to alter how we write to get our message across to a broader audience.

Upvoted. I don't know if you've read some of the other comments in this thread, but some of the most upvoted ones are about how I need to change up my writing style. So unfortunate compressions of what I actually write aren't new to me, either. I'm sorry I compressed what you actually wrote. But even an accurate compression of what you actually wrote might make my comments too long for what most users prefer on the EA Forum. If I just linked to your original post, it would be too long for us to read.

I spend more of my time on EA projects. If there were more promising projects coming out of the rationality community, I'd spend more time on them relative to how much time I dedicate to EA projects. But I go where the action is. Socially, I'm as if not more involved with the rationality community than I am with EA.

From my inside view, here is how I'd describe the common problem with my writing on the EA Forum: I came here from LessWrong. Relative to LW, I haven't found how or what I write on the EA Forum to be too long. But that's because I'm anchoring off EA discourse looking like SSC 100% of the time. But since the majority of EAs don't self-identify as rationalists, and the movement is so intellectually diverse, the expectation is the EA Forum won't be formatted on any discourse style common to the rationalist diaspora. I've touched upon this issue with Ray Arnold before. Zvi has touched on it too in some of his blog posts about EA. A crude rationalist impression might be t

retry the original case with double jeopardy

This sort of framing leads to publication bias. We want double jeopardy! This isn't a criminal trial, where the coercive power of a massive state is being pitted against an individual's limited ability to defend themselves. This is an intervention people are spending loads of money on, and it's entirely appropriate to continue checking whether the intervention works as well as we thought.

As I understand the linked page, it's mostly about retroactive rather than prospective observational studies, and usually for individual rather than population-level interventions. A plan to initiate mass bednet distribution on a national scale is pretty substantially different from that, and doesn't suffer from the same kind of confounding.

Of course it's mathematically possible that the data is so noisy relative to the effect size of the supposedly most cost-effective global health intervention out there, that we shouldn't expect the impact of the interve... (read more)

If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bed nets, as the causal question is identical.

It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets. If absence of this evidence really isn't strong evidence of no effect... (read more)

7
gwern
6y
You may find it hard to believe, but nevertheless, that is the fact: correlational results can easily be several times the true causal effect, in either direction. If you really want numbers, see, for example, the papers & meta-analyses I've compiled in https://www.gwern.net/Correlation on comparing correlations with the causal estimates from simultaneous or later conducted randomized experiments, which have plenty of numbers. Hence, it is easy for a causal effect to be swamped by any time trends or other correlates, and a followup correlation cannot and should not override credible causal results. This is why we need RCTs in the first place. Followups can do useful things like measure whether the implementation is being delivered, or can provide correlational data on things not covered by the original randomized experiments (like unconsidered side effects), but not retry the original case with double jeopardy.
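
A toy simulation of the failure mode gwern describes (all numbers invented): a background time trend larger than the treatment effect makes the before/after comparison badly wrong, while a contemporaneous randomized comparison recovers the causal effect.

```python
# Before/after vs randomized comparison when a time trend swamps the effect.
import random

random.seed(0)
TRUE_EFFECT = -5   # bednets reduce incidence by 5 cases per 1,000 (invented)
TREND = -8         # incidence falls 8 cases per 1,000 per year anyway (invented)

def incidence(year, has_net):
    base = 120 + TREND * year
    return base + (TRUE_EFFECT if has_net else 0) + random.gauss(0, 3)

# Observational before/after: everyone gets nets in year 1.
before = sum(incidence(0, False) for _ in range(1000)) / 1000
after = sum(incidence(1, True) for _ in range(1000)) / 1000
print("before/after estimate:", round(after - before, 1))   # ~ -13, conflated

# RCT in year 1: randomize nets, compare groups measured at the same time.
treated = sum(incidence(1, True) for _ in range(1000)) / 1000
control = sum(incidence(1, False) for _ in range(1000)) / 1000
print("randomized estimate:", round(treated - control, 1))  # ~ -5, the true effect
```

Flip the sign of the trend and the before/after estimate would instead suggest the nets don't work at all, which is the "double jeopardy" problem: the same survey data can't fairly retry a question the randomized design already answered.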

One simple example: https://en.wikipedia.org/wiki/Grade_inflation

More generally, things like the profusion of makework designed to facially resemble teaching, instead of optimizing for outcomes.

We should also expect this to mean that countries such as Australia and China, which heavily weight a national exam system when advancing students at crucial stages, will have less corrupt educational systems than countries like the US, which weight locally assessed factors like grades heavily.

(Of course, there can be massive downsides to standardization as well.)

0
Kirsten
6y
I'd find this pretty surprising based on my knowledge of the Canadian (Albertan) & British education systems. Does anyone have evidence for standardized exams decreasing "corruption"? (Ben, I'm not sure exactly what you meant by corruption here - do you mean grades that don't match ability, or lazy teaching, or something else?)

I think the thing to do is try to avoid thinking of "bureaucracy" as a homogeneous quantity, and instead attend to the details of institutions involved. Of course, as a foreigner with respect to every country but one's own, this is going to be difficult to evaluate when giving abroad. This is one of the many reasons why giving effectively on a global scale is hard, and why it's so important to have information feedback of the kind GiveDirectly is working on. Long-term follow-up seems really important too, and even then there's going to be some substantial justified uncertainty.

There's an implied heuristic that if someone makes an investment that gives them an income stream worth $X, net of costs, then the real wealth of their society increases by at least $X. On this basis, you might assume that if you give a poor person cash, and they use it to buy education, which increases the present value of their children's earnings by $X, then you've thereby added $X of real wealth to their country.
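
To make the heuristic concrete with a worked (entirely invented) example, the $X here is a discounted present value:

```python
# Present value of an income stream - the $X in the heuristic above.
# All inputs are invented for illustration.
def present_value(annual_gain, years, discount_rate):
    """Discounted sum of a constant annual income increment."""
    return sum(annual_gain / (1 + discount_rate) ** t for t in range(1, years + 1))

# E.g. education raises earnings by $200/year over 40 working years, at a 5% discount rate:
print(f"${present_value(200, 40, 0.05):,.0f}")  # ~ $3,432
```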

I am saying that we should doubt the premise at least somewhat.

For some balance, see Kelsey Piper's comments here - it looks like empirically, the picture we get from GiveDirectly is encouraging.

To support a claim that this applies in "virtually all" cases, I'd want to see more engagement with pragmatic problems applying modesty, including:

  • Identifying experts is far from free epistemically.
  • Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is iff you are, in which case you should expect information cascades due to the occasional overconfident person. If other people are not majoritarians because they're too stupid to notice the considerations for
... (read more)

Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to larger than about 1.09, the posterior estimate for MIRI is greater than 1038 HEWALYs/$10,000, thus beating 80,000 Hours.

This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
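
For intuition on why a tight prior dominates even an astronomically large estimate, here is a minimal sketch assuming (as such models often do, though not necessarily exactly as this one did) log-normal prior and evidence distributions, with invented parameters: the posterior log-mean is a precision-weighted average of the two log-means.

```python
# Normal-normal conjugate update in log space (i.e. log-normal prior/evidence).
# Parameter values are invented, not the actual inputs of the analysis above.
import math

def posterior_log(prior_mu, prior_sigma, ev_mu, ev_sigma):
    """Precision-weighted combination of prior and evidence log-means."""
    w_prior, w_ev = 1 / prior_sigma**2, 1 / ev_sigma**2
    mu = (w_prior * prior_mu + w_ev * ev_mu) / (w_prior + w_ev)
    sigma = math.sqrt(1 / (w_prior + w_ev))
    return mu, sigma

prior_mu, prior_sigma = math.log(1), 0.75  # skeptical prior near 1 HEWALY/$10k
ev_mu, ev_sigma = math.log(2e9), 10.0      # huge but wildly uncertain estimate

mu, sigma = posterior_log(prior_mu, prior_sigma, ev_mu, ev_sigma)
print(round(math.exp(mu + sigma**2 / 2), 2))  # posterior mean ~1.5, not ~2e9
```

Because the evidence distribution is so wide, the prior gets nearly all the weight; widening the prior scale parameter shifts weight back toward the evidence, which is the sensitivity described in the quote.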

1
Sindy_Li
7y
My personal take on the issue is that the better we understand how the updating works (including how to select the prior), the more seriously we should take the results. Currently we don't seem to have a good understanding (e.g. see Dickens' discussion: the way of selecting the median based on GiveDirectly seems reasonable, but there doesn't seem to be a principled way of selecting the variance, and this seems to be the best effort at it so far), so these updating exercises can be used as heuristics, but the results are not to be taken too seriously, and certainly not literally (together with the reason that input values are so speculative in some cases). This is just my personal view and certainly many people disagree; e.g. my team decided to use the results of Bayesian updating to decide on the grant recipient.

My experience with the project led me to be not very positive that it's worth investing too much in improving this quantitative approach for the sake of decision making, if one could instead spend time on gathering qualitative information (or even quantitative information that doesn't fit neatly in the framework of cost-effectiveness calculations or updating) that could be much more informative for decision making. This is along the lines of this post and seems to also fit the current approach of the Open Philanthropy Project (of utilizing qualitative evidence rather than relying on quantitative estimates).

Of course this is all based on the current state of such quantitative modeling, e.g. how little we understand how updating works as well as how to select speculative inputs for the quantitative models (and my judgment about how hard it would be to try to improve on these fronts). There could be a drastically better version of such quantitative prioritization that I haven't been able to imagine. It could be very valuable to construct a quantitative model (or parts of one), think about the inputs and their values, etc., for reasons explained here. E.g.

Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.

[...]

Later, we realised that understandin

... (read more)
5
kokotajlod
7y
That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course--it's hard to give out a prize on the basis of raw hunches--but nevertheless we should work towards finding ways to avoid it. Kudos to OPP for being so transparent in their thought process though!

On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.

3
Julia_Wise
7y
A reading I found laid things out clearly: Utilitarians and disability activists: what are the genuine disagreements?
1
saulius
7y
The article Vollmer cites says: In this case that seems to be the substance of the criticism. You can't anticipate every counter-argument one could make when talking to bigger audiences, but this one is pretty common. It might be necessary to say Not sure it would help; it could be that such arguments trigger bad emotions for other reasons and the counter-arguments we hear are just rationalizations of those emotions. It does feel like a minefield. Therefore, when comparing any 2 charities while introducing someone (especially an audience) to EA, we must phrase it carefully and sensitively. BTW, I think there is something to learn from the way Singer phrased it in the TED talk:

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog - trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

6
PeterSinger
7y
Why is the choice not directly comparable? If it were possible to offer a blind person a choice between being able to see, or having a guide dog, would it be so difficult for the blind person to choose? Still, if you can suggest better comparisons that make the same point, I'll be happy to use them.

I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.

Thanks for writing this! It's really helpful to have the basics of what the medical community knows.

I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.

So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right.... (read more)

1
Julia_Wise
7y
Yes, I think that's where some kind of an advance plan can be useful: "When I start acting like X, I want you to take step Y" or "When you act like X, I'm going to stop engaging with the conversation and start focusing on helping you get some rest, and we can write down where we were in the conversation and resume 48 hours later if you want."

Kerry,

I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around... (read more)

3
Kerry_Vaughan
7y
Hey, Ben. Just wanted to note that I found this very helpful. Thank you.

I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help OpenAI correct for aspects they've overlooked.

I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.

Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.

There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

3
AGB
7y
I found the post, was struggling before because it's actually part of their career guide rather than a blog post.

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

It's not obvious to me that the "near" bias about one's own health is generically worse than our "far" bias about what to do about the health of people far away. For instance, we might have a bias towards action that's not shared by, e.g., the children who feel sick after their worm chemo, or who get bitten by mosquitos through their supposedly mosquito-proof... (read more)

It sounds like we might be coming close to agreement. The main thing I think is important here, is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people... (read more)

0[anonymous]7y
Yes, I'm not sure I disagree with much of what you have said. I don't want my argument to be taken to show that we should ignore paternalism as a potentially important instrumental factor. Showing the implications of paternalism as a non-instrumentally important goal does not show anything about the instrumental importance of paternalism. Paternalism might not count in favour of GD as a non-instrumental goal, but count in favour of it as an instrumental goal. It's important to separate these two types of concern. I do think some people would have the non-instrumental justification in mind, so it's important to get clear on that.

You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification) my sense is that in most cases the benefits have been very strong, strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like A State shows some failures; we should think of those too.

1[anonymous]7y
I think I agree with maybe having a sceptical prior for paternalistic interventions, but I'm unsure about how strong such a prior would be. The information on what has worked in the past would determine the prior I should have when assessing a new intervention. If I looked at all past public health interventions and paternalism was not correlated at all with quality of outcome, even correcting for reasonable unknown side-effects, then it seems like I should give paternalism very little weight when assessing a new intervention. My examples were a bit cherry-picked, but they do show that if you look at the tail of the distribution of interventions in terms of impact, they tend to be paternalistic. However, I suspect there is something of a correlation between paternalism and outcomes: I suspect nearly all or all of the ineffectual/harmful interventions have been paternalistic - the PlayPump etc. This is borne out by the fact that GD is better than most other anti-poverty interventions. Then you have to take in the risk of hidden costs/harms, as you say. But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.

Deworming is an especially stark example. The mass drug administration program is to go to schools and force all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, because we hope killing the worms in this way buys big life outcome improvements. GiveWell estimates the ef... (read more)

0
DC
7y
'Do No Harm - on Problem Solving and Design' talks about fixer solutions vs. solver solutions. Its key points: Its concluding paragraph: Highly recommended read! :D
2[anonymous]7y
I agree that there might be instrumental concerns about paternalistic interventions, especially where we have limited information about how recipients act. However, these concerns do not always seem to be decisive about the effectiveness of interventions in terms of producing welfare; e.g. mandatory childhood vaccination is highly cost-effective notwithstanding its paternalism, and the same goes for tobacco taxes, mandatory seatbelt legislation, etc. When you look back at the most successful public health interventions, they have been at least as paternalistic as bednets and deworming - smallpox eradication, ORT, micronutrient fortification, etc. This shows that paternalism isn't that reliable a marker of lack of effectiveness. Wrt deworming, the issue seems to stem from features particular to deworming, rather than the fact that it is paternalistic.

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The PlayPump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

I... (read more)

1
AGB
7y
Thanks for digging up those examples. I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

This is a true dynamic, but to be specific about one of the examples I had in mind: a little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'. We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

SlateStarScratchpad claims (with more engagement here) that the literature mainly shows that parents who like hitting their kids or beat them severely do poorly, and that if you control for things like heredity or harsh beatings it’s not obvious that mild corporal punishment is more harmful than other common punishments.

My best guess is that children are very commonly abused (and not just by parents - also by schools), but I don't think the line between physical and nonphysical punishments is all that helpful for understanding the true extent of this.

6
Paul_Christiano
7y
Scott links to this study, which is more convincing. They measure the difference between "physical mild (slap, spank)" and "physical harsh (use weapon, punch, kick)" punishment, with ~10% of children in the latter category. They consider children of twins to control for genetic confounders, and find something like a 0.2 SD effect on measures of behavioral problems at age 25. There is still confounding (e.g. households where parents beat their kids may be worse in other ways), and the effects are smaller and for rarer forms of punishment, but it is getting somewhere.

I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.

The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs ... (read more)

3
AGB
7y
If this is basically saying 'we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas', then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as 'EAs' represent a wide range of commitment levels. One reason for this is that depending on which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they'll see wildly different distributions of commitment and similarly differing representation of various cause areas.

With that said, I'm not totally sure if that's the point you're making, because my personal experience in London is that we've been going out of our way to make the above points for a while; what's an example of marketing which you think works to maintain a homogeneous public image?

The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.

6
Kerry_Vaughan
7y
We didn't offer any alternative events during Elon's panel because we (correctly) perceived that there wouldn't be demand for going to a different event, and putting someone on stage with few people in the audience is not a good way to treat speakers. We had to set up an overflow room for people that didn't make it into the main room during the Elon panel, and even the overflow room was standing room only.

I think this is worth pointing out because of the preceding sentence: The implication is that we aimed to bias the conference towards AI risk and against global poverty because of some private preference for AI risk as a cause area.[1]

I think we can be fairly accused of aiming for Elon as an attendee and not some extremely well known global poverty person. However, with the exception of Bill Gates (who we tried to get), I don't know of anyone in global poverty with anywhere close to the combination of a) general renown and b) reachability. So, I think trying to get Elon was probably the right call. Given that Elon was attending, I don't see what reasonable options we had for more evenly distributing attention between plausible causes. Elon casts a big shadow.

[1] Some readers contacted me to let me know that they found this sentence confusing. To clarify, I do have personal views on which causes are higher impact than others, but the program design of EA Global was not an attempt to steer EA on the basis of those views.

I also originally saw the reply attributed to a different comment on mobile.

-3[comment deleted]7y

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell direct and directed donations.

This is consistent with the optionality story in the beta launch post:

If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that t

... (read more)

On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best... (read more)

4
Elizabeth
7y
Only if GiveWell and the EA Fund are both supposed to be perfect expressions of Elie's values. GiveWell has a fairly specific mission which includes not just high expected value but high certainty (compared to the rest of the field, which is a low bar). EA Funds was explicitly supposed to be more experimental. Like you say below, if organizers don't think you can beat GiveWell, encourage donating to GiveWell.

Or to simply say "for global poverty, we can't do better than GiveWell so we recommend you just give them the money".

1
Peter Wildeford
7y
Agreed - it definitely seems reasonable to me, and very consistent with GiveWell's overall approach, that Elie sincerely believes that donating to AMF is the best use of funds.

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused).

I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradict... (read more)

Tell me about Nick's track record? I like Nick and I approve of his granting so far but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.

Will's post introducing the EA Funds is the 4th most upvoted post of all time on this forum.

Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!

1
AGB
7y
That seems like a good use of the upvote function, and I'm glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:

1. I don't think it's a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track 'I agree with this' extremely strongly, even though lots of people (myself included) agree that ideally they shouldn't.
2. In the hypothetical where there's lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:

*Not sure if 'forgot' is quite the right word here, just mirroring your description of my comment as 'reminding' you.

Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems that are pointed out e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.

I tried to write more directly about the mindset problem here:

http://benjaminrosshoffman.com/humility-argument-honesty/

http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/

http://benjaminrosshoffman.com/against-responsibility/... (read more)

If someone thinks concentrated decisionmaking is better, they should be overtly making the case for concentrated decisionmaking. When I talk with EA leaders about this they generally do not try to sell me on concentrated decisionmaking, they just note that everyone seems eager to trust them so they may as well try to put that resource to good use. Often they say they'd be happy if alternatives emerged.

It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it'll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.

2
Michael_PJ
7y
Yes, in case it wasn't clear, I think I agree with many of your concrete suggestions, but I think the current situation is not too bad.

I think sufficient evidence hasn't been presented, in large part because the argument has been tacit rather than overt.
