AGB

Comments

Money Can't (Easily) Buy Talent

I was surprised to discover that this doesn't seem to have already been written up in detail on the forum, so thanks for doing so. The same concept has been written up in a couple of other (old) places, one of which I see you linked to and I assume inspired the title:

Givewell: We can't (simply) buy capacity

80000 Hours: Focus more on talent gaps, not funding gaps

The 80k article also has a disclaimer and a follow-up post that felt relevant here; it's worth being careful with a word as broad as 'talent':

Update April 2019: We think that our use of the term ‘talent gaps’ in this post (and elsewhere) has caused some confusion. We’ve written a post clarifying what we meant by the term and addressing some misconceptions that our use of it may have caused. Most importantly, we now think it’s much more useful to talk about specific skills and abilities that are important constraints on particular problems rather than talking about ‘talent constraints’ in general terms. This page may be misleading if it’s not read in conjunction with our clarifications.

richard_ngo's Shortform

But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?


I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when I cash out people's claims it usually turns out they are asserting 10x - 100x multipliers, not 100x - 1000x multipliers, let alone higher than that. It appears the divergence in our bottom lines is coming from my cosmopolitan values and low tolerance for act/omission distinctions, and hopefully we at least agree that if even the entrenched advocate doesn't actually think their cause is best under my values, I should just move on. 

As an aside, I know you wrote recently that you think more work is being done by EA's empirical claims than by its moral claims. I think this is credible for longtermism but mostly false for Global Health/Poverty. People appear to agree that they can save lives in the developing world incredibly cheaply, in fact usually giving lower numbers than I think are possible. We aren't actually that far apart on the empirical state of affairs. They just don't want to. Nor are they refusing because they have even better things to do; most people do very little. Or as Rob put it:

Many people donate a small fraction of their income, despite claiming to believe that lives can be saved for remarkably small amounts. This suggests they don’t believe they have a duty to give even if lives can be saved very cheaply – or that they are not very motivated by such a duty.

I think that last observation would also be my answer to 'what evidence do we have that we aren't in the second world?' Empirically, most people don't care, and most people who do care are not trying to optimise for the thing I am optimising for (in many cases it's debatable whether they are trying to optimise at all). So it would be surprising if they hit the target anyway, in much the same way it would be surprising if AMF were the best way to improve animal welfare.

richard_ngo's Shortform

I think we’re still talking past each other here.

You seem to be implicitly focusing on the question ‘how certain are we that these will turn out to be best?’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities?’.

Listing unaccounted-for second order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Currently, I’m not seeing a clear argument for that. ‘Might have wildly large impacts’, ‘very rough estimates’, ‘policy can have enormous effects’...these are all phrases that increase uncertainty rather than concretely change EVs and so are decision-irrelevant. (That’s not quite true; we should penalise rough things’ calculated EV more in high-uncertainty environments due to winners’ curse effects, but that’s secondary to my main point here).
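
(To make that parenthetical slightly more concrete, here is a minimal sketch of the winners'-curse-style penalty. The numbers, the prior of 1 GiveWell-unit, and the normal-normal shrinkage model are all my own illustrative assumptions, not anything from the discussion itself: the rougher the estimate, the more its calculated EV gets pulled back toward the prior.)

```python
# Toy illustration (made-up numbers): why a rough, high-variance EV estimate
# should be shrunk toward the prior mean more than a precise one.
# Standard normal-normal Bayesian updating: the posterior mean is a
# precision-weighted average of the prior mean and the raw estimate.

def shrunk_ev(raw_estimate, estimate_sd, prior_mean=1.0, prior_sd=0.5):
    """Posterior mean of true cost-effectiveness given a noisy estimate."""
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight on the estimate
    return w * raw_estimate + (1 - w) * prior_mean

# A precise estimate of "3x GiveWell" keeps most of its value...
print(shrunk_ev(3.0, estimate_sd=0.5))   # ~2.0
# ...while a very rough estimate of "10x GiveWell" is heavily discounted.
print(shrunk_ev(10.0, estimate_sd=5.0))  # ~1.1
```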

Another way of putting it is that this is the difference between one’s confidence level that what you currently think is best will still be what you think is best 20 years from now, versus trying to identify the best all-things-considered donation opportunity right now with one’s limited information.

So concretely, I think it’s very likely that in 20 years I’ll think one of the >20 alternatives I’ve briefly considered will look like it was a better use of my money than Givewell charities, due to the uncertainty you’re highlighting. But I don’t know which one, and I don’t expect it to outperform by 20x, so picking one essentially at random still looks pretty bad.
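
(As a rough illustration of why the random pick looks bad, here is a back-of-the-envelope sketch using the round numbers from the paragraph above; the assumed 5x multiplier for the best option and the pessimistic "the rest do roughly nothing" assumption are mine, purely for illustration.)

```python
# Back-of-the-envelope for the "pick one of ~20 alternatives at random" point.
# Round numbers from the comment above; the value assumed for the other
# alternatives is an illustrative assumption only.
n_alternatives = 20
best_multiplier = 5.0   # suppose the one genuinely better option is 5x GiveWell
other_multiplier = 0.0  # and, pessimistically, the rest do roughly nothing

# Expected value (in GiveWell-units) of donating to a random alternative:
ev_random = (best_multiplier + (n_alternatives - 1) * other_multiplier) / n_alternatives
print(ev_random)  # 0.25, i.e. well below the GiveWell baseline of 1.0

# Under this assumption the unidentified best option would need to outperform
# by more than 20x before a blind pick beat simply giving to GiveWell.
```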

A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn’t happened. That’s the relevance of those decisions to me, rather than any belief that they’ve done a secret Uber-Analysis.

richard_ngo's Shortform

Thanks for the write-up. A few quick additional thoughts on my end:

  • You note that Open Phil still expect their hits-based portfolio to moderately outperform Givewell in expectation. This is my understanding also, but one slight difference of interpretation is that it leaves me very baseline skeptical that most 'systemic change' charities people suggest would also outperform, given the amount of time Open Phil has put into this question relative to the average donor.
  • I think it's possible-to-likely I'm mirroring your 'overestimating how representative my bubble was' mistake, despite having explicitly flagged this type of error before because it's so common. In particular, many (most?) EAs first encounter the community at university, whereas my first encounter was after university, and it wouldn't shock me if student groups were making more strident/overconfident claims than I remember in my own circles. On reflection I now have anecdotal evidence of this from 3 different groups.
  • Abstaning on the 'what is the best near-term human-centric charity' question, and focusing on talking about the things that actually appear to you to be among the best options, is a response I strongly support. I really wish more longtermists took this approach, and I also wish EAs in general would use 'we' less and 'I' more when talking about what they think about optimal opportunities to do good. 
My mistakes on the path to impact

(Disclaimer: I am OP’s husband)

As it happens, there are a couple of examples in this post where poor or distorted versions of 80k advice arguably caused harm relative to no advice; over-focus on working at EA orgs due to ‘talent constraint’ claims probably set Denise’s entire career back by ~2 years for no gain, and a simplistic understanding of replaceability was significantly responsible for her giving up on political work.

Apart from the direct cost, such events leave a sour taste in people’s mouths and so can cause them to dissociate from the community; if we’re going to focus on ‘recruiting’ people while they are young, anything that increases attrition needs to be considered very carefully and skeptically.

I do agree that in general it’s not that hard to beat ‘no advice’; rather, a lot of the need for care comes from simplistic advice’s natural tendency to crowd out nuanced advice.

I don’t mean to bash 80k here; when they become aware of these things they try pretty hard to clean it up, they maintain a public list of mistakes (which includes both of the above), and I think they apply way more thought and imagination to the question of how this kind of thing can happen than most other places, even most other EA orgs. I’ve been impressed by the seriousness with which they take this kind of problem over the years.

The Folly of "EAs Should"

The ‘any decent shot’ is doing a lot of work in that first sentence, given how hard the field is to get into. And even then you only say ‘probably stop’.

There’s a motte/bailey thing going on here, where the motte is something like ‘AI safety researchers probably do a lot more good than doctors’ and the bailey is ‘all doctors who come into contact with EA should be told to stop what they are doing and switch to becoming (e.g.) AI safety researchers, because that’s how bad being a doctor is’.

I don’t think we are making the world a better place by doing the second; where possible we should stick to ‘probably’ and communicate the first, nuance and all, as you did do here but, as Khorton notes, people often don’t do in person.

AGB's Shortform

Thanks for the long comment, this gives me a much richer picture of how people might be thinking about this. On the first two bullets:

You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms, I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you specifically to 99% (or at least, something in that ballpark), and would similarly lead you to roughly 990,000M with the alternate language?

My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% - 99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with an idea that we should apply our extreme confidence to the things inside the product (due to correlated causes), rather than the thing outside; does that sound fair?

The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about more. The following are gut responses:

I'm not sure which step of this you get off the boat for

I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with <10% threats that will surely appear over the centuries, and as a result I found and continue to find the seemingly-baseline optimism of longtermist EA very jarring.

(Again, the above is a gut response as opposed to a reasoned claim.)

Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work.

Yeah, Owen made a similar point, and actually I was using civilisation rather than 'the human species', which is 20x shorter still. I honestly hadn't thought about intelligent life as a possible class before, and that probably is the thing from this conversation that has the most chance of changing how I think about this.

*"The survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent. "

AGB's Shortform

Thanks for this. I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead. 

On your first bullet:

You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" - i.e. not ruling out very long futures - rests largely on model uncertainty...

...This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).

I'll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events. 

Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millennium, conditional on having survived up to that point. As a result, we either need to set some of the intervening probabilities, like P(Humanity survives the next millennium | Humanity has survived to the year 500,000,000 AD), extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% - 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was 'survive the next year' if I wanted to make the requirements even more extreme.
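
(To spell out the arithmetic, a quick sketch; the only inputs are the numbers already in the paragraph above.)

```python
# The conjunction arithmetic from the paragraph above, made explicit.
# One billion years = one million millennia. If every conditional
# per-millennium survival probability is capped at 99.99%, the overall
# survival probability is capped at:
p_cap = 0.9999 ** 1_000_000
print(p_cap)  # ~3.7e-44

# Conversely, for the billion-year probability to be even 1%, the
# geometric mean of the per-millennium probabilities must be at least:
p_needed = 0.01 ** (1 / 1_000_000)
print(p_needed)  # ~0.9999954, i.e. ~99.9995% per millennium
```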

Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millennia really should have excellent odds of surviving the next millennium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening here is that the fact that humanity has survived 500,000 millennia is massive, overwhelming Bayesian evidence that the 'correct' model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analogous to the intuitive extreme credence most people have that they won't die in the next second.
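
(Here is a minimal sketch of that Bayesian point using a toy two-model setup; the per-millennium survival probabilities and the one-in-a-million prior are made up purely for illustration.)

```python
# Toy two-model version of the argument above; all numbers are made up.
# "Fragile" model: humanity survives each millennium with p = 0.99.
# "Robust"  model: humanity survives each millennium with p = 0.999999.
p_fragile, p_robust = 0.99, 0.999999
prior_robust = 1e-6            # give the robust model only a one-in-a-million prior

n = 500_000                    # observed millennia of survival so far
like_fragile = p_fragile ** n  # ~1e-2183; underflows to 0.0 in floating point
like_robust = p_robust ** n    # exp(-0.5) ~= 0.61

# Posterior probability of the robust model after observing that survival:
posterior_robust = (prior_robust * like_robust) / (
    prior_robust * like_robust + (1 - prior_robust) * like_fragile
)
print(posterior_robust)  # ~1.0: the observation swamps even a tiny prior

# So the credence that the year-500,000,000 civilisation survives its next
# millennium ends up essentially equal to p_robust, i.e. ~0.999999.
print(posterior_robust * p_robust + (1 - posterior_robust) * p_fragile)
```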

AGB's Shortform

Thanks for the link. I did actually comment on that thread, and while I didn't have it specifically in mind it was probably close to the start of me asking questions along these lines.

How modest should you be?

This comment is great, strong-upvoted.

There are enough individual and practical considerations here (in both directions) that in many situations the actual thing I would advocate for is something like “work out what you would do with both approaches, check against results ‘without fear or favour’, and move towards whatever method is working best for you”.
