CEA has now confirmed that Miri was correct to understand their budget - not EVF's budget - as around $30m.
In terms of things that would have helped when I was younger, I'm pretty on board with GWWC's new community strategy,[1] and Grace's thoughts on why a gap opened up in this space. I was routinely working 60-70 hour weeks at the time, so doing something like an EA fellowship would have been an implausibly large ask, and a lot of related things have a vibe I would have found very off-putting. My actual starting contact points with the EA community consisted of no-obligation low-effort socials and prior versions of EA Global.
In terms of things now,...
I even explicitly said I am less familiar with BP as a debate format.
The fact that you are unfamiliar with the format, and yet are making a number of claims about it, is pretty much exactly my issue. Lack of familiarity is an anti-excuse for overconfidence.
The OP is about an event conducted in BP. Any future events will presumably also be conducted in BP. Information about other formats is only relevant to the extent that they provide information about BP.
I can understand not realising how large the differences between formats are initially, and so a...
Finally, even after a re-read and showing your comment to two other people seeking alternative interpretations, I think you did say the thing you claim not to have said. Perhaps you meant to say something else, in which case I'd suggest editing to say whatever you meant to say. I would suggest an edit myself, but in this case I don't know what it was you meant to say.
I've edited the relevant section. The edit was simply "This is also pretty common in other debate formats (though I don't know how common in BP in particular)".
...By contrast, criticisms I think
You did give some responses elsewhere, so a few thoughts on your responses:
But this is really far from the only way policy debate is broken. Indeed, a large fraction of policy debates end up not debating the topic at all, but end up being full of people debating the institution of debating in various ways, and making various arguments for why they should be declared the winner for instrumental reasons. This is also pretty common in other debate formats.
(Emphasis added). This seems like a classic case for 'what do you think you know, and how do you think yo...
Thanks for this, pretty interesting analysis.
Every time I come across an old post in the EA forum I wonder if the karma score is low because people did not get any value from it or if people really liked it and it only got a lower score because fewer people were around to upvote it at that time.
The other thing going on here is that the karma system got an overhaul when forum 2.0 launched in late 2018, giving some users 2x voting power and also introducing strong upvotes. Before that, one vote was one karma. I don't remember exactly when the new system came...
I think these concerns are all pretty reasonable, but also strongly discordant with my personal experience, so I figured it would help third parties if I explained the key insights/skills that I think I learned from, or had strongly reinforced by, my debating experience.
Three notable caveats on that experience:
I don't have time to respond in much depth because of a bunch of competing commitments, but I want to say that all of these are good points and I appreciate you making them.
So taking a step back for a second, I think the primary point of collaborative written or spoken communication is to take the picture or conceptual map in my head and put it in your head, as accurately as possible. Use of any terms should, in my view, be assessed against whether those terms are likely to create the right picture in a reader's or listener's head. I appreciate this is a somewhat extreme position.
If every time you use the term heavy-tailed (and it's used a lot - a quick CTRL + F tells me it's in the OP 25 times) I have to guess from context wh...
Briefly on this, I think my issue becomes clearer if you look at the full section.
If we agree that log-normal is more likely than normal, and log-normal distributions are heavy-tailed, then saying 'By contrast, [performance in these jobs] is thin-tailed' is just incorrect? Assuming you meant the mathematical senses of heavy-tailed and thin-tailed here, which I guess I'm not sure if you did.
This uncertainty and resulting inability to assess whether this section is true or false obviously loops back to why I would prefer not to use the term 'heavy-tailed' at...
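For what it's worth, the mathematical claim here is easy to check concretely. A quick stdlib-only sketch (the distribution parameters are illustrative, not taken from the OP) comparing tail probabilities of a log-normal against a normal matched on mean and standard deviation:

```python
import math

def norm_sf(x):
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Log-normal with log-mean m and log-sd s, compared against a normal
# with the same mean and standard deviation.
m, s = 0.0, 1.0
mean = math.exp(m + s**2 / 2)
sd = math.sqrt((math.exp(s**2) - 1) * math.exp(2 * m + s**2))

for k in (2, 4, 6):
    t = mean + k * sd
    p_normal = norm_sf(k)                       # normal tail beyond mean + k*sd
    p_lognorm = norm_sf((math.log(t) - m) / s)  # log-normal tail at the same point
    print(k, p_normal, p_lognorm, p_lognorm / p_normal)
```

Six standard deviations out, the log-normal's tail probability is millions of times larger than the matched normal's - which is the sense in which the log-normal is heavy-tailed and the normal thin-tailed, and why 'performance is log-normal' and 'performance is thin-tailed' can't both be right in the mathematical sense.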
Hi Max and Ben, a few related thoughts below. Many of these are mentioned in various places in the doc, so seem to have been understood, but nonetheless have implications for your summary and qualitative commentary, which I sometimes think misses the mark.
I want to push back against a possible interpretation of this moderately strongly.
If the charity you are considering starting has a 40% chance of being 2x better than what is currently being done on the margin, and a 60% chance of doing nothing, I very likely want you to start it, naive 0.8x EV be damned. I could imagine wanting you to start it at much lower numbers than 0.8x, depending on the upside case. The key is to be able to monitor whether you are in the latter case, and stop if you are. Then you absorb a lot more money in the 40% case, and the actu...
Evidence Action are another great example of "stop if you are in the downside case" done really well.
I have a few thoughts here, but my most important one is that your (2), as phrased, is an argument in favour of outreach, not against it. If you update towards a much better way of doing good, and any significant fraction of the people you 'recruit' update with you, you presumably did much more good via recruitment than via direct work.
Put another way, recruitment defers the question of how to do good into the future, and is therefore particularly valuable if we think our ideas are going to change/improve particularly fast. By contrast, recruitment (o...
+1. A short version of my thoughts here is that I’d be interested in changing the EA name if we can find a better alternative, because it does have some downsides, but this particular alternative seems worse from a strict persuasion perspective.
Most of the pushback I feel when talking to otherwise-promising people about EA is not really as much about content as it is about framing: it’s people feeling EA is too cold, too uncaring, too Spock-like, too thoughtless about the impact it might have on those causes deemed ineffective, too naive to realise the imp...
I think similar adjustments should be made if you are extrapolating to crimes with very different prevalence. For example, the US murder rate is 4-5x that of the UK, but I wouldn’t expect the US to have that many more bike thefts.
Proxy seems fine if you’re focused on which country/city/etc. has higher overall crime, rather than estimating magnitude.
(FWIW, attempts at Googling the above suggest ~300k bike thefts per year in UK versus 2m in US; US population is 5x bigger, so that's only 1.33x the UK rate. A quick check on bicycle sales in the two countries does n...
(Arguably nitpicking, in the sense that I suspect this would not change the bottom line, posted because the use of stats here raised my eyebrows)
...For some calibration, risk of drug abuse, which is a reasonable baseline for other types of violent behavior as well, is about 2-3x in adopted children. This is not conditioning on it being a teenage adoption, which I expect would likely increase the ratio to something more like 3-4x, given the additional negative selection effects.
Sibling abuse rates are something like 20% (or 80% depending on your definit
I agree with a lot of this, and I appreciated both the message and the effort put into this comment. Well-substantiated criticism is very valuable.
I do want to note that GWWC being scaled back was flagged elsewhere, most explicitly in Ben Todd's comment (currently 2nd highest upvoted on that thread). But for example, Scott's linked reddit comment also alludes to this, via talking about the decreased interest in seeking financial contributions.
But it's true that in neither case would I expect the typical reader to come away with the impression that a ...
With low confidence, I think I agree with this framing.
If correct, then I think the point is that seeing us at an 'early point in history' updates us against a big future, but the fact we exist at all updates in favour of a big future, and these cancel out.
...You wake up in a mysterious box, and hear the booming voice of God:
“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.
If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.
To get int
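The 'cancel out' claim can be made concrete with the box setup from the quoted thought experiment. A rough Bayesian sketch: weighting worlds by their number of observers (the update from existing at all) favours tails by a factor of a billion, while learning your box label is in 1-10 (the Doomsday-style update) favours heads by exactly the same factor:

```python
from fractions import Fraction

heads_boxes = 10
tails_boxes = 10_000_000_000
prior = Fraction(1, 2)

# Update from existing at all: weight each world by its number of observers.
w_heads = prior * heads_boxes
w_tails = prior * tails_boxes
p_tails_given_exist = w_tails / (w_heads + w_tails)
print(float(p_tails_given_exist))  # ≈ 1 — existing favours the big world

# Now learn your box label is between 1 and 10 (true of every heads-box,
# but only 10 of the 10 billion tails-boxes) — the Doomsday-style update.
like_heads = Fraction(10, heads_boxes)
like_tails = Fraction(10, tails_boxes)
post_heads = w_heads * like_heads
post_tails = w_tails * like_tails
p_tails_final = post_tails / (post_heads + post_tails)
print(p_tails_final)  # 1/2 — the two updates cancel exactly
```

The posterior lands back at the 50/50 prior, which is the sense in which 'early point in history' and 'we exist at all' cancel out.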
Weirdly, I found this post a bit 'too meta', in the sense that there are a lot of assertions and not a lot of effort to provide evidence or otherwise convince me that these claims are actually true. Some claims I agree with anyway (e.g. I think you can reasonably declare political feasibility 'out-of-scope' in early-stage brainstorming), some I don't. Here's the bit that my gut most strongly disagrees with:
...A good test is to ask, when right things are done on the margin, what happens? When we move in the direction of good policies or correct statements, how
This has been a philosophical commitment since the early days of EA, yet information on how we (or the charities we prioritize) actually confirm with recipients that our programs are having the predicted positive impact on them receives, AFAICT, little attention in EA.
[Within footnote] As an example, after ten minutes of searching I could not find information on GiveWell's overall view on this subject on their website.
FWIW, the most closely related GiveWell article I'm aware of is How not to be a "white in shining armor". Relevant excerpts (emphasis ...
Worth noting that you might get increased meaningfulness in exchange for the lost happiness
FWIW, I think this accidentally sent this subthread off on a tangent because of the phrasing of 'in exchange for the lost happiness'.
My read of the stats, similar to this Vox article and to what Robin actually said, is that people with children (by choice) are neither more nor less happy on average than childless people (by choice), so any substantial boost to meaning should be seen as a freebie, rather than something you had to give up happiness for.
I think th...
Whether this is a ‘good’ answer would depend on your audience, but I think one true answer from a typical EA would be ‘I care about those things too, but I think that the global poor/nonhuman animals/future generations are even more excluded from decision-making (and therefore ignored) than POC/women/LGBT groups are, so that’s where I focus my limited time and money’.
I don’t actually think the cause area challenge is quite what is going on here; I can easily imagine advancing those things being considered cause areas if they had a stronger case.
But also, I think a lot of people that end up at HLS don't think in those sort of Marxist/socialist class terms, but rather just have a sort of strong Rawlsian egalitarianism commitment.
I also think many people at HLS are hilariously unaware of their class privilege.
FWIW, I strongly agree with both of these statements for Oxbridge in the UK as well.
The latter I think is a combination of a common dynamic where most people think they are closer to the middle of the income spectrum than they are, plus a natural human tendency to focus on the areas where you are being treated poorly or unfairly over the areas where you are being treated well.
To this I would add:
Beware of the selection effect: I'd expect people with kids to be less likely to come to meetups, less likely to post on this forum, etc. than EAs with overall-similar levels of involvement, so it can look like there are fewer of them than is actually the case if you aren't counting carefully.
For EA clusters in very-high-housing-cost areas specifically (Milan mentioned the Bay), I wouldn’t be surprised if the broader similar demographic is also avoiding children, since housing is usually the largest direct financial cost of having children,...
Writing is just a lot more time-consuming to cover equivalent ground in my experience. I occasionally make the mistake of getting into multi-hour text conversations with people, and almost invariably look back and think we could have covered the same ground in a phone call lasting <25% as long.
Scattered thoughts on this, pointing in various directions.
TL;DR: Measuring and interpreting movement growth is complicated.
Things I'm relatively confident about:
How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land - particularly acute for longtermists, but also elsewhere: most would care about the risk that, in the medium term, a charitable intervention could prove counter-productive
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don't think 'risks' is q...
Your comment reminded me of this post, whose ideas I like as a starting point for handling this type of question:
https://forum.effectivealtruism.org/posts/DYr7kBpMpmbygBiEq/the-privilege-of-earning-to-give
Not sure if you were referring to that particular post or the whole sequence. If I follow it correctly, I think that particular post is trying to answer the question 'how can we plausibly impact the long-term future, assuming it's important to do so'. I think it's a pretty good treatment of that question!
But I wouldn't mentally file that under cluelessness as I understand the term, because that would also be an issue under ordinary uncertainty. To the extent you explain how cluelessness is different to garden-variety uncertainty and why we can't deal with ...
I think if you reject incomparability, you're essentially assuming away complex cluelessness and deep uncertainty.
That's really useful, thanks, at the very least I now feel like I'm much closer to identifying where the different positions are coming from. I still think I reject incomparability; the example you gave didn't strike me as compelling, though I can imagine it compelling others.
...So, while I might just pick an option if forced to choose between A, B and indifferent, it doesn't reveal a ranking, since you've eliminated the option I'd want to g
Thanks again. I think my issue is that I’m unconvinced that incomparability applies when faced with ranking decisions. In a forced choice between A and B, I’d generally say you have three options: choose A, choose B, or be indifferent.
Incomparability in this context seems to imply that one could be indifferent between A and B, prefer C to A, yet be indifferent between C and B. That just sounds wrong to me, and is part of what I was getting at when I mentioned transitivity. Curious if you have a concrete example where this feels intuitive?
For the second hal...
I mostly agree with this. Of course, to notice that you have to know (2)/(3) are part of the ‘expert belief set’, or at least it really helps, which you easily might not have done if you relied on Twitter/Facebook/headlines for your sense of ‘expert views’.
And indeed, I had conversations where pointing those things out to people updated them a fair amount towards thinking that masks were worth wearing.
In other words, even if you go and read the expert view directly and decide it doesn’t make sense, I expect you to end up in a better epistemic position than...
A quibble on the masks point because it annoys me every time it's brought up. As you say, it's pretty easy to work out that masks stop an infected person from projecting nearly as many droplets into the air when they sneeze, cough, or speak, study or no study. But virtually every public health recommendation that was rounded off as 'masks don't work' did in fact recommend that infected people should wear masks. For example, the WHO advice that the Unherd article links to says:
...Among the general public, persons with respiratory symptoms or those caring for C
I realize that this is kind of a tangent to your tangent, but I don't think the general conjunction of (Western) expert views in 2020 was particularly defensible. Roughly speaking, the views (that I still sometimes hear parroted by Twitter folks) were something like
...
- For most respiratory epidemics, (surgical) masks are effective at protecting wearers in medical settings.
- They are also effective as a form of source control in medical settings.
- They should be effective as a form of source control in community transmission.
- However, there is i
I certainly wouldn't walk on by, but that's mainly due to a mix of factoring in moral uncertainty (deontologists would think me the devil) and not wanting the guilt of having walked on by.
This makes some sense, but to take a different example, I've followed a lot of the COVID debates in EA and EA-adjacent circles, and literally not once have I seen cluelessness brought up as a reason to be concerned that maybe saving lives via faster lockdowns or more testing or more vaccines or whatever is not actually a good thing to do. Yet it seems obvious that some le...
I'm claiming the latter, yes. I do agree it's hard to prove, but I place high subjective credence (~88%) on it. Put simply, if I can directly observe factors that would tend to lower the representation of WEIRD ethnic minorities, I don't necessarily need to have an estimate of the percentages of WEIRD people who are ethnic minorities, or even of the percentage of people in EA who are from ethnic minorities. I only need to think that the factors are meaningful enough to lead to meaningful differences in representation, and not being offset by comparably-mean...
Medium-term indirect impacts are certainly worth monitoring, but they have a tendency to be much smaller in magnitude than the primary impacts being measured, in which case they don’t pose much of an issue; to the best of my current knowledge, carbon emissions from saving lives are a good example of this.
Of course, one could absolutely think that a dollar spent on climate mitigation is more valuable than a dollar spent saving the lives of the global poor. But that’s very different to the cluelessness line of attack; put harshly it’s the difference between choosi...
Thanks for the response, but I don't think this saves it. In the below I'm going to treat your ranges as being about the far future impacts of particular actions, but you could substitute for 'all the impacts of particular actions' if you prefer.
In order for there to be useful things to say, you need to be able to compare the ranges. And if you can rank the ranges ("I would prefer 2 to 1" "I am indifferent between 3 and 4", etc.), and that ranking obeys basic rules like transitivity, that seems equivalent to collapsing all the ranges to single numbers....
Just to be clear I also think that we can tractably influence the far future in expectation (e.g. by taking steps to reduce x-risk). I'm not really sure how that resolves things.
If you think you can tractably impact the far future in expectation, AMF can impact the far future in expectation. At which point it's reasonable to think that those far future impacts could be predictably negative on further investigation, since we weren't really selecting for them to be positive. I do think trying to resolve the question of whether they are negative is probably a...
I'm not sure how to parse this 'expectation that is neither positive, negative, nor zero but still somehow impacts decisions' concept, so maybe that's where my confusion lies. If I try to work with it, my first thought is that not giving money to AMF would seem to have an undefined expectation for the exact same reason that giving money to AMF would have an undefined expectation; if we wish to avoid actions with undefined expectations (but why?), we're out of luck and this collapses back to being decision-irrelevant.
I have read the paper. I'm surprised yo...
So yes we are in fact predictably influencing the far future by giving to AMF, in that we know we will be affecting the number of people who will live in the future. However, I wouldn't say we are influencing the far future in a 'tractable way' because we're not actually making the future better (or worse) in expectation
If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Clue...
If we aren't making the future better or worse in expectation, it's not impacting my decision whether or not to donate to AMF. We can then safely ignore complex cluelessness for the same reason we would ignore simple cluelessness.
Saying that the long-run effects of giving to AMF are not positive or negative in expectation is not the same as saying that the long-run effects are zero in expectation. The point of complex cluelessness is that we don't really have a well-formed expectation at all because there are so many foreseeable complex factors at play.
In s...
I think there's a difference between the muddy concept of 'cause areas' and actual specific charities/interventions here. At the level of cause areas, there could be overlap, because I agree that if you think the Most Important Thing is to expand the moral circle, then there are things in the animal-substitute space that might be interesting, but I'd be surprised and suspicious (not infinitely suspicious, just moderately so) if the actual bottom-line charity-you-donate-to was the exact same thing as what you got to when trying to minimise the suffering of ...
If we have good reason to expect important far future effects to occur when donating to AMF, important enough to change the sign if properly included in the ex ante analysis, that is equivalent to (actually somewhat stronger than) saying we can tractably influence the far future, since by stipulation AMF itself now meaningfully and predictably influences the far future. I currently don't think you can believe the first and not the second, though I'm open to someone showing me where I'm wrong.
There's an important and subtle nuance here.
Note that complex cluelessness only arises when we know something about how the future will be impacted, but don't know enough about these foreseeable impacts to know if they are net good or bad when taken in aggregation. If we knew literally nothing about how the future would be impacted by an intervention this would be a case of simple cluelessness, not complex cluelessness, and Greaves argues we can ignore simple cluelessness.
What Greaves argues is that we don't in fact know literally nothing about the long-ru...
FWIW, I don’t think your argument goes through for ethnic diversity either; EA is much whiter than its WEIRD base. I agree aiming to match the ethnic diversity of the world would be a mistake.
(Disclaimer: Not white)
Spitballing here, but have you considered putting some thoughts to this effect on your website? Currently, the relevant part of the 80k website reads as follows.
...Why wasn’t I accepted?
Unfortunately, due to overwhelming demand, we can’t advise everyone who applies. However, we’re confident that everyone who is reading this has what it takes to lead a fulfilling, high impact career. Our key ideas series contains lots of our best advice on this topic – we hope you’ll find it useful.
If you’re thinking of re-applying, you can improve your chances by:
- Reading our
...Many of the considerations regarding the influence we can have on the deep future seem extremely hard, but not totally intractable, to investigate. Offering naive guesstimates for these, whilst lavishing effort to investigate easier but less consequential issues, is a grave mistake. The EA community has likely erred in this direction.
***
Yet others, those of complex cluelessness, do not score zero on ‘tractability’. My credence in “economic growth in poorer countries is good for the longterm future” is fragile: if I spent an hour (or a week, or a decade) mul
Belatedly:
I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particu...
Great comment. I agree with most of what you've said. Particularly that trying to uncover whether donating to AMF is going to be a great way to improve the long-run future seems a fool's errand.
This is where my quibble comes in:
...If cluelessness arguments are intended have impact on the actual people donating to short term interventions as a primary form of doing good, they need to engage with the actual disagreements those people have, namely the questions of whether we can actually predict the size/direction of the longterm consequences despite the natural lack
For what it's worth, I think it's plausible that some interventions chosen for their short-term effects may be promising candidates for longtermist interventions. If you thought that s-risks were important and that larger moral circles mitigate s-risks, then plant-based and cultured animal product substitutes might be promising, since these seem most likely to shift attitudes towards animals the most and fastest, and this would (hopefully) help make the case for wild animals and artificial sentience next. Maybe direct advocacy for protections for artificia...
So my first reaction to the Youth Ministry Adherence data was basically the opposite of yours, in that I looked at it and thought 'seems like they are doing a (slightly) better job of retention'. Reviewing where we disagree, I think there's a tricky thing here about distinguishing between 'dropout' rates and 'decreased engagement' rates. Ben Todd's estimates which you quote are explicitly trying to estimate the former, but when you compare to:
those listed as “engaged disciples” who continue to self-report as “high involvement”
...I think you might end u...
Cool series, thanks for sharing on the forum. One nitpick:
ACE estimates that the average vegetarian stays vegetarian for 3.9-7.2 years, implying a five-year dropout rate of 14-26%.
I'm not sure how your rate is being calculated from ACE's figures here, but at first pass it seems wrong? Since 5 years is within but slightly towards the lower end of the range given for how long the average vegetarian stays vegetarian, I'd assume we'd end up with something more like a ~45% five-year dropout rate. By contrast, a 14-26% five-year dropout rate would suggest that &...
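One crude way to sanity-check the conversion, under the (strong) simplifying assumption that dropout is roughly exponential with the quoted means:

```python
import math

# If the average vegetarian stays vegetarian for `mean` years and dropout
# is roughly exponential, the chance of quitting within 5 years is
# 1 - exp(-5 / mean). A deliberately crude model - real durations aren't
# exponential - but it gives the right order of magnitude.
for mean in (3.9, 7.2):
    five_year_dropout = 1 - math.exp(-5 / mean)
    print(mean, round(five_year_dropout, 2))
# ≈ 72% at a 3.9-year mean and ≈ 50% at a 7.2-year mean —
# well above the quoted 14-26% either way.
```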
Thank you for this! This is not the kind of post that I expect to generate much discussion, since it's relatively uncontroversial in this venue, but is the kind of thing I expect to point people to in future.
I want to particularly draw attention to a pair of related quotes partway through your piece:
...I've tried explaining the case for longtermism in a way that is relatively free of jargon. I've argued for a fairly minimal version — that we may be able to influence the long-run future, and that aiming to achieve this is extremely good and important. Th
I was surprised to discover that this doesn't seem to have already been written up in detail on the forum, so thanks for doing so. The same concept has been written up in a couple of other (old) places, one of which I see you linked to and I assume inspired the title:
GiveWell: We can't (simply) buy capacity
80000 Hours: Focus more on talent gaps, not funding gaps
The 80k article also has a disclaimer and a follow-up post that felt relevant here; it's worth being careful about a word as broad as 'talent':
...Update April 2019: We think that our use of the term ‘t
But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?
I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when ...
I think we’re still talking past each other here.
You seem to be implicitly focusing on the question ‘how certain are we these will turn out to be best’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities’.
Listing unaccounted-for second order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Curre...
Not the main point of your post, but tax deductibility is a big deal in the UK as well, at least for higher earners; once you earn more than £50k, donations are effectively deductible at a rate of at least 40%, i.e. £60 becomes £100.
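For the curious, a rough sketch of the mechanics behind the '£60 becomes £100' figure under UK Gift Aid for a higher-rate (40%) taxpayer, as I understand them (standard rates, simplified - ignoring thresholds and tapering):

```python
# A higher-rate taxpayer hands £80 to the charity; the charity reclaims
# basic-rate tax on the grossed-up donation, and the donor reclaims the
# higher-rate/basic-rate difference via their tax return.
donation = 80.0
basic_rate, higher_rate = 0.20, 0.40

gross = donation / (1 - basic_rate)           # charity receives £100 total
reclaim = gross * (higher_rate - basic_rate)  # donor gets £20 back
net_cost = donation - reclaim                 # £60 out of pocket
print(round(gross), round(net_cost))  # 100 60
```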