All of MichaelPlant's Comments + Replies

Thanks for this. I think this is very valuable and really appreciate this being set out. I expect to come back to it a few times. One query and one request for further work - from someone, not necessarily you, as this is already a sterling effort!

  1. I've heard Thorstad's TOP talk a couple of times, but it's now a bit foggy and I can't remember where his ends and yours starts. Is it that Thorstad argues (some version of) longtermism relies on the TOP thesis, but doesn't investigate whether TOP is true, whereas you set about investigating if it is true?

  2. T

... (read more)
9
David Rhys Bernard
5mo
Hi Michael, thanks for this.

On 1: Thorstad argues that if you want to hold both claims (1) Existential Risk Pessimism - per-century existential risk is very high - and (2) the Astronomical Value Thesis - efforts to mitigate existential risk have astronomically high expected value - then TOP is the most plausible way to jointly hold both claims. He does look at two arguments for TOP - space settlement and an existential risk Kuznets curve - but says these aren't strong enough to ground TOP and we instead need a version of TOP that appeals to AI. It's fair to think of this piece as starting from that point, although the motivation for appealing to AI here was more due to this seeming to be the most compelling version of TOP to x-risk scholars.

On 2: I don't think I'm an expert on TOP and was mostly aiming to summarise premises that seem to be common, hence the hedging. Broadly, I think you only need the 4 claims that formed the main headings: (1) high levels of x-risk now, (2) significantly reduced levels of x-risk in the future, (3) a long and valuable / positive EV future, and (4) a moral framework that places a lot of weight on this future. I think the slimmed-down version of the argument focuses solely on AI as it's relevant for (1), (2) and (3), but as I say in the piece, I think there are potentially other ways to ground TOP without appealing to AI and would be very keen to see those articulated and explored more.

(2) is the part where my credences feel most fragile, especially the parts about AI being sufficiently capable to drastically reduce other x-risks and misaligned AI, and AI remaining aligned near indefinitely. It would be great to have a better sense of how difficult various x-risks are to solve and how powerful an AI system we might need to near eliminate them. No unknown unknowns seems like the least plausible premise of the group, but its very nature makes it hard to know how to cash this out.

Hello Bob and team. Looking forward to reading this. To check, are you planning to say anything explicitly about your approach to moral uncertainty? I can't see anything directly mentioned in 5., which is where I guessed it would go. 

On that note Bob, you might recall that, a while back I mentioned to you some work I'm doing with a couple of other philosophers on developing an approach to moral uncertainty along these lines that will sometimes justify the practice of worldview diversification. That draft is nearly complete and your post series inspire... (read more)

4
Bob Fischer
6mo
Nice to hear from you, Michael. No, we don't provide a theory of moral uncertainty. We have thoughts, but this initial sequence doesn't include them. Looking forward to your draft whenever it's ready.

This report seems commendably thorough and thoughtful. Could you possibly spell out its implications for effective altruists though? I take it the conclusion is that humanity was most violent in the subsistence farming period, rather than before or after, but I'm not sure what to make of that. Presumably, it shows that how violent people are changes quite radically in different contexts, so should I be reassured if, as seems likely, modern-type societies continue? Returns to hunter-gathering and subsistence farming do not seem on the cards.

Sorry if I've missed something. But I reckoned that, if it wasn't obvious to me, some others would have missed it too.

14
[anonymous]
6mo
I think it's mainly relevant for the part of EA that is interested in long-run history and its implications for the long-term. There's also some stuff in the discussion about how the drivers of conflict are a small subset of sociopaths/malevolent people; humanity is not innately violent even though we have high rates of violence relative to other primates. I think this is relevant for the future of violence.

Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]

First, the piece you're referring to is a book review in an aca... (read more)

1
Jack Malde
6mo
I agree we should be more swayed by arguments than numbers - I feel like it was you who played the numbers game first so I thought I'd play along a bit. FYI I did reference that SEP article in my post and it says (emphasis mine):
9
Jack Malde
6mo
You say the following in the summary of the book section (bold part added by me): By including it in the 'summary' section I think you implicitly present this as a view Will espoused in the book - and I don't agree that he did. Sure, people talk about avoiding extinction quite a bit, but that isn't the only reason to care about existential risk, as I explain in my post. For example, you can want to prevent existential risks that involve locking in bad states of the world in which we continue to exist, e.g. an authoritarian state such as China using powerful AI to control the world.

One could say reducing x-risk from AI is the cause du jour of the longtermist community. The key point is that reducing x-risk from AI is still a valid priority (for longtermist reasons) if one accepts the intuition of neutrality. Accepting the intuition of neutrality would involve some re-prioritization within the longtermist community - say, moving resources away from x-risks that are solely extinction risks (like biorisks?) and towards x-risks that are more than that (like s-risks from misaligned AI or digital sentience).

I simply don't think accepting the intuition of neutrality is a "severe" challenge for longtermism, and I think it is clear Will doesn't think so either (e.g. see this).
6
Jack Malde
6mo
I lean towards thinking the following is unfair. If one were just to read WWOTF they would come away with an understanding of:

* The intuition of neutrality - what it is, the fact that some people hold it, the fact that if you accept it you shouldn't care about losing future generations.
* The non-identity problem - what it is and why some see it as an argument against being able to improve the future.
* The repugnant conclusion - what it is, how some find it repugnant and why it is an argument against total utilitarianism.

This is all Will explaining the 'other side'. Sure he's one-sided in the sense that he also explains why he disagrees with these arguments, but that seems fine to me. He's not writing a textbook. What would have been an issue is if he had, say, just explained total utilitarianism without also explaining the repugnant conclusion, the intuition of neutrality or the non-identity problem.

Regarding the "polemical" description. I'm not really sure what you're getting at. Merriam-Webster defines a polemic as "an aggressive controversialist". Do you think Will was aggressive? As I say he presents 'the other side' while also explaining why he disagrees with it. I'm not really seeing an issue here.
7
Jack Malde
6mo
Thanks for this reply Michael! I'll do a few replies and understand that you don't want to get in a long back and forth so will understand if you don't reply further. Firstly, the following is all very useful background so I appreciate these clarifications: In light of this I think the wording "Plant presents a very one-sided analysis of the non-identity problem" is an unfair criticism. I'm still happy I wrote that section because I wanted to defend longtermism from your attack, but I should have framed it differently.

Yup, I'd be inclined to agree it's easier to ground the idea that life is getting better for humans on objective measures. The author's comparison is made in terms of happiness though:

This work draws heavily on the Moral Weight Project from Rethink Priorities and relies on the same assumptions: utilitarianism, hedonism, valence symmetry, unitarianism, use of proxies for hedonic potential, and more

I'm actually not sure how I'd think about the animal side of things on the capabilities approach. Presumably, factory farming looks pretty bad on that: there are increasingly many animals with low/negative-capability lives, so it's unclear how this works out on a global level.

4
ryancbriggs
6mo
Fair. I struggle with how to incorporate animals into the capabilities approach, and while I appreciate Martha Nussbaum turning her attention here I was also wary of list-based approaches so it doesn't help me too much.

This is a minor comment but you say

There’s compelling evidence that life has gotten better for humans recently

I don't think that is compelling evidence. Neither Pinker nor Karnofsky looks at averages of self-reported happiness or life satisfaction, which would be the most relevant and comparable evidence, given your assumptions. According to the so-called Easterlin Paradox, average subjective wellbeing has not been going up over the past few decades and won't with further economic growth. There have been years of debates over this (I confess I got ... (read more)

I strongly agree with your main point on uncertainty, and I'll defer to you on the (lack of) consensus among happiness researchers on the question of whether or not life is getting better for humans given their paradigm.

However, I think one can easily ground out the statement "There’s compelling evidence that life has gotten better for humans recently" in ways that do not involve subjective wellbeing and if one does so then the statement is quite defensible.

While I agree that net global welfare may be negative and declining, in light of the reasoning and evidence presented here, I think you could and should have claimed something like this: "net global welfare may be negative and declining, but it may also be positive and increasing, and really we have no idea which it is - any assessment of this type is enormously speculative and uncertain".

As I read the post, the two expressions that popped into my head were "if it's worth doing, it's worth doing with made-up numbers" and "if you saw how the sausage is m... (read more)

Thanks for this and great diagrams! To think about the relationship between EA and AI safety, it might help to think about what EA is for in general. I see a/the purpose of EA as helping people figure out how they can do the most good - to learn about the different paths, the options, and the landscape. In that sense, EA is a bit like a university, or a market, or maybe even just a signpost: once you've learnt what you needed, or found what you want and where to go, you don't necessarily stick around: maybe you need to 'go out' in the world to do what calls yo... (read more)

I suppose you could think of it as a matter of degree, right? Submitting feedback, doing interviews etc. are a good start, but involve people having less of a say than either 1. being part of the conversation or 2. having decision-making power, e.g. through a vote. People like to feel their concerns are heard - not just in EA, but in general - and when, e.g., a company says "please send in this feedback form", I'm not sure many people feel as heard as if someone (important) from that company listens to you live and publicly responds.

Thanks for this, which I read with interest! Can I see if I understood this correctly?

  1. You were interested in finding a way to assess the severity of pains in farmed animals so that you can compare severity to duration and determine the total badness. In jargon, you're after a cardinal measure of pain intensity.
  2. And your conclusion was a negative one, specifically that there was no clear way to assess the severity of pain. As you note, for humans, we have self-reports, but for non-human animals, we don't, so we have to look for something else, such as how th
... (read more)
3
MichaelStJules
7mo
Even with humans, I wonder if self-reports of apparently cardinal pain intensities are just preferences over merely ordinal pain intensities, or otherwise imposing some cardinal structure that doesn’t actually exist in the pain itself. How could you tell?
3
William McAuliffe
7mo
I like your summary. I feel (slightly) less hopeless because I think...

* Comparisons that involve multiple dimensions of pain are, in principle, possible. I think I would only regard them as impossible if I came upon evidence that pain severity is, in reality, an ordinal construct.
* In one sense, I might be more pessimistic about this topic than many because I think it is plausible that many psychological constructs are ordinal.
* Behavioral evidence could in theory license cardinal comparisons among different pains. Practical issues of feasibility (and permission from institutions) stand in the way, and I would grant that these will probably never be overcome.
* Possibly, cardinal differences in severity are explicitly represented in the brain. If so, then in principle we could measure these representations, though I do not think that we ever will.
* We may be able to prioritize between relieving severe pain and long-lasting pain without making direct cardinal comparisons, so long as we have a sense of just how many orders of magnitude pain severity can span. Many aspects of pain experience appear conserved across a large number of species. If we find that pain in humans or laboratory animals has a wide range of severity, then there is an above-chance possibility that pain in farmed animals does too. There is also an above-chance possibility that the most severe pains on factory farms are close to the end of the negative side of the range, given that it is difficult to see the adaptive value of being able to represent threats more extreme than, say, being boiled alive.
* I would agree that the point above is partly grounded in intuition that has only a vague relationship to a well-established theory of the evolution of pain. Hopefully, advances in this area will reduce our reliance on intuitions that are not grounded by a plausible scientific theory.

Hey LondonGal, thank you for following up on this. I appreciate you clarifying your intentions about your post. Our team has read your comments and will take your feedback into consideration in our future work. I hope you'll forgive us for not responding in detail at this time. We are currently trying to focus on our current projects (and to avoid spending too much time on the EA forum, which we've done a lot of, particularly recently!). I expect that some (but probably not all) of the points you've raised in your original post will be addressed in some of our upcoming research. Thanks again for engaging with our work, and for sending the olive branch. It's been received and we'll look forward to future constructive interactions.

Hello LondonGal (sorry, I don't know your real name). I'm glad that, after your recent scepticism, you looked further into subjective wellbeing data and think it can be useful. You've written a lot and I won't respond to it in detail. 

I think the most important points to make are (1) there is a lot more research than you suggest and (2) it didn't just start around COVID.

You are right that, if you search for "subjective wellbeing", not much comes up (I get 706 results on PubMed). However, that's because the trend among researchers to refer to "su... (read more)

6
LondonGal
8mo
Hello, I just wanted to follow-up, as I’ve read your links as promised. I share a genetic trait of speed-reading, so this wasn’t too onerous – please forgive me for not watching a video of your EAG talk (this is inefficient for how I work) but you linked your write-up on the forum which I read instead.

I feel you may be approaching this assuming I’m a bad actor, and I’m going to further endeavour to demonstrate this isn’t the case (I thought offering my meta-analysis skills for free, my arguments above about not over-interpreting an RCT, and this post in general would demonstrate that I’m approaching this in good faith). I care about this topic and I want EA to continue having interest in mental health/wellbeing – I think this could do a lot of good. I’m not sure why being non-EA would suggest I disagree with effective ways of doing altruistic work, or that I’m incapable of contributing anything to a discussion of mental health and wellbeing given this is literally my job, and so shouldn’t offer my perspective on a public-facing forum. I want to understand more about EA approaches to my field but I’m not so selfish as to ask someone to do all the work for me (a stranger), hence putting a lot of effort into my post to show I am genuinely interested in trying to understand (side note: thanks again to people who have offered to chat off-forum - it’s really appreciated, and I’m sure my opinions will change in the coming weeks with your help).

I’m wondering if this is coming across instead that I care about making another anti-HLI post – I don't. I tried approaching this problem (how does mental illness relate to wellbeing) from first principles to see what happened; I thought this was a sort of EA-oriented approach. I vaguely referenced the HLI post I commented on in my introductory comments to explain how this work came to be and interrogate my motivations (allowing others to do the same) - not linking the post/naming HLI, calling my contribution an 'overly technical d
8
LondonGal
8mo
Hi MichaelPlant, [Edit: Jk - I don't get the comment about my username/real name, I saw a mix being used on the forum, but I might have missed some etiquette - would you like my real name? Just 'hello' is fine if you'd prefer - no offence taken.]

Thanks so much for taking the time to read and respond! I was hoping to get more insight from people within EA who might be able to fill me in on some of the more philosophical/economic aspects as I'm aware these aren't my areas of expertise (it was very much a 'paper' EA-hat I was trying on!) - I felt furthering my online searches wasn't as helpful as getting more insight into the underlying concepts from experts and hoped my post would at least show I was interested in hearing more. Thanks for the links as well - I did come across a few of them in my approach to this work, but will take your advice these are worth looking at again if you think I've not appraised them properly - you definitely know best in this regard!

Also, apologies - you might be right in saying I didn't structure a paragraph very well if it has left anyone with the impression I was suggesting subjective wellbeing research has only been in existence since COVID. My own graph disproves this, for starters! I think it's this paragraph from the first section I've not phrased well (italics added). I was trying to emphasise the relatively steep growth in interest over the last few years due to questions about cost-effectiveness (e.g. WELLBY), which as you mention is 'barely older than COVID'. I don't actually think we disagree here so I'll need to think how to rephrase it to avoid conflating this with SWB research as a whole - to be clear, I don't think your reading of this was unfair and I can phrase it better.

I'm not too sure I was ever arguing I was doing an exhaustive literature review (?) - I felt I stated a few times this was non-scientific, should have no weight, etc. My goal was just trying to get a quick overview as more of a sense-check, but di

Hello Linch. We're reluctant to recommend organisations that we haven't been able to vet ourselves but are planning to vet some new mental health and non-mental health organisations in time for Giving Season 2023. The details are in our Research Agenda. For mental health, we say

We expect to examine Friendship Bench, Sangath, and CorStone unless we find something more promising.


On how we chose StrongMinds, you've already found our selection process. Looking back at the document, I see that we don't get into the details, but it wasn't just procedural. W... (read more)

3
Linch
8mo
Thank you! I think if any of my non-EA friends ask about donating to mental health charities (which hasn't happened recently but is the type of thing my friends have sometimes asked about in the past), I'd probably recommend that they adopt a "wait and see" attitude.

This was really helpful, thanks! I'll discuss it with the team.

7
Cornelis Dirk Haupt
8mo
Meta-note as a casual lurker in this thread: This comment being down-voted to oblivion while Jason's comment is not, is pretty bizarre to me. The only explanation I can think of is that people who have provided criticism think Michael is saying they shouldn't criticise? It is blatantly obvious to me that this is not what he is saying and that he is simply agreeing with Jason that specific, actionable criticism is better. Fun meta-meta note I just realized after writing the above: This does mean I am potentially criticising some critics who are critical of how Michael is criticising their criticism. Okkkk, that's enough internet for me. Peace and love, y'all.

[I don’t plan to make any (major) comments on this thread after today. It’s been time-and-energy intensive and I plan to move back to other priorities]

Hello Jason,

I really appreciated this comment: the analysis was thoughtful and the suggestions constructive. Indeed, it was a lightbulb moment. I agree that some people do have us on epistemic probation, in the sense that they think it’s inappropriate to grant the principle of charity, and should instead look for mistakes (and conclude incompetence or motivated reasoning if they find them).

I would disagree tha... (read more)

I think your last sentence is critical -- coming up with ways to improve epistemic practices and legibility is a lot easier where there are no budget constraints! It's hard for me to assess cost vs. benefit for suggestions, so the suggestions below should be taken with that in mind.

For any of HLI's donors who currently have it on epistemic probation: Getting out of epistemic probation generally requires additional marginal resources. Thus, it generally isn't a good idea to reduce funding based on probationary status. That would make about as much sense as ... (read more)

4
Rebecca
8mo
I could imagine that you get more people interested in providing funding if you pre-commit to doing things like bug bounties conditional on getting a certain amount of funding. Does this seem likely to you?

Hello Gregory. With apologies, I’m going to pre-commit to making this my last reply to you on this post. This thread has been very costly in terms of my time and mental health, and your points below are, as far as I can tell, largely restatements of your earlier ones. As briefly as I can, and point by point again.

1. 

A casual reader looking at your original comment might mistakenly conclude that we only used StrongMinds own study, and no other data, for our evaluation. Our point was that SM’s own work has relatively little weight, and we rely on m... (read more)

Hello Jason. FWIW, I've drafted a reply to your other comment and I'm getting it checked internally before I post it.

On this comment about you not liking that we hadn't updated our website to include the new numbers: we all agree with you! It's a reasonable complaint. The explanation is fairly boring: we have been working on a new charity recommendations page for the website, at which point we were going to update the numbers and add a note, so we could do it all in one go. (We still plan to do a bigger reanalysis later this year.) However, that has gone sl... (read more)

4
Jason
9mo
Thanks, I appreciate that. (Looking back at the comment, I see the example actually ended up taking more space than the lead point! Although I definitely agree that the hot fix should happen, I hope the example didn't overshadow the comment's main intended point -- that people who have concerns about HLI's response to recent criticisms should raise their concerns with a degree of specificity, and explain why they have those concerns, to allow HLI an opportunity to address them.)

Hello Jack (again!),

This is because plausible person-affecting views will still find it important to improve the lives of future people who will necessarily exist.

I agree with this. But the challenge from the Non-Identity problem is that there are few, if any, necessarily existing future individuals: what we do causes different people to come into existence. This raises a challenge to longtermism: how can we make the future go better if we can't make it go better for anyone in particular? If an outcome is not better for anyone, how can it be better? In the... (read more)

2
Jack Malde
9mo
Hmm. Do you seriously think that philosophers have been too quick to dismiss such person-affecting views?

If you accept that impacts on the future generally don't matter because you won't really be harming anyone, as they wouldn't have existed if you hadn't done the act, then you can justify doing some things that I'd imagine pretty much everyone would agree are wrong. For example, you could justify going around putting millions of landmines underground, set to blow up in 200 years' time, causing immense misery to future people for no other reason than you want to cause their suffering. Provided those people will still live net positive lives overall, your logic says this isn't a bad thing to do. Do you really think it's OK to place the mines? Do you think anyone bar a psychopath thinks it's OK to place the mines?

Of course, as you imply, there are other ways to respond to the non-identity problem. You could resort to an impersonal utilitarianism where you say no, don't place the mines, because it will cause immense suffering and suffering is intrinsically bad. Do you really think this is a weaker response?

Hello Jack. A quick reply: I'm not sure how well the arguments for improving global wellbeing being a sensible longterm priority will stack up. I suspect they won't, on closer inspection, but it seems worth investigating at some point.

Hello Matt and thanks for your overall vote of confidence, including your comments below to Nathan. 

Could you expand on what you said here?

I may also have been a little sus early (sorry Michael) on but HLI's work has been extremely valuable

I'm curious to know why you were originally suspicious and what changed your mind. Sorry if you've already stated that below. 

Hello Nathan. Thanks for the comment. I think the only key place where I would disagree with you is what you said here

If, as seems likely the forthcoming RCT downgrades SM a lot and the HLI team should have seen this coming, why didn't they act?

As I said in response to Greg (to which I see you've replied) we use the conventional scientific approach of relying on the sweep of existing data - rather than on our predictions of what future evidence (from a single study) will show. Indeed, I'm not sure how easily these would come apart: I would base my predicti... (read more)

2
Nathan Young
9mo
Yeah for what it's worth it wasn't clear to me until later that this was only like 10% of the weighting on your analysis.

Hello Richard. Glad to hear this! I've just sent you HLI's bank details, which should allow you to pay without card fees (I was inclined to share them directly here, but was worried that would be unwise). I don't have an answer to your second question, I'm afraid.

Hello Jack. I think people can and will have different conceptions of what the criteria to be on a/the 'top charity' list are, including what counts as sufficient strength of evidence. If strength of evidence is essential, that may well rule out any interventions focused on the longterm (whose effects we will never know) as well as deworming (the recommendation of which is substantially based on a single long-term study). The evidence relevant for StrongMinds was not trivial though: we drew on 39 studies of mental health interventions in LICs to calibrate ... (read more)

8
Jack Malde
9mo
Thanks Michael. My main concern is that it doesn't seem that there is enough clarity on the spillovers, and spillovers are likely to be a large component of the total impact. As Joel says there is a lack of data, and James Snowden's critique implies your current estimate is likely to be an overestimate for a number of reasons. Joel says in a comment "a high quality RCT would be very welcome for informing our views and settling our disagreements". This implies even Joel accepts that, given the current strength of evidence, there isn't clarity on spillovers. Therefore I would personally be more inclined to fund a study estimating spillovers than funding Strongminds. I find it disappointing that you essentially rule out suggesting funding research when it is at least plausible that this is the most effective way to improve happiness as it might enable better use of funds (it just wouldn't increase happiness immediately).

Hi Greg,

Thanks for this post, and for expressing your views on our work. Point by point:

  1. I agree that StrongMinds' own study had a surprisingly large effect size (1.72), which was why we never put much weight on it. Our assessment was based on a meta-analysis of psychotherapy studies in low-income countries, in line with academic best practice of looking at the wider sweep of evidence, rather than relying on a single study. You can see how, in table 2 below, reproduced from our analysis of StrongMinds, StrongMinds' own studies are given relatively little we
... (read more)

Hello Michael,

Thanks for your reply. In turn:

1: 

HLI has, in fact, put a lot of weight on the d = 1.72 Strongminds RCT. As table 2 shows, you give a weight of 13% to it - joint highest out of the 5 pieces of direct evidence. As there are ~45 studies in the meta-analytic results, this means this RCT is being given equal or (substantially) greater weight than any other study you include. For similar reasons, the Strongminds phase 2 trial is accorded the third highest weight out of all studies in the analysis.

HLI's analysis explains the rationale behind t... (read more)
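To make the force of that weighting point concrete, here is a rough illustrative calculation. It is only a sketch under stated assumptions: the 13% figure and the ~45-study count come from the comment above, but the share of weight given to the other pieces of direct evidence is made up for illustration, since the full table isn't reproduced here.

```python
# Illustrative sketch only. The 13% RCT weight and ~45 meta-analytic studies are
# taken from the comment above; the 27% figure for the other four pieces of
# direct evidence is an assumed placeholder, not a number from HLI's analysis.
strongminds_rct_weight = 0.13        # weight on the d = 1.72 StrongMinds RCT
other_direct_evidence_weight = 0.27  # assumed share for the other direct evidence
n_meta_studies = 45                  # "~45 studies in the meta-analytic results"

meta_analysis_weight = 1.0 - strongminds_rct_weight - other_direct_evidence_weight
avg_meta_study_weight = meta_analysis_weight / n_meta_studies

print(f"Average weight per meta-analytic study: {avg_meta_study_weight:.3f}")
print(f"RCT weight vs typical meta-analytic study: "
      f"{strongminds_rct_weight / avg_meta_study_weight:.1f}x")
```

On these assumptions, a typical study in the meta-analysis gets roughly 1.3% of the total weight, so the single RCT carries around ten times the weight of any one of them. The exact ratio depends on the assumed split, but the qualitative point (one study outweighing each individual meta-analytic study) holds for any reasonable split.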

Props on the clear and gracious reply. 

we think it's preferable to rely on the existing evidence to draw our conclusions, rather than on forecasts of as-yet unpublished work.

I sense this is wrong: if I think the unpublished work will change my conclusions a lot, I change my conclusions some of the way now, though I understand that's a weird thing to do and perhaps hard to justify. Nonetheless I think it's the right move.

Hello Alex,

Reading back on the sentence, it would have been better to put 'many' rather than 'all'. I've updated it accordingly. TLYCS don't mention WELLBYs, but they did make the comment "we will continue to rely heavily on the research done by other terrific organizations in this space, such as GiveWell, Founders Pledge, Giving Green, Happier Lives Institute [...]".

It's worth restating the positives. A number of organisations have said that they've found our research useful. Notably, see the comments by Matt Lerner (Research Director, Founders Pledge) be... (read more)

7
alex lawsen (previously alexrjl)
9mo
My comment wasn't about whether there are any positives in using WELLBYs (I think there are), it was about whether I thought that sentence and set of links gave an accurate impression. It sounds like you agree that it didn't, given you've changed the wording and removed one of the links. Thanks for updating it. I think there's room to include a little more context around the quote from TLYCs.  

Hello James. Apologies, I've removed your name from the list. 

To explain why we included it, although the thrust of your post was to critically engage with our research, the paragraph was about the use of the SWB approach for evaluating impact, which I believed you were on board with. In this sense, I put you in the same category as GiveWell: not disagreeing about the general approach, but disagreeing about the numbers you get when you use it. 

Thanks for editing Michael. Fwiw I am broadly on board with swb being a useful framework to answer some questions. But I don’t think I’ve shifted my opinion on that much so “coming round to it” didn’t resonate

Thanks! Yes, that's right. 'Lean' is small team, 12 month budget. 'Growth' is growing the team, 12 month budget. 'Optimal growth' is just 'growth', but 18 month budget.

I'm now wondering if we should use different names...

The first two are good.

"Growth + more runway"? (plus a brief discussion of why you think adding +6 months runway would increase impact). Optimal could imply a better rate of growth, when the difference seems to be more stability.

Anyway, just donated -- although the odds of me moving away from GiveWell-style projects for my object-level giving are relatively modest, I think it's really important to have a good range of effective options for donors with various interests and philosophical positions.

I didn't expect people to agree with this comment, but I would be interested to know why they disagree! (Some people have commented below, but I don't imagine that covers all the actual reasons people had)

Hi Ben. It's a pity you didn't comment on the substance of my post, just proposed a minor correction. I hope you'll be able to comment later.

You point out EA Norway, which I was aware of, but I think it's the only one and decided not to mention it (I've even been to the annual conference and apologise to the Norwegians - credit where credit's due). But that seems to be the exception that proves the rule. Why are there no others? I've heard on the grapevine that CEA discourages it which seems, well, sinister. Seems a weird coincidence there are nearly no democrat... (read more)

It's a pity you didn't comment on the substance of my post, just proposed a minor correction

Thanks for the nudge! Yeah I should have said that I agree with a lot of your comment. There are a few statements that are (IMO) hyperbolic, but if your comment was more moderate I suspect I would agree quite a lot.

I disagree though that this is a "minor correction" – people making (what the criticized person perceives as) uncharitable criticisms on the Forum seems like one of the major reasons why people don't want to engage here, and I would like there to be less of that.

You point out EA Norway, which I was aware of, but I think it's the only one and decided not to mention it (I've even been to the annual conference and apologise to the Norwegians - credit where credit's due). But that seems to be the exception that proves the rule. Why are there no others? I've heard on the grapevine that CEA discourages it which seems, well, sinister.

I think Efektivni Altruismus is similar (e.g. their bylaws state that members vote in the general assembly), and it has similarly been supported by a grant from CEA.

Well, you're not going to fund stuff if you don't like what the organisation is planning to do. That's generally true.

I don't mind the idea of donors funding a members' society. This happens all the time, right? It's just the leaders have to justify it to the members. It's also not obvious that, if CEA were a democratic society, it would counterfactually lose funding. You might gain some and lose others. I'm not sure I would personally fund 'reformed-CEA' but I would be more willing to do so. 

I take it you're saying making things more democratic can make them more powerful because they then have greater legitimacy, right? More decentralised power -> large actual power?

I suppose part of my motivation to democratise CEA is that it sort of has that leadership role de facto anyway, and I don't see that changing anytime soon (because it's so central). Yet, it lacks legitimacy (i.e. the de jure bit), so a solution is to give it legitimacy.

I guess someone could say, "I don't want CEA to have more power, and it would have if it were a members societ... (read more)

Yeah, I've not spent loads of time trying to think through the details. I'm reluctant to do so unless there's interest from 'central EA' on this.

As ubuntu's comments elsewhere made clear, it's quite hard for someone to replicate various existing community structures, e.g. the conferences, even though no one has a literal monopoly on them, because they are still natural monopolies. If you're thinking "I can't imagine a funder supporting a new version of X if X already exists", then that's a good sign it is a central structure (and maybe should have democrat... (read more)

2
Jason
9mo
Yes, I think the proposal effectively highlights that EA is significantly more centralized than some claim.  My guess is that you would have to add a claim like "Funders should not fund 'central convening and coordinating' functions except as consistent with the community's will" to get anywhere with your proposal as currently sketched. That's a negative norm, less demanding than an affirmative claim to funding. But I haven't exhaustively explored the possibilities either. My own view is that a member-led organization is probably viable and a good idea, but has to be realistic about what functions it could assume.

[Written in a personal capacity, etc. This is the second of two comments, see the first here.]

In this comment, I consider how centralised EA should be. I’m less sure how to think about this. My main, tentative proposal is:

We should distinguish central functions from central control. The more central a function something has, the more decentralised control of it should be. Specifically, I suggest CEA should become a fee-paying members' society that democratically elects its officers - much like the American Philosophical Association does.

I suspect it h... (read more)

4
Michael_PJ
9mo
Having read this I'm still unclear what the benefit of your restructuring of CEA is. It's not a decentralising move (if anything it seems like the opposite to me); it might be a legitimising move, but is lack of legitimacy an actual problem that we have? The main other difference I can see is that it might make CEA more populist in the sense of following the will of the members of the movement more. Maybe I'm as much of an instinctive technocrat as you are a democrat, but it seems far from clear to me that that would be good. Nor that it solves a problem we actually have.

I think it's a mistake to conflate making things more democratic or representative and making them more decentralised - historically the introduction of more representative institutions facilitated the centralisation of states by increasing their ability to tax cities (see e.g. here). In the same way I would expect making CEA/EVF more democratic would increase centralisation by increasing their perceived legitimacy and claim to leadership. 

I'm confused about the mathematics of a fee-paying membership society. I'm having a hard time seeing how that would generate more than a modest fraction of current revenues.

It's not clear what the "central convening and coordinating parts" are. Neither Current-CEA nor Reformed-CEA would have a monopoly on tasks like funding community builders, funding/running conferences, and so on. They are just another vendor who the donors can choose to hire for those purposes. There is and would be no democratic mandate that donors who would like to fund X, Y, and Z ... (read more)

10
[anonymous]
9mo

I suggest CEA should become a fee-paying members' society that democratically elects its officers - much like the American Philosophical Association does.

Okay, but the American Philosophical Association "was founded in 1900 to promote the exchange of ideas among philosophers, to encourage creative and scholarly activity in philosophy, to facilitate the professional work and teaching of philosophers, and to represent philosophy as a discipline" with a modern mission as follows " promotes the discipline and profession of philosophy, both within the acad... (read more)

[Written in a personal capacity, etc. This is the first of two comments: second comment here]

Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I'll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how... (read more)

You say, in effect, "not that centralised", but, from your description, EA seems highly centralised

Your argument that it's not centralised seems to be that EA is not a single legal entity

These are two examples, but I generally didn't feel like your reply really engaged with Will's description of the ways in which EA is decentralized, nor his attempt to look for finer distinctions in decentralization. It felt a bit like you just said "no, it is centralised!".

democracy has the effect of decentralising power.

I don't agree with this at all. IMO democracy often... (read more)

3
Ben_West
9mo
I think you mean something like "CEA's strategy should be determined by the vote of (some set of people)", which is a fine position to have, but there are clearly democratic elements in EA (democratically run organizations like EA Norway, individuals choosing to donate their money without deference to a coordinating body, etc.).

Yeah, I guess I mean genuinely new projects, rather than new tokens of the same type of project (eg group organisers are running the same thing in different places).

As MacAskill points out, it's pretty hard to run $1m+/yr project (or even less, tbh) without Open Philanthropy supporting it.

But, no, I'm not thinking about centralisation in terms of micromanagement, so I don't follow your comment. You can have centralised power without micromanagement.

-3
Ben_West
9mo
What does it mean to have centralized power without micromanagement? Like I could theoretically force a group organizer to use a different font, I just choose not to?

Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.

I wasn't sure about the 'do-ocracy' thing either. Of course, it's true that no one's stopping you from starting whatever project you want - I mean, EA concerns the activities of private citizens. But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.

But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project

I think there is a steelman of your argument which seems more plausible to me, but taken at face value this statement just seems clearly false?

E.g. there are >650 group organizers – how many of them do you think have met the people on that "senior EA's" list even once? I haven't even met everyone on the list, despite being on it!

When I think of highly centralized "conventional organizations" I think of Marissa Mayer at Google per... (read more)

Jack - or others who have run open board rounds recently - can you say more concretely what your advertisement and selection process was? In HLI, we're thinking about doing an open board round soon, but I'm not sure exactly how we'd do it.

For reference, for selection for staff, we standardly do an initial application, two interviews and two test tasks (1 about 2 hours, 1 about 5 hours). This doesn't seem like obviously the right process for board members: I'm hesitant to ask board members to do (long) test tasks, as they would be volunteers, plus I'm not s... (read more)

5
Jack Lewars
9mo
I think the application process is spelt out in the application pack, which is still live here: https://1fortheworld.org/jobs-at-oftw

We did an initial screen on the application form, then 3 people reviewed the remaining candidates in more depth to form a longlist, and now 10 people are having three 30-min 'informal chats' with me and two Board members. Finally, our recommendations will go to the whole Board. In each round, we had three opinions and the other two were Board members.

Grayden and the EA Good Governance Project will advise you to keep the ED out of the process, as they answer to the Board, but I found this was impractical as I have so much more time I can dedicate to moving the process on as part of my job. If you'd like details on questions asked etc., let me know.

Thanks! Ah, in that case, I'm not sure what that community-building money was spent on - but I guess that's not the question you're asking!

2
Vaidehi Agarwalla
9mo
Yeah I considered doing some analysis on the top grantees but ran out of time! Perhaps a future post :)

Thanks very much for this. I see that OP's longterm pot is really big and I'm wondering if that's all community building, and also what counts as community building.

If you spend money on, say, AI safety research, I guess I'd see that as more like object-level funding than community building. Whereas 'pure' community building is the sort of thing the local groups do: bringing people together to meet, rather than funding specific work for the attendees to do. (If someone funded AMF and their team grew, I'm not sure people would say that's GH and W community bu... (read more)

2
Vaidehi Agarwalla
9mo
Hey Michael! It's a good question. I have not counted any object-level work for any cause (e.g. I included funding for a co-working space, but not the AI safety labs that work out of the space). The OP LT team has not made any grants that fall into that category, as far as I could see.

However, I did count most (?) longtermist field-building projects in my estimates, because a lot of them feel like they importantly shape the EA community, since most of those people would participate in EA-branded events and some would consider themselves part of the community etc. On the flip side, many university groups that are EA branded also end up promoting longtermist careers/causes more than others. So trying to separate this out is a bit messy.

One thing I didn't include due to time constraints is a couple of animal orgs like Animal Advocacy Careers, but I expect the total funding for orgs like that to be <$2M total.

Or how about, rather than "too rich", it has "no room for more funding" (noRFMF)?

Yes, glad the strategy fortnight is happening. But this is fully 6 months post-FTX. And I think it's fair to say there's been a lack of communication. IME people don't mind waiting so much, so long as they have been told what's going to happen.

4
Ben_West
9mo
Yeah, I agree that some people were slow to communicate with the public (indeed, that was part of my motivation for organizing the strategy fortnight). I was just commenting that your use of the present tense seemed a little odd.

I see a couple of people have disagreed with this. Curious to know why people disagree! Is this the wrong model of what crisis response looks like? Am I being too harsh about 2 and 3 not happening? Do I have the wrong model of what should happen to restore trust? Personally, I would love to feel that EA handled FTX really well.

9
Ben_West
9mo
I didn't disagree vote but did feel a bit like "getting people to share a vision for the future is kind of the whole point of EA Strategy Fortnight, no?" 

I agree with you that the loss of trust in leaders really stands out. I think it's worth asking why that happened and what could have been done better. Presumably people will differ on this, but here's roughly how I would expect a crisis to be managed well:

  1. Crisis emerges.
  2. Those in positions of authority quickly take the lead, say what needs to be changed, and communicate throughout
  3. Changes are enacted
  4. Problem is solved to some degree and everyone moves on.

What dented my trust was that I didn't and haven't observed 2 or 3 happening. When FTX blew up, va... (read more)

3
NickLaing
9mo
Well articulated and I completely agree, love it.

Thanks for writing this up. I've often thought about EA in terms of waves (borrowing the idea from feminist theory) but never put fingers to keyboard. It's hard to do, because there is so much vagueness and so many currents and undercurrents happening. Some bits that seem missing:

You can identify waves within causes areas as well as between cause areas. Within 'future people', it seemed to go from X-risks to 'broad longtermism' (and I guess it's now going back to a focus on AI). Within animals, it started with factory-farmed land animals, and now seems to i... (read more)

2
ChanaMessinger
9mo
I like the point of waves within cause areas! Though I suspect there would be a lot of disagreement - e.g. people who kept up with the x-risk approach even as WWOTF was getting a lot of attention.

I think surely EA is still pluralistic ("a question") and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could have new influence even if EAs in today's hub cities are getting a little rigid.)

In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies (e.g. consider the humble Qwerty keyboard―impossible to change, right? Well, I'm ... (read more)

(Obviously we can't put things on 0-10 scales but) I just want to add that a 0.5/10 decrease should be considered a medium-big drop.

Or, a senior AI researcher says that AI poses no risk because it's years away. This doesn't really make sense - what will happen in a few years? But he does seem smart and work for a prestigious tech company, so...

Yes, reflecting on this since posting, I have been wondering if there is some important distinction between the principle of charity applied to arguments in the abstract vs its application to the (unstated) reasoning of individuals in some particular instance. Steelmanning seems good in the former case, because you're aiming to work your way to the truth. But steelmanning goes too far, and becomes mithrilmanning, in the latter case, when you start assuming the individuals must have good reasons, even though you don't know what they are.

Perhaps mithrilmanning involves an implicit argument from authority ("this person is an authority. Therefore they must be right. Why might they be right?").

Linkpost from the HLI blog

Minor points

These respond to bits of the discussion in the order they happened.

1. On the meaning of SWB

Rob and Elie jump into discussing SWB without really defining it. Subjective wellbeing is an umbrella term that refers to self-assessments of life. It’s often broken down into various elements, each of which can be measured separately. There are (1) experiential measures, how you feel during your life – this is closest to ‘happiness’ in the ordinary use of the word; (2) evaluative measures, which are assessments of life as a whole; the... (read more)
