Hello Bob and team. Looking forward to reading this. To check, are you planning to say anything explicitly about your approach to moral uncertainty? I can't see anything directly mentioned in section 5, which is where I guessed it would go.
On that note Bob, you might recall that, a while back, I mentioned to you some work I'm doing with a couple of other philosophers on developing an approach to moral uncertainty along these lines that will sometimes justify the practice of worldview diversification. That draft is nearly complete and your post series inspire...
This report seems commendably thorough and thoughtful. Could you possibly spell out its implications for effective altruists though? I take it the conclusion is that humanity was most violent in the subsistence farming period, rather than before or after, but I'm not sure what to make of that. Presumably, it shows that how violent people are changes quite radically in different contexts, so should I be reassured if, as seems likely, modern-type societies will continue? Returns to hunter-gathering and subsistence farming do not seem on the cards.
Sorry if I've missed something. But I reckoned that, if it wasn't obvious to me, some others would have missed it too.
I think it's mainly relevant for the part of EA that is interested in long-run history and its implications for the long-term. There's also some stuff in the discussion about how the drivers of conflict are a small subset of sociopaths/malevolent people; humanity is not innately violent even though we have high rates of violence relative to other primates. I think this is relevant for the future of violence.
Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]
First, the piece you're referring to is a book review in an aca...
Yup, I'd be inclined to agree it's easier to ground the idea that life is getting better for humans on objective measures. The author's comparison is made in terms of happiness though:
This work draws heavily on the Moral Weight Project from Rethink Priorities and relies on the same assumptions: utilitarianism, hedonism, valence symmetry, unitarianism, use of proxies for hedonic potential, and more
I'm actually not sure how I'd think about the animal side of things on the capabilities approach. Presumably, factory farming looks pretty bad on that, so there are increasingly many animals with low/negative capability lives, so unclear how this works out on a global level.
This is a minor comment but you say
There’s compelling evidence that life has gotten better for humans recently
I don't think that is compelling evidence. Neither Pinker nor Karnofsky look at averages of self-reported happiness or life satisfaction, which would be the most relevant and comparable evidence, given your assumptions. According to the so-called Easterlin Paradox, average subjective wellbeing has not been going up over the past few decades and won't with further economic growth. There have been years of debates over this (I confess I got ...
I strongly agree with your main point on uncertainty, and I'll defer to you on the (lack of) consensus among happiness researchers on the question of whether or not life is getting better for humans given their paradigm.
However, I think one can easily ground out the statement "There’s compelling evidence that life has gotten better for humans recently" in ways that do not involve subjective wellbeing and if one does so then the statement is quite defensible.
While I agree that net global welfare may be negative and declining, in light of the reasoning and evidence presented here, I think you could and should have claimed something like this: "net global welfare may be negative and declining, but it may also be positive and increasing, and really we have no idea which it is - any assessment of this type is enormously speculative and uncertain".
As I read the post, the two expressions that popped into my head were "if it's worth doing, it's worth doing with made-up numbers" and "if you saw how the sausage is m...
Thanks for this and great diagrams! To think about the relationship between EA and AI safety, it might help to think about what EA is for in general. I see a/the purpose of EA as helping people figure out how they can do the most good - to learn about the different paths, the options, and the landscape. In that sense, EA is a bit like a university, or a market, or maybe even just a signpost: once you've learnt what you needed, or found what you want and where to go, you don't necessarily stick around: maybe you need to 'go out' in the world to do what calls yo...
I suppose you could think of it as a matter of degree, right? Submitting feedback, doing interviews etc. are a good start, but involve people having less of a say than either 1. being part of the conversation or 2. having decision-making power, eg through a vote. People like to feel their concerns are heard - not just in EA, but in general - and when eg. a company says "please send in this feedback form" I'm not sure many people feel as heard as if someone (important) from that company listens to you live and publicly responds.
Thanks for this, which I read with interest! Can I see if I understood this correctly?
Hey LondonGal, thank you for following up on this. I appreciate you clarifying your intentions about your post. Our team has read your comments and will take your feedback into consideration in our future work. I hope you'll forgive us for not responding in detail at this time. We are currently trying to focus on our current projects (and to avoid spending too much time on the EA forum, which we've done a lot of, particularly recently!). I expect that some (but probably not all) of the points you’ve raised in your original post will be addressed in some of our upcoming research. Thanks again for engaging with our work, and for sending the olive branch. It’s been received and we’ll look forward to future constructive interactions.
Hello LondonGal (sorry, I don't know your real name). I'm glad that, after your recent scepticism, you looked further into subjective wellbeing data and think it can be useful. You've written a lot and I won't respond to it in detail.
I think the most important points to make are (1) there is a lot more research than you suggest and (2) it didn't just start around COVID.
You are right that, if you search for "subjective wellbeing", not much comes up (I get 706 results on PubMed). However, that's because the trend among researchers to refer to "su...
Hello Linch. We're reluctant to recommend organisations that we haven't been able to vet ourselves but are planning to vet some new mental health and non-mental health organisations in time for Giving Season 2023. The details are in our Research Agenda. For mental health, we say
We expect to examine Friendship Bench, Sangath, and CorStone unless we find something more promising.
On how we chose StrongMinds, you've already found our selection process. Looking back at the document, I see that we don't get into the details, but it wasn't just procedural. W...
[I don’t plan to make any (major) comments on this thread after today. It’s been time-and-energy intensive and I plan to move back to other priorities]
Hello Jason,
I really appreciated this comment: the analysis was thoughtful and the suggestions constructive. Indeed, it was a lightbulb moment. I agree that some people do have us on epistemic probation, in the sense they think it’s inappropriate to grant the principle of charity, and should instead look for mistakes (and conclude incompetence or motivated reasoning if they find them).
I would disagree tha...
I think your last sentence is critical -- coming up with ways to improve epistemic practices and legibility is a lot easier where there are no budget constraints! It's hard for me to assess cost vs. benefit for suggestions, so the suggestions below should be taken with that in mind.
For any of HLI's donors who currently have it on epistemic probation: Getting out of epistemic probation generally requires additional marginal resources. Thus, it generally isn't a good idea to reduce funding based on probationary status. That would make about as much sense as ...
Hello Gregory. With apologies, I’m going to pre-commit to making this my last reply to you on this post. This thread has been very costly in terms of my time and mental health, and your points below are, as far as I can tell, largely restatements of your earlier ones. As briefly as I can, and point by point again.
1.
A casual reader looking at your original comment might mistakenly conclude that we only used StrongMinds' own study, and no other data, for our evaluation. Our point was that SM’s own work has relatively little weight, and we rely on m...
Hello Jason. FWIW, I've drafted a reply to your other comment and I'm getting it checked internally before I post it.
On this comment about you not liking that we hadn't updated our website to include the new numbers: we all agree with you! It's a reasonable complaint. The explanation is fairly boring: we have been working on a new charity recommendations page for the website, at which point we were going to update the numbers and add a note, so we could do it all in one go. (We still plan to do a bigger reanalysis later this year.) However, that has gone sl...
Hello Jack (again!),
This is because plausible person-affecting views will still find it important to improve the lives of future people who will necessarily exist.
I agree with this. But the challenge from the Non-Identity problem is that there are few, if any, necessarily existing future individuals: what we do causes different people to come into existence. This raises a challenge to longtermism: how can we make the future go better if we can't make it go better for anyone in particular? If an outcome is not better for anyone, how can it be better? In the...
Hello Jack. A quick reply: I'm not sure how well the arguments for improving global wellbeing being a sensible longterm priority will stack up. I suspect they won't, on closer inspection, but it seems worth investigating at some point.
Hello Matt and thanks for your overall vote of confidence, including your comments below to Nathan.
Could you expand on what you said here?
I may also have been a little sus early (sorry Michael) on but HLI's work has been extremely valuable
I'm curious to know why you were originally suspicious and what changed your mind. Sorry if you've already stated that below.
Hello Nathan. Thanks for the comment. I think the only key place where I would disagree with you is what you said here
If, as seems likely the forthcoming RCT downgrades SM a lot and the HLI team should have seen this coming, why didn't they act?
As I said in response to Greg (to which I see you've replied) we use the conventional scientific approach of relying on the sweep of existing data - rather than on our predictions of what future evidence (from a single study) will show. Indeed, I'm not sure how easily these would come apart: I would base my predicti...
Hello Richard. Glad to hear this! I've just sent you HLI's bank details, which should allow you to pay without card fees (I was inclined to share them directly here, but was worried that would be unwise). I don't have an answer to your second question, I'm afraid.
Hello Jack. I think people can and will have different conceptions of what the criteria to be on a/the 'top charity' list are, including what counts as sufficient strength of evidence. If strength of evidence is essential, that may well rule out any interventions focused on the longterm (whose effects we will never know) as well as deworming (the recommendation of which is substantially based on a single long-term study). The evidence relevant for StrongMinds was not trivial though: we drew on 39 studies of mental health interventions in LICs to calibrate ...
Hi Greg,
Thanks for this post, and for expressing your views on our work. Point by point:
Hello Michael,
Thanks for your reply. In turn:
1:
HLI has, in fact, put a lot of weight on the d = 1.72 Strongminds RCT. As table 2 shows, you give a weight of 13% to it - joint highest out of the 5 pieces of direct evidence. As there are ~45 studies in the meta-analytic results, this means this RCT is being given equal or (substantially) greater weight than any other study you include. For similar reasons, the Strongminds phase 2 trial is accorded the third highest weight out of all studies in the analysis.
HLI's analysis explains the rationale behind t...
Props on the clear and gracious reply.
we think it's preferable to rely on the existing evidence to draw our conclusions, rather than on forecasts of as-yet unpublished work.
I sense this is wrong. If I think the unpublished work will change my conclusions a lot, I change my conclusions some of the way now, though I understand that's a weird thing to do and perhaps hard to justify. Nonetheless I think it's the right move.
Hello Alex,
Reading back on the sentence, it would have been better to put 'many' rather than 'all'. I've updated it accordingly. TLYCS don't mention WELLBYs, but they did make the comment "we will continue to rely heavily on the research done by other terrific organizations in this space, such as GiveWell, Founders Pledge, Giving Green, Happier Lives Institute [...]".
It's worth restating the positives. A number of organisations have said that they've found our research useful. Notably, see the comments by Matt Lerner (Research Director, Founders Pledge) be...
Hello James. Apologies, I've removed your name from the list.
To explain why we included it, although the thrust of your post was to critically engage with our research, the paragraph was about the use of the SWB approach for evaluating impact, which I believed you were on board with. In this sense, I put you in the same category as GiveWell: not disagreeing about the general approach, but disagreeing about the numbers you get when you use it.
Thanks for editing Michael. Fwiw I am broadly on board with swb being a useful framework to answer some questions. But I don’t think I’ve shifted my opinion on that much so “coming round to it” didn’t resonate
Thanks! Yes, that's right. 'Lean' is small team, 12 month budget. 'Growth' is growing the team, 12 month budget. 'Optimal growth' is just 'growth', but 18 month budget.
I'm now wondering if we should use different names...
The first two are good.
"Growth + more runway"? (plus a brief discussion of why you think adding +6 months runway would increase impact). Optimal could imply a better rate of growth, when the difference seems to be more stability.
Anyway, just donated -- although the odds of me moving away from GiveWell-style projects for my object-level giving are relatively modest, I think it's really important to have a good range of effective options for donors with various interests and philosophical positions.
I didn't expect people to agree with this comment, but I would be interested to know why they disagree! (Some people have commented below, but I don't imagine that covers all the actual reasons people had)
Hi Ben. It's a pity you didn't comment on the substance of my post, just proposed a minor correction. I hope you'll be able to comment later.
You point out EA Norway, which I was aware of, but I think it's the only one and decided not to mention it (I've even been to the annual conference and apologise to the Norwegians - credit where credit's due). But that seems to be the exception that proves the rule. Why are there no others? I've heard on the grapevine that CEA discourages it which seems, well, sinister. Seems a weird coincidence there are nearly no democrat...
It's a pity you didn't comment on the substance of my post, just proposed a minor correction
Thanks for the nudge! Yeah I should have said that I agree with a lot of your comment. There are a few statements that are (IMO) hyperbolic, but if your comment was more moderate I suspect I would agree quite a lot.
I disagree though that this is a "minor correction" – people making (what the criticized person perceives as) uncharitable criticisms on the Forum seems like one of the major reasons why people don't want to engage here, and I would like there to be less of that.
You point out EA Norway, which I was aware of, but I think it's the only one and decided not to mention it (I've even been to the annual conference and apologise to the Norwegians - credit where credit's due). But that seems to be the exception that proves the rule. Why are there no others? I've heard on the grapevine that CEA discourages it which seems, well, sinister.
I think Efektivni Altruismus is similar (e.g. their bylaws state that members vote in the general assembly), and it has similarly been supported by a grant from CEA.
Well, you're not going to fund stuff if you don't like what the organisation is planning to do. That's generally true.
I don't mind the idea of donors funding a members' society. This happens all the time, right? It's just the leaders have to justify it to the members. It's also not obvious that, if CEA were a democratic society, it would counterfactually lose funding. You might gain some and lose others. I'm not sure I would personally fund 'reformed-CEA' but I would be more willing to do so.
I take it you're saying making things more democratic can make them more powerful because they then have greater legitimacy, right? More decentralised power -> large actual power?
I suppose part of my motivation to democratise CEA is that it sort of has that leadership role de facto anyway, and I don't see that changing anytime soon (because it's so central). Yet, it lacks legitimacy (i.e. the de jure bit), so a solution is to give it legitimacy.
I guess someone could say, "I don't want CEA to have more power, and it would have if it were a members societ...
Yeah, I've not spent loads of time trying to think through the details. I'm reluctant to do so unless there's interest from 'central EA' on this.
As ubuntu's comments elsewhere made clear, it's quite hard for someone to replicate various existing community structures, e.g. the conferences, even though no one has a literal monopoly on them, because they are still natural monopolies. If you're thinking "I can't imagine a funder supporting a new version of X if X already exists", then that's a good sign it is a central structure (and maybe should have democrat...
[Written in a personal capacity, etc. This is the second of two comments, see the first here.]
In this comment, I consider how centralised EA should be. I’m less sure how to think about this. My main, tentative proposal is:
We should distinguish central functions from central control. The more central a function something has, the more decentralised control of it should be. Specifically, I suggest CEA should become a fee-paying members’ society that democratically elects its officers - much like the American Philosophical Association does.
I suspect it h...
I think it's a mistake to conflate making things more democratic or representative and making them more decentralised - historically the introduction of more representative institutions facilitated the centralisation of states by increasing their ability to tax cities (see e.g. here). In the same way I would expect making CEA/EVF more democratic would increase centralisation by increasing their perceived legitimacy and claim to leadership.
I'm confused about the mathematics of a fee-paying membership society. I'm having a hard time seeing how that would generate more than a modest fraction of current revenues.
It's not clear what the "central convening and coordinating parts" are. Neither Current-CEA nor Reformed-CEA would have a monopoly on tasks like funding community builders, funding/running conferences, and so on. They are just another vendor who the donors can choose to hire for those purposes. There is and would be no democratic mandate that donors who would like to fund X, Y, and Z ...
I suggest CEA should become a fee-paying members’ society that democratically elects its officers - much like the American Philosophical Association does.
Okay, but the American Philosophical Association "was founded in 1900 to promote the exchange of ideas among philosophers, to encourage creative and scholarly activity in philosophy, to facilitate the professional work and teaching of philosophers, and to represent philosophy as a discipline" with a modern mission as follows " promotes the discipline and profession of philosophy, both within the acad...
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I'll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how...
You say, in effect, "not that centralised", but, from your description, EA seems highly centralised
Your argument that it's not centralised seems to be that EA is not a single legal entity
These are two examples, but I generally didn't feel like your reply really engaged with Will's description of the ways in which EA is decentralized, nor his attempt to look for finer distinctions in decentralization. It felt a bit like you just said "no, it is centralised!".
democracy has the effect of decentralising power.
I don't agree with this at all. IMO democracy often...
Yeah, I guess I mean genuinely new projects, rather than new tokens of the same type of project (eg group organisers are running the same thing in different places).
As MacAskill points out, it's pretty hard to run $1m+/yr project (or even less, tbh) without Open Philanthropy supporting it.
But, no, I'm not thinking about centralisation in terms of micromanagement, so I don't follow your comment. You can have centralised power without micromanagement.
Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.
I wasn't sure about the 'do-ocracy' thing either. Of course, it's true that no one's stopping you from starting whatever project you want - I mean, EA concerns the activities of private citizens. But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.
But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project
I think there is a steelman of your argument which seems more plausible to me, but taken at face value this statement just seems clearly false?
E.g. there are >650 group organizers – how many of them do you think have met the people on that "senior EA's" list even once? I haven't even met everyone on the list, despite being on it!
When I think of highly centralized "conventional organizations" I think of Marissa Mayer at Google per...
Jack - or others who have run open board rounds recently - can you say more concretely what your advertisement and selection process was? In HLI, we're thinking about doing an open board round soon, but I'm not sure exactly how we'd do it.
For reference, for selection for staff, we standardly do an initial application, two interviews and two test tasks (1 about 2 hours, 1 about 5 hours). This doesn't seem like obviously the right process for board members: I'm hesitant to ask board members to do (long) test tasks, as they would be volunteers, plus I'm not s...
Thanks! Ah, in that case, I'm not sure what that community-building money was spent on - but I guess that's not the question you're asking!
Thanks very much for this. I see that OP's longterm pot is really big and I'm wondering if that's all community building, and also what counts as community building.
If you spend money on, say, AI safety research, I guess I'd see that as more like object-level funding than community building. Whereas 'pure' community building is the sort of thing the local groups do: bringing people together to meet, rather than funding specific work for the attendees to do. (If someone funded AMF and their team grew, I'm not sure people would say that's GH and W community bu...
Yes, glad the strategy fortnight is happening. But this is fully 6 months post-FTX. And I think it's fair to say there's been a lack of communication. IME people don't mind waiting so much, so long as they have been told what's going to happen.
I see a couple of people have disagreed with this. Curious to know why people disagree! Is this the wrong model of what crisis response looks like? Am I being too harsh about 2 and 3 not happening? Do I have the wrong model of what should happen to restore trust? Personally, I would love to feel that EA handled FTX really well.
I agree with you that the loss of trust in leaders really stands out. I think it's worth asking why that happened and what could have been done better. Presumably people will differ on this, but here's roughly how I would expect a crisis to be managed well:
What dented my trust was that I didn't and haven't observed 2 or 3 happening. When FTX blew up, va...
Thanks for writing this up. I've often thought about EA in terms of waves (borrowing the idea from feminist theory) but never put fingers to keyboard. It's hard to do, because there is so much vagueness and so many currents and undercurrents happening. Some bits that seem missing:
You can identify waves within cause areas as well as between cause areas. Within 'future people', it seemed to go from X-risks to 'broad longtermism' (and I guess it's now going back to a focus on AI). Within animals, it started with factory-farmed land animals, and now seems to i...
I think surely EA is still pluralistic ("a question") and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could have new influence even if EAs in today's hub cities are getting a little rigid.)
In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies (e.g. consider the humble Qwerty keyboard―impossible to change, right? Well, I'm ...
(Obviously we can't put things on 0-10 scales but) I just want to add that a 0.5/10 decrease should be considered a medium-big drop.
Or, senior AI researcher says that AI poses no risk because it's years away. This doesn't really make sense - what will happen in a few years? But he does seem smart and work for a prestigious tech company, so...
Yes, reflecting on this since posting, I have been wondering if there is some important distinction between the principle of charity applied to arguments in the abstract vs its application to the (unstated) reasoning of individuals in some particular instance. Steelmanning seems good in the former case, because you're aiming to work your way to the truth. But steelmanning goes too far, and becomes mithrilmanning, in the latter case when you start assuming the individuals must have good reasons, even though you don't know what they are.
Perhaps mithrilmanning involves an implicit argument from authority ("this person is an authority. Therefore they must be right. Why might they be right?").
Minor points
These respond to bits of the discussion in the order they happened.
1. On the meaning of SWB
Rob and Elie jump into discussing SWB without really defining it. Subjective wellbeing is an umbrella term that refers to self-assessments of life. It’s often broken down into various elements, each of which can be measured separately. There are (1) experiential measures, how you feel during your life – this is closest to ‘happiness’ in the ordinary use of the word; (2) evaluative measures, an assessment of life as a whole; the...
Thanks for this. I think this is very valuable and really appreciate this being set out. I expect to come back to it a few times. One query and one request from further work - from someone, not necessarily you, as this is already a sterling effort!
I've heard Thorstad's TOP talk a couple of times, but it's now a bit foggy and I can't remember where his ends and yours starts. Is it that Thorstad argues (some version of) longtermism relies on the TOP thesis, but doesn't investigate whether TOP is true, whereas you set about investigating if it is true?
T