Luke Kemp and I just published a paper which criticises the field of existential risk studies for lacking a rigorous and safe methodology:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225

It could be a promising sign of epistemic health that critiques of leading voices come from early-career researchers within the community. Unfortunately, the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

We lost sleep, time, friends, collaborators, and mentors because we disagreed over whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset.

We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.

What you see here is version 28. We have had approximately 20 reviewers, around half of whom we sought out as scholars who would be sceptical of our arguments. We believe it is time to accept that many people will disagree with several points we make, regardless of how these are phrased or nuanced. We hope you will voice your disagreement based on the arguments, not the perceived tone of this paper.

We always saw this paper as a reference point and platform to encourage greater diversity, debate, and innovation. However, the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views. Making the case for democracy was heavily contested, despite reams of supporting empirical and theoretical evidence. In contrast, ideas like differential technological development and the NTI framework have been wholesale adopted despite almost no underpinning peer-reviewed research. I wonder how many of the ideas we critique here would have seen the light of day if the same suspicious scrutiny had been applied to more orthodox views and their authors.

We wrote this critique to help progress the field. We do not hate longtermism, utilitarianism, or transhumanism. In fact, we personally agree with some facets of each. But our personal views should barely matter. We ask of you what we have assumed to be true for all the authors we cite in this paper: that the author is not equivalent to the arguments they present, that arguments will change, and that it doesn’t matter who said it, but that it was said.

The EA community prides itself on being able to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.

Many EAs we showed this paper to exemplified the ideal. They assessed the paper’s merits on the basis of its arguments rather than group membership, engaged in dialogue, disagreed respectfully, and improved our arguments with care and attention. We thank them for their support and for meeting the challenge of reasoning in the midst of emotional discomfort. Others, however, accused us of lacking academic rigour and harbouring bad intentions.

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and that if the approach does exist then it certainly isn’t dominant. It was these same people who then tried to prevent this paper from being published, largely out of fear that publishing might offend key funders who are aligned with the TUA.

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.

The greatest predictor of how negatively a reviewer would react to the paper was their personal identification with EA. Writing a critical piece should not incur negative consequences on one’s career options, personal life, and social connections in a community that is supposedly great at inviting and accepting criticism.

Many EAs have privately thanked us for "standing in the firing line" because they found the paper valuable to read but would not dare to write it. Some tell us they have independently thought of and agreed with our arguments but would like us not to repeat their name in connection with them. This is not a good sign for any community, never mind one with such a focus on epistemics. If you believe EA is epistemically healthy, you must ask yourself why your fellow members are unwilling to express criticism publicly. We too considered publishing this anonymously. Ultimately, we decided to support a vision of a curious community in which authors should not have to fear their name being associated with a piece that disagrees with current orthodoxy. It is a risk worth taking for all of us. 

The state of EA is what it is due to structural reasons and norms (see this article). Design choices have made it so, and they can be reversed and amended. EA does not fail because the individuals in it are ill-intentioned; good intentions only get you so far.

EA needs to diversify its funding sources by breaking up big funding bodies and by reducing each org's reliance on EA funding and tech-billionaire funding. It needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify the academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, and stop classifying everything as info hazards…amongst other structural changes. I now believe EA needs to make such structural adjustments in order to stay on the right side of history.

305 comments

Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.

I just want to say that I think this is a beautifully accepting response to criticism. Not defensive. Says hey yes maybe there is a problem here. Concretely offers time and money and a plan to look into things more. Really lovely, thank you Will. 

Thanks for stating this publicly here, Will!

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.

I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?

Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I bel... (read more)

I would agree that the article is too wide-ranging. There's a whole host of content, ranging from criticisms of expected value theory and arguments for degrowth to arguments for democracy and criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics.

evelynciara · 5mo
Where? The paper doesn't mention economic growth at all.

The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don't seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
"Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as ... (read more)

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential."

Point taken. Thank you for pointing this out.

"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is a feasible option, or is at least worth investigating.

I think this is more about stopping the development of specific technologies - for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn't necessarily related to degrowth - for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.

anonymousEA · 5mo
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.

Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.

anonymousEA · 5mo
It is morally tenable under some moral codes but not others. That's the point.

Several times the case against the TUA was not actually argued


I think that they didn't try to oppose the TUA in the paper, or make the argument against it themselves. To quote: "We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field."

I also think that they don't need to prove that others are wrong to show that the lack of diversity has harms - as you agreed.

If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. 

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.

The responsibility of the researcher is to make their case as bulletproofed as possible, and designed to c

... (read more)

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.


If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I'm not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if on top of that those bad arguments are harmful to EA causes, that should expedite the decision.

To be clear, that is absolutely not to say that publishing Democratizing Risk is/was justification for firing or cutting funding; I am still very much talking abstractly.

Guy Raveh · 5mo
I can't speak for David, but personally I think it's important that no one does this. Freedom of speech and freedom of research are important, and as long as someone doesn't call to intentionally harm or discriminate against another, it's important that we don't condition funding on agreement with the funders' views. So, I completely disagree with this.

Freedom of speech and freedom of research are important, and as long as someone doesn't call to intentionally harm or discriminate against another, it's important that we don't condition funding on agreement with the funders' views.

This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?

I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy says that in so far as the funder wants research to be conducted to deepen our understanding of a specific topic, the funders should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict what you said, as the focus is still on work on that specific topic.

Guy Raveh · 5mo
When you say "surely", what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better, in terms of truth-seeking, than that of researchers receiving unconditional funding? Of that I'm not sure.

And now suppose it's hard to measure whether a researcher conforms with the initial assumption, and in practice it is done by continual qualitative evaluation by the funder - is it now really only that initial assumption (e.g. animals deserve moral consideration) that's the condition for funding, or is it now a measure of how much the research conforms with the funder's specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have serious doubts about whether the research produces valuable results (cf. publication bias).

>it's important that we don't condition funding on agreement with the funders' views.

Surely we can condition funding on the quality of the researcher's past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn't guarantee a sinecure either. 

If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

I theoretically agree, but I think it's hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.

For example, I don't think the average research quality by non-tenured professors (who are supposedly judged by the merits of their work) is better than that of tenured professors.

I think this might just be unavoidably hard.

Like, it seems clear that funders shouldn't fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).

I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.

The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.

I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others' analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.

That said, I don't think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you're biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.

Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you're basically in the same position you were with only one funder. If they're too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.

Davidmanheim · 5mo

I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. And so on the meta level, I think it's a bit unreasonable to criticize the paper's concrete suggestion by saying that it's a hard problem, and their ideas would help, but they wouldn't be a panacea - clearly, if "fixes everything" is the bar for concrete ideas, we should all go home.

On the object level, I agree that there are challenges, but I think the dichotomy between too few / too many is overstated in two ways. First, I think that multiple funders who are aligned are still far less likely to end up not funding people because they upset someone specific. Even in overlapping social circles, this problem is mitigated. And second, I think that "too unaligned" is the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increase dangers.

And having more semi-aligned money seems unlikely to lead to EA being more broad and vague than the status quo, where EA is at least 3 different movements (global poverty/human happiness, animal welfare, and longtermist existential risk), and probably more, since if you look into those groups you could easily split them further. So I'd actually think that splitting things into distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.
Will Bradshaw · 5mo

Firstly, I wasn't responding to the OP; I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn't go away even in a world with more diverse funding. You brought up "diversify funding" as a solution to that problem, and I responded that it is helpful but insufficient. I didn't say anything critical of the OP's proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don't understand your accusation of unreasonableness here at all.

Secondly, "have more diversity in funders" is not remotely a concrete proposal. It's a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is "as concrete as [you] can imagine" then we are operating under different definitions of "concrete".
Davidmanheim · 5mo

I don't really want the discussion to focus entirely on the meta-level, but the conversation went something like: "we can condition funding on the quality of the researcher's past work" -> "I think it's hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors" -> "more diversity in funders would help" (which was the original claim in the post!) -> "I don't think this solves the problem... more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other."

So I pointed out that more diversity, which was the post's suggestion that I was referring back to, was as concrete a solution to the general issue of "it's hard to separate judgements about research quality from disagreement with its conclusions" as I can imagine. But I don't think we're using different definitions at all. At this point, it seems clear you wanted something more concrete ("have Open Philanthropy split its budget in the following way"), but that wouldn't have solved the general problem which was being discussed. Which was why I said I can't imagine a more concrete solution to the problem you were discussing.

In any case, I'm much more interested in the object-level discussion of what would help, or not, and why.

I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be "responsible for ensuring harmful and wrong ideas are not widely circulated" through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.

A couple of commenters here have edged closer to this strong view than I'm comfortable with, and I'm happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.

That said, I do agree that "consistently making bad arguments should eventually lead to the withdrawal of funding", and that this problem is hard (see my other reply to Guy below).

jsteinhardt · 5mo
I also agree with you. I would find it very problematic if anyone was trying to "ensure harmful and wrong ideas are not widely circulated". Ideas should be argued against, not suppressed.

Ideas should be argued against, not suppressed.


All ideas? Instructions for how to make contact poisons that aren't traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals' command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed. 

You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.

jsteinhardt · 5mo
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like "instructions" than "arguments", and Rubi was calling for suppressing arguments on the danger that they would be believed.
Davidmanheim · 5mo
The claim was a general one - I certainly don't think that the paper was an infohazard, but the idea that this implies that there is no reason for funders to be careful about what they fund seems obviously wrong. The original question was: "If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?" And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.
Guy Raveh · 5mo
I mostly agree with your comments, but I think we need to stop referring to specific people as leaders of the movement. Will MacAskill's opinion is not really more important than anyone else's.
Davidmanheim · 5mo
I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that's good, since he's spent more time thinking about it, is smarter, and has more insight into what is happening. Second, movements do in fact have leaders, and egalitarianism is great for rights, but direct democracy is a really bad solution to running anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)
CarlaZoeC · 5mo

Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case, then most of the key texts in x-risk would be poorly argued. Importantly, breadth was necessary to make this critique: there are simply many interrelated matters that are worth critical analysis.

"Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus."

As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed, and highlight assumptions that may be incorrect or smuggle in values. Interestingly, it's hard to see how you believe the piece is both polemic and also not directly critiquing the TUA sufficiently. Those two criticisms are in tension. If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).

"You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?"

Of course there are arguments for it, some of which are discussed in the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.

"I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and" ... (read more)

Hi Carla,

Thanks for taking the time to engage with my reply. I'd like to engage with a few of the points you made.

First of all, my point prefaced with 'speaking abstractly' was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits that it should not result in withdrawn funding. On a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.

You said "we do not argue against the TUA, but point out the unanswered questions we observed ... but highlight assumptions that may be incorrect or smuggle in values". Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemic... (read more)

-53 · anonymousEA · 5mo

Here's a Q&A which answers some of the questions raised by reviewers of early drafts. (I had planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here.)

"Do you not think we should work on x-risk?"

  • Of course we should work on x-risk

 

"Do you think the authors you critique have prevented alternative frameworks from being applied to x-risk?"

  • No. It’s not really them we’re criticising, if at all. Everyone should be allowed to put out their ideas.
  • But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.

 

"Do you hate longtermism?"

  • No. We are both longtermists (probs just not the techno-utopian kind).

 

"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"

  • It doesn't matter whether Nick Bostrom speculates or wants to implement surveillance globally. With respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging i
... (read more)

Just noting that I strongly endorse both this format for responding to questions, and the specific responses.

6 · Raven · 5mo
With regard to harshness, I think part of the reason you get different responses is because you're writing in the genre of the academic paper. Since authors have to write in a particular formal style, it's ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it's not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe. For example: As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it's easy to read into it some amount of value judgment around longtermism and longtermists.

I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.

In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.

Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!


You claim that EA needs to...

diversify funding sources by breaking up big funding bodies 

Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in th... (read more)

While I appreciate that we're all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.

Especially given the claim that "EA needs to make such structural adjustments in order to stay on the right side of history".

The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true. 

Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:

"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on... (read more)

That was my reading of VWH too - as a pro tanto argument for extreme surveillance and centralized global governance, provided that the VWH is true. However, many of its proponents seem to believe that the VWH is likely to be true. I do agree that the authors ought to have interpreted the paper more carefully, though.

1 · CarlaZoeC · 5mo
  • It doesn't matter whether Nick Bostrom speculates or wants to implement surveillance globally. With respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging in the article, but…
      • He published in a policy journal, with an opening ‘policy implication’ box
      • He published an outreach article about it in Aeon, which also ends with the sentence: ”If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
      • In public-facing interviews, such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment.
      • The VWH was also published as a German book (why, I don’t know…)

It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis”17, which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?

Thanks for laying this out so clearly. One frustrating aspect of having a community comprised of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead's (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1]

I do understand the value of hypothetical inquiry as part of analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the efforts to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.

In my opinion, it's insuffici... (read more)

The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.

Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions. 

3 · anonea2021 · 5mo
I think it's important to distinguish "anarcho"-capitalist thought (which still needs a state to enforce private property and capital rights and generally doesn't acknowledge the problems of monopolies, existing power imbalances etc.) and actual anarchist/anti-totalitarian policies. All the things you mentioned except the last decentralise control from a democratic state to a moneyed elite, not to a more democratic state, confederation, anarchist commune or whatever.

I really dislike it when left-anarchist-leaning folks put scare quotes around "anarcho" in anarcho-capitalist. In my experience it's a strong indicator that someone isn't arguing in good faith.

I'm not an ancap (or a left-anarchist), but David Friedman and his ilk are very clearly trying to formulate a way for a capitalist society to exist without a state. You might think their plans are unfeasible or undesirable (and I do), but that doesn't mean they're secretly not anarchists.

2 · anonea2021 · 5mo
From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies. Using that label is like calling yourself "vegano-carnivore" because you want to reduce the suffering of eating animals as much as possible while still eating them. Even if you can come up with a justification for it by presenting clearly realizable ways to implement it (e.g. lab-grown meat), it is adopting a label from a community that does not want them to do so. Indeed, there was already a ready-made label, "laissez-faire", but that one has sufficiently negative historical associations that I guess it is to be avoided. Regarding Friedman, I would challenge the statement that he provides ways to organize it without a state, given that he romanticizes medieval Iceland and the western frontier, and I am highly skeptical that the law enforcement/military aspect required for enforcing capital rights would not lead to the tyranny of the robber barons in their company towns again, but I would have to revisit it for a detailed rebuttal.

I don't think most people outside left-anarchism would equate "state" with the existence of any unjust hierarchies. Indeed, defining a state in that way seems to be begging the question with regard to anarchy's desirability and feasibility.

Whether or not Friedman provides ways to organise society without a state, he is clearly trying to do so, at least by any definition of "state" that a non-(left-anarchist) would recognise (e.g. an entity with a monopoly on legitimate violence).

1 · Michael Große · 4mo
I don't see where anonea2021 has made that claim. Did you mean to write "property" instead of "state" in this paragraph? (genuine question) Either way, I'm having trouble following what you want to say with this paragraph. What anonea2021 states: I can confirm that this is indeed the view of every other lineage of anarchists that I'm aware of. The anarchist's goal is to minimize unjust hierarchies. And given that private property (esp. of the means of production) is seen as one of the main causes of unjust hierarchies in today's world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus "anarcho-"capitalism.
6 · Will Bradshaw · 4mo
Yes, it seems like there are some crossed wires here. I claimed that ancaps are "clearly trying to formulate a way for a capitalist society to exist without a state". The intended implicature was that since anarchy = the absence of a state (according to common understanding, the dictionary definition [https://www.dictionary.com/browse/anarchy], and etymology), it was therefore proper to call them anarchists.

anonea2021 responded with "From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies." I was confused about this, since it didn't seem like a direct response to my claims. I wasn't sure whether to read it as (a) a claim that unjust hierarchies = a state (which seemed like a bad definition of "state"), or (b) a claim that anarchism wasn't actually about the absence of a state but instead about abolishing unjust hierarchies in general (which seemed like a bad, question-begging definition of "anarchism", given that ~everyone wants to minimise unjust hierarchies). I tried to respond to the superposition of these two interpretations, which probably led to my phrasing being more confusing than it needed to be.

As before, this begs the question. Everyone wants to minimise unjust hierarchies, so that's not a useful description of anarchism. People who disagree about which hierarchies are unjust, what interventions are effective for reducing them, and what the costs of those interventions are, will end up advocating for radically different systems of government. Some of those will end up advocating for a society without a state, and it's useful to refer to that subset of positions as "anarchist" even if they are very different from each other.

Anarcho-capitalism is really quite different from other forms of capitalist social organisation, and its distinctive feature is the absence of a coercive state. "Anarcho-capitalism" is thus a completely appropriate name for it – indeed, it's hard to see what

First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth, when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive, so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and it has made me question a few implicit assumptions I was carrying around.

The most important updates I got from the paper:

  1. Put less weight on technological determinism. In particular, defining existential risk in terms of a society reaching "technological maturity" without falling prey to some catastrophe frames technological development as being largely inevitable. But I'd argue even under the "techno-utopian" view, many technological developments are not needed for "technological maturity", or at least not for a very long time. While I still tend to view development of things like advanced AI systems as hard to stop (lots of economic pressures, geographically dispersed R&D, no expert consensus on whether it's good to slow down/accelerate), I'd certainly like to see mor
... (read more)
5 · anonymousEA · 5mo
I think you switched the two by accident. Otherwise an excellent comment; even if I disagree with most of it, have an updoot.
2 · AdamGleave · 5mo
Thanks, fixed.

Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.

One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.

Some highlights:

I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.

On the science side I would be enthusiastic about seeing more work on eg models of catastrophic biorisk infection, macroeconomic analysis on ways artificial intelligence might affect society, and expansions of IPCC models that include permafrost methane release feedback loops.

On the humanities side I would want to see for example more work on historical, psychological and anthropological evidence for long term effects and successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys ... (read more)

1 · jchen1 · 3mo
"I have regularly seen proposals in the community to stop and regulate AI development" - Are there any public ones you can signpost to or are these all private proposals?

The section on expected value theory seemed unfairly unsympathetic to TUA proponents 

  • The question of what we should do with Pascal's mugging-type situations just seems like a really hard, under-researched problem where there are not yet any satisfying solutions.
  • EA research institutes like GPI have put a hugely disproportionate amount of research into this question, relative to the field of decision theorists. Proponents of the TUA, like Bostrom, were the first to highlight these problems in the academic literature.
  • Alternatives to expected value have received far less attention in the literature, and also have many problems
  • eg the solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of affecting the outcome is often very low (eg <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
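To make the threshold worry concrete, here is a minimal sketch in Python of how a probability-threshold rule can discard an action that plain expected value ranks highly. The numbers are purely illustrative (not taken from the paper or this thread): a one-in-ten-million chance of a decisive vote, paired with a stipulated large payoff in arbitrary units.

```python
def expected_value(prob, payoff):
    """Plain expected value: probability times payoff."""
    return prob * payoff

def thresholded_value(prob, payoff, threshold):
    """Under the proposed rule, outcomes less probable than the
    threshold are treated as contributing nothing."""
    return prob * payoff if prob >= threshold else 0.0

# Illustrative numbers only.
prob_decisive = 1e-7       # ~1 in 10 million chance a single vote decides the election
payoff_if_decisive = 1e9   # stipulated value of a decided election (arbitrary units)

print(expected_value(prob_decisive, payoff_if_decisive))           # ~100: plain EV says voting is worthwhile
print(thresholded_value(prob_decisive, payoff_if_decisive, 1e-6))  # 0.0: the threshold rule discards it
```

Whether voting comes out as rational flips entirely on where the threshold is set relative to the probability of being decisive, which is exactly the tension the commenters are debating.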

2 · Jack Malde · 5mo
You seem to assume that voting / engaging in political advocacy are all obviously important things to do and that any argument that says don't bother doing them falls prey to a reductio ad absurdum, but it's not clear to me why you think that. If all of these actions do in fact have incredibly low probability of positive payoff such that one feels they are in a Pascal's Mugging when doing them, then one might rationally decide not to do them. Or perhaps you are imagining a world in which loads of people stop voting such that democracy falls apart. At some point in this world though I'd imagine voting would stop being a Pascal's Mugging action and would be associated with a reasonably high probability of having a positive payoff.
2 · Patrick · 5mo
One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail. If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it's only one sentence plus a citation):
-3 · anonymousEA · 5mo
Which alternatives to EV have what problems for what uses in what contexts? Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data? What makes all alternatives to EV less preferable to the way in which EV is usually used in existential risk scholarship today, where subjectively-generated probabilities are asserted by "thought leaders" with no methodology and no justification, about events that are not rigorously defined nor separable, which are then fed into idealized economic models, policy documents, and press packs?
8 · Will Bradshaw · 5mo
Why is writing a sequence of snarky rhetorical questions preferable to just making counter-arguments?
-15 · anonymousEA · 5mo

Everything written in the post above strongly resonates with my own experiences, in particular the following lines:

the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views.

The EA community prides itself on being able to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.

I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:

  • Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (eg here and here) to this critique to be dismissive and to miss the point. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stu
... (read more)

i do think there is a difference between this article and stuff from people like Torres, in terms of good faith

I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.

I've seen "in bad faith" used in two ways:

  1. This person's argument is based on a lie.
  2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment for more.)

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unr... (read more)

 One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins. 

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Of course, there are many levels to what a "personal vendetta" might entail, and there are real trade-offs to whatever policy you establish. But I'm wary of taking the most extreme approach in any direction ("let's just ignore Phil entirely").... (read more)

6 · Pablo · 5mo
Thanks for the comments. They have helped me clarify my thoughts, though I feel I'm still somewhat confused.

Yes, I agree that this is a concern. I am reminded of an observation by Nick Bostrom [https://www.nickbostrom.com/revolutions.pdf]:

So I recognize both that it is sometimes legitimate (and even required) to refuse to engage with arguments based on how they originated, and that a norm that licenses this behavior has significant abuse potential. I haven't thought about ways in which the norm could be refined, or about heuristics one could adopt to decide when to apply it. I'd like to see someone (Greg Lewis?) investigate this issue more.

I mostly agree. My sense is that we often misclassify as "specific piece[s] of evidence that would be damning if true" things that should be assessed as part of a much larger whole. E.g. it is sometimes relevant to consider the sheer number of things someone has said when deciding how outraged to be that this person said something seemingly outrageous.
9 · weeatquince · 5mo
Agree with this.

Yes I think that is fair.

At the time (before he wrote his public critique) I had not yet realised that Phil Torres was acting in bad faith.

Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).

Strong upvote from me - you’ve articulated my main criticisms of EA.

I think it’s particularly surprising that EA still doesn’t pay much attention to mental health and happiness as a cause area, especially when we discuss pleasure and suffering all the time, Yew Kwang Ng focused so much on happiness, and Michael Plant has collaborated with Peter Singer.

In your view, what would it look like for EA to pay sufficient attention to mental health?

To me, it looks like there's a fair amount of engagement on this:

  • Peter Singer obviously cares about the issue, and he's a major force in EA by himself.
  • Michael Plant's last post got a positive writeup in Future Perfect and serious engagement from a lot of people on the Forum and on Twitter (including Alexander Berger, who probably has more influence over neartermist EA funding than any other person); Alex was somewhat negative on the post, but at least he read it.
  • Forum posts with the "mental health" tag generally seem to be well-received.
  • Will MacAskill invited three very prominent figures to run an EA Forum AMA on psychedelics as a promising mental health intervention.
  • Founders Pledge released a detailed cause area report on mental health, which makes me think that a lot of their members are trying to fund this area.
  • EA Global has featured several talks on mental health.

I can't easily find engagement with mental health from Open Phil or GiveWell, but this doesn't seem like an obvious sign of neglect, given the variety of other health interventions they haven't closely engaged with.

I'm limited her... (read more)

I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.

TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.

I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn't seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time ("we're not sure you can measure feelings", "we're worried about experimenter demand effect", etc.). I'd typically point out their concerns had already been addressed in the literature, but that still didn't seem to make them more interested. (I don't recall anyone ever mentioning 'item response theory', which Luke raises as his objection.) In the end, I got the ... (read more)

Really sad to hear about this, thanks for sharing. And thank you for keeping at it despite the frustrations. I think you and the team at HLI are doing good and important work.

To me (as someone who has funded the Happier Lives Institute) I just think it should not have taken founding an institute and 6 years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.

I think expecting orgs and donors to change direction is certainly a very high bar. But I don’t think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team... (read more)

Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in another comment, my experience with GiveWell staff was of being ignored by people who weren't all that familiar with the relevant literature - FWIW, I don't recall the concerns you raise in your notes being raised with me. I've not had interactions with Open Phil staff prior to 2021 - for those reading, Luke and I have never spoken - so I'm not able to comment regarding that.

Onto the substantive issues. Would you be prepared to more precisely state what your concerns are, and what sort of evidence would change your mind? Reading your comments and your notes, I'm not sure exactly what your objections are and, in so far as I do, they…

Hi Michael,

I don't have much time to engage on this, but here are some quick replies:

  • I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
  • On "weakly validated measures," I'm talking in part about lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
  • On "unconvincing intervention studies" I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I'm more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
  • On "wrong statistical test," I'm referring to the section called "Older studies used inappropriate statistical methods" in the linked conversation notes with Joel H
…

Hello Luke,

Thanks for this too. I appreciate you've since moved on to other things, so this isn't really your topic to engage on anymore. However, I'll make two comments.

First, you said you read various things in the area, including by me, since 2015. It would have been really helpful (to me) if, given you had different views, you had engaged at the time and set out where you disagreed and what sort of evidence would have changed your mind.

Second, and similarly, I would really appreciate it if the current team at Open Philanthropy could more precisely set out their perspective on all this. I did have a few interactions with various Open Phil staff in 2021, but I wouldn't say I've got anything like canonical answers on what their reservations are about 1. measuring outcomes in terms of SWB - Alex Berger's recent technical update didn't comment on this - and 2. doing more research or grantmaking into the things that, from the SWB perspective, seem overlooked.

This is an interesting conversation, but it's veering off into a separate topic. I wish there were a way to "rebase" these spin-off discussions into a different place, for better organisation.

Thank you Luke – super helpful to hear!!

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area? (Founder's Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)

I can't say much more here without knowing the details of how Michael/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was taken seriously, but seen as a weaker thing to fund than the things that were actually funded" (which is, in turn, compatible with "funders were correct in their assessment" and "funders were incorrect in their assessment"). 

That Michael felt dismissed is moderate evidence for "not taken seriously enough". That his work (and other work like it) got a bunch of engagement on the Forum is weak evidence for "taken seriously" (what the Forum cares about =/= what funders care about, but the correlation isn't 0). I'm left feeling uncertain about this example, but it's certainly reasonable to argue that mental health and/or SWB hasn't gotten enough attention.

(Personally, I find the case for additional work on SWB more compelling than the case for additional work on mental health specifically, and I don't know the extent to which HLI was trying to get funding for one vs. the other.)

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area?

Tl;dr. Hard to judge. Maybe: Yes for GW. No for Open Phil. Mixed for EA community as a whole.


I think I will slightly dodge the question and answer a separate one – are these orgs doing enough exploratory-type research? (I think this is a more pertinent question; although I think subjective wellbeing is worth looking into, it is not clear it is at the very top of the list of things to look into more that might change how we think about doing good.)

Firstly, a massive caveat: I do not know for sure. It is hard to judge, and knowing exactly how seriously various orgs have looked into topics is very hard to do from the outside. So take the below with a pinch of salt. That said:

  • OpenPhil – AOK.
    • OpenPhil (neartermists) generally seem good at exploring new areas and experimenting (and as Luke highlights, did look into this).
  • GiveWell – hmmm could do better.
    • GiveWell seem to have a pattern of saying they will do more exploratory research (e.g. into policy) and then not doing it (mention
…
anonymousEA (5mo):
Strong upvote for this if nothing else. (The rest is also brilliant though, thank you so much for speaking up!)

A few thoughts on the democracy criticism. Don't a lot of the criticisms here apply to the IPCC? "A homogenous group of experts attempting to directly influence powerful decision-makers is not a fair or safe way of traversing the precipice."  IPCC contributors are disproportionately white very well-educated males in the West who are much more environmentalist than the global median voter, i.e. "unrepresentative of humanity at large and variably homogenous in respect to income, class, ideology, age, ethnicity, gender, nationality, religion, and professional background." So, would you propose replacing the IPCC with something like a citizen's assembly of people with no expertise in climate science or climate economics, that is representative wrt some of the demographic features you mention? 

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.

You seem to assume that we should be especially suspicious of a view if it is not held by a majority of the global population. Over history, the views of the global majority seem to me to have been an extremely poor guide to accurate moral beliefs. For example, a few hundred years ago, most people had abhorrent views about animals, women and people of other races. By the arguments here, do you think that people like Benjamin Lay, Bentham and Mill should not have advocated for change in these areas, including advocating for changes in policy?

As I said in a different but related context earlier this week, "If a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past."

I do think we should worry about failure modes and being wrong. But I think the main reason to do that is that people are often wrong, they are bad at reasoning, and subject to a host of biases. The fact that we are in a minority of the global population is an extremely weak indicator of being wrong. The majority has been gravely wrong on many moral and empirical questions in the past and today. It's not at all clear whether the base rate of being wrong is higher for 'minority view' than 'majority view', and that question is extremely difficult to answer because there are lots of ways of slicing up the minority you are referring to.

I feel like there's just a crazy number of minority views (in the limit a bunch of psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than majority views.

On the other hand I think there's some distinction to be drawn between "minority view disagrees with strongly held majority view" and "minority view concerns something that majority mostly ignores / doesn't have a view on".

That is a fair point. Departures from global majority opinion still seem like a pretty weak 'fire alarm' for being wrong. Taking a position that is e.g. contrary to most experts on a topic would be a much greater warning sign.

I see how this could be misread. I'll reformulate the statement:
"If our small, non-representative group comes to a conclusion, we should wonder, given base rates about correctness in general and the outside view, about which failure modes have affected similar small groups in the past, and consider if they apply, and how we might be wrong or misguided."

So yes, errors are common to all groups, and being a minority isn't an indicator of truth, which I mistakenly implied. But the way in which groups are wrong is influenced by group-level reasoning fallacies and biases, which are a product of both individual fallacies and characteristics of the group. That's why I think that investigating how previous similar groups failed seems like a particularly useful way to identify relevant failure modes.

John G. Halstead (5mo):
yes I agree with that.
anonea2021 (5mo):
I think it's simplistic to reduce the critique to "minority opinion bad". At the very least, you need to reduce it to "minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by them bad". Bentham argued for diminishing his own privilege over others, to give other people MORE choice, irrespective of their power and wealth and with no benefit to him. There is a difference imo.

My argument here is about whether we should be more suspicious of a view if it is held by the majority or the minority. Whether a view is true seems to me to depend mainly on its object-level quality, not on whether it is held by the majority - that is a very weak indicator, as the examples of slavery, women, racism, homosexuality etc. illustrate.

I don't think your piece argues that TUA reinforces existing power relations. The main things that proponents of TUA have diverted resources to are: engineered pandemics, AI alignment, nuclear war and to a lesser extent climate change. How does any of this entrench existing power relations?

Nitpick, but it is also not true that the view you criticise is mainly advocated for by billionaires. Obviously, a tiny minority of billionaires are longtermists and a tiny minority of longtermists are billionaires.

Guy Raveh (5mo):
This is moving money that would've otherwise gone to the global poor to mostly wealthy, Western organisations and researchers. So the counterfactual impact is of entrenching wealth disparity.

I think it is very unclear whether it is true that diverting money to these organisations would entrench wealth disparity. Examining the demographics of the organisations funded is a faulty way to assess the overall effect on global wealth inequality - the main effect these organisations will have is via the actions they take rather than the take-home pay of their staff.

Consider pandemic risk. Open Phil has been the main funder in this space for several years and if they had their way, the world would have been much better prepared for covid. Covid has been a complete disaster for low and middle-income countries, and has driven millions into extreme poverty. I don't think the net effect of pandemic preparedness funding is bad for the global poor. Similarly, with AI safety, if you actually believe that transformative AI will arrive in 20 years, then ensuring the development of transformative AI goes well is extremely consequential for people in low and middle-income countries. 

I did not mean the demographic composition of organisations to be the main contributor to their impact. Rather, what I'm saying is that that is the only impact we can be completely sure of. Any further impact depends on your beliefs regarding the value of the kind of work done.

I personally will probably go to the EA Long Term Future Fund for funding in the not so distant future. My preferred career is in beneficial AI. So obviously I believe the work in the area has value that makes it worth putting money into.

But looking at it as an outsider, it's obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. Rather, if you think AI risk - or even existential risk as a whole - is some orders of magnitude less important than it's laid out to be in EA - then the only straightforward impact of supporting X-risk research is in who gets the money and who does not. If you think any AI research is actually harmful, then the expected value of funding this is even worse.

ESRogs (5mo):
Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate? I don't think either claim is true (or even close to true).

It's also not the claim being made:

...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...

ESRogs (4mo):
You're right, my mistake.
James Ozden (5mo):
I had the same reaction as this, in that the dominant worldview today views extreme levels of animal suffering as acceptable, but most of us would agree it's not, and believe we should do our utmost to change it.

I think the difference between the examples you've mentioned and the parallel to existential risk is with the qualifier Luke and Carla provided in the text (emphasis mine): Where the key difference is that the study of existential risk is tied to the fate of humanity in ways that animal welfare, misogyny and racism aren't (arguably the latter two examples might influence the direction of humanity significantly but probably not whether humanity ceases to exist).

I'm not necessarily convinced that existential risk studies is so different to the examples you've mentioned that we need to approach it in a much more democratic way, but I do think the qualifiers given by the authors mean the analogies you've drawn aren't that water-tight.
Dr. David Mathers (5mo):
Most whites had abhorrent views on race at certain points in the past (probably not before 1500 though, unless Medieval antisemitism counts), but that is weak evidence that most people did, since whites were always a minority. I'm not sure many of us know what, if any, racial views people held in Nigeria, Iran, China or India in 1780.

I seem to remember learning about rampant racism in China helping to cause the Taiping rebellion? And there are enormous amounts of racism and sectarianism today outside Western countries - look at the Rohingya genocide, the Rwanda genocide, the Nigerian civil war, the current Ethiopian civil war, and the Lebanese political crisis for a few examples.

Every one of these examples should be taken with skepticism as this is far outside my area of expertise. But while I agree with the sentiment that we often conflate the history of the world with the history of white people, I'm not sure it's true in this specific case.

Dr. David Mathers (5mo):
Yeah, you're probably right. It's just I got a strong "history=Western history" vibe from the comment I was responding to, but maybe that was unfair!

I'd be pretty surprised if almost everyone didn't have strongly racist views in 1780. Anti-black views are very prevalent in India and China today, as I understand it. E.g. Gandhi had pretty racist attitudes.

jackva (5mo):
I think there is a "not" missing: "view if it is held by a majority of the global population."
John G. Halstead (5mo):
sorry, yeah corrected

Minor point, but I don't think you've described citizen's assemblies in the most charitable way. Yes, it is a representative sortition of the public, so they don't necessarily have expertise in any particular field, but there is generally a lot of focus on experts from various fields who inform the assembly. So in reality, a citizen's assembly on climate would be a random selection of representative citizens who would be informed/educated by IPCC (or similar) scientists, and who would then deliberate amongst themselves to reach their conclusions. These conclusions, one would hope, would be similar to what the scientists would recommend themselves, as they are based on information largely provided by them.

For people that might be interested, here is the report of the Climate Assembly (a citizen's assembly on climate commissioned by the UK government) that in my opinion, had some fairly reasonable policy suggestions. You can also watch a documentary about it by the BBC here.

The paper never spoke about getting rid of experts or replacing experts with citizens. So no.

Many countries now run citizen assemblies on climate change, which I'm sure you're aware of. They do not aim to replace the role of IPCC. 

EA or the field of existential risk cannot be equated with the IPCC. 

To your second point, no, this does not follow at all. Democracy as a procedure is not to be equated (and thus limited) to governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the paper's last section, which focusses on democracy.

It would help for clarity if I understood your stance on central bank independence. This seems to produce better outcomes but also seems undemocratic. Do you think this would be legitimate?

It still seems like, if I were Gates, donating my money to the US govt would be more democratic than e.g. spending it on climate advocacy? Is the vision for Open Phil that they set up a citizen's assembly that is representative of the global population and have that decide how to spend the money, by majority vote?

As in the discussion above, I think you're being disingenuous by claiming government is "more democratic." 

And if you were Gates, I'd argue that it would be even more democratic to allow the IPCC, which is more globally representative and less dominated by special interests than the US government, to guide where you spend your money than it would be to allow the US government to do so. And given how much the Gates Foundation engages with international orgs and allows them to guide its giving, I think that "hand it to the US government" would plausibly be a less democratic alternative than the current approach, which seems to be to allow GAVI, the WHO, and the IPCC to suggest where the money can best be spent.

And having Open Phil convene a consensus driven international body on longtermism actually seems somewhat similar to what the CTLR futureproof report co-written by Toby Ord suggests when it says the UK should lead by, "creating and then leading a global extreme risks network," and push for "a Treaty on the Risks to the Future of Humanity." Perhaps you don't think that's a good idea, but I'm unclear why you would treat it as a reductio, except in the most straw-man form.

Hi David, I wasn't being disingenuous. Here, you say "I think you're being disingenuous by claiming government is "more democratic."  In your comment above you say "One way to make things more democratic is to have government handle it, but it's clearly not the only way." Doesn't this grant that having the government decide is more democratic? These statements seem inconsistent. 

So, to clarify before we discuss the idea, is your view that all global climate philanthropy should be donated to the IPCC?

I think there is a difference between having a citizen's assembly decide what to do with all global philanthropic money (which as I understand it, is the implication of the article), and having a citizen's assembly whose express goal is protecting the long-term (which is not the implication of the article). If all longtermist funding was allocated on the first mechanism, then I think it highly likely that funding for AI safety, engineered pandemics and nuclear war would fall dramatically. 

The treaty in the CTLR report seems like a good idea but seems quite different to the idea of democratic control proposed in the article.

Davidmanheim (5mo):
Comparing how democratic government is to different things yields different results, because democratic isn't binary. Yes, unitary action by a single actor is less democratic than having government handle things, and no, having the US government handle things is not clearly more democratic than deferring to the IPCC. But, as I'm sure you know, the IPCC isn't a funding body, nor does it itself fight climate change. So no, obviously climate philanthropy shouldn't all go to them.

No, and clearly you need to go re-read the paper. You also might want to look into the citations Zoe suggested that you read above, about what "democratic" means, since you keep interpreting it in the same simplistic and usually incorrect way, as equivalent to having everyone vote about what to do. This goes back to the weird misunderstanding that democratic is binary, and that it always refers to control.

First, global engagement and the treaty are two different things they advise the UK government on. Second, I'm sure that the authors can say for themselves whether they see international deliberations and treaties as a way to more democratic input, but I'd assume that they would say that it's absolutely a step in the direction they are pointing towards.

Hi David. We were initially discussing whether giving the money to govts would be more democratic. You suggested this was a patently mad idea but then seemed to agree with it. 

Here is how the authors define democracy: "We understand democracy here in accordance with Landemore as the rule of the cognitively diverse many who are entitled to equal decision-making power and partake in a democratic procedure that includes both a deliberative element and one of preference aggregation (such as majority voting)"

You say: "You also might want to look into the citations Zoe suggested that you read above, about what "democratic" means, since you keep interpreting in the same simplistic and usually incorrect way, as equivalent to having everyone vote about what to do."

Equal political power and preference aggregation entail majority rule, lottery voting, or sortition. Your own view that equal votes aren't a necessary condition of democracy seems to be in tension with the authors of the article.

A lot of the results showing the wisdom of democratic procedures depend on certain assumptions, especially about voters not being systematically biased. In the real world, this isn't true, so…

You're using a word differently than they explicitly say they are using it. I agree that it's confusing, but will again note that consensus decision making is democratic in the sense they use, and yet is none of the options you mention. (And again, the IPCC is a great example of a democratic deliberative body which seems to fulfill the criteria you've laid out, and it's the one they cite explicitly.)

On the validity and usefulness of democracy as a method of state governance, you've made a very reasonable case that it would be ineffective for charity, but in the more general sense that Landemore uses it, which includes how institutions other than governments can account for democratic preferences, I'm not sure that the same argument applies.

That said, I strongly disagree with Cremer and Kemp about the usefulness of this approach on very different grounds. I think that both consensus and other democratic methods, if used for funding rather than for governance, would make hits-based giving and policy entrepreneurship impossible, not to mention being fundamentally incompatible with finding neglected causes.

James Ozden (5mo):
I think your Open Phil example could be an interesting experiment. Do you think that if Open Phil commissions a citizen's assembly to allocate their existential risk spending, with input given by their researchers / program officers, it would be wildly different to what they would do themselves?

In any scenario, I think it would be quite interesting, as surely if our worldviews and reasoning are strong enough to claim big unusual things (e.g. strong longtermism), we should be able to convince a random group of people of them? And if not, is that a problem with the people selected, our communication skills or the thinking itself? I personally don't think it would be a problem with the people (see past successes of citizen's assemblies: https://www.involve.org.uk/resources/blog/opinion/citizens-assembly-behind-irish-abortion-referendum and https://www.climateassembly.uk/recommendations/index.html)*, so shouldn't we be testing our theories to see if they make sense under different worldviews and demographic backgrounds? And if they don't seem robust to other people, we should probably try to integrate the reasons why (within reason of course).

*There's probably an argument to be made here that we don't necessarily expect the allocation from this representative group, even when informed perfectly by experts, to be the optimal allocation of resources, so we're not maximising utility / doing the most good. This is probably true, but I guess balancing this against moral uncertainty is the trade-off we have to live with? Quite unsure on this though, seems fuzzy.

Hi James, I do think it would be interesting to see what a true global citizen's assembly with complete free rein would decide. I would prefer that the experiment were not done with Open Phil's money as the opportunity cost would be very high. A citizen's assembly with longtermist aims would also be interesting, but would be different to what is proposed in the article. Pre-setting the aims of such an assembly seems undemocratic.

I would be pretty pessimistic about convincing lots of people of something like longtermism in a citizen's assembly - at least I think funding for things like AI, engineered viruses and nuclear war would fall a fair amount. The median global citizen is someone who is strongly religious, probably has strong nationalist and socialist beliefs (per the literature on voter preferences in rich countries, which is probably true in poorer countries), unwilling to pay high carbon taxes, homophobic etc. 

For what it's worth, I wasn't genuinely saying we should hold a citizen's assembly to decide what we do with all of Open Phil's money, I just thought it was an interesting thought experiment. I'm not sure I agree that the pre-setting of the aims of an assembly is undemocratic, however, as surely all citizen's assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).

To play devil's advocate, I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where they should allocate funding for existential risk. I can't see any relationship between beliefs about which existential risks are the most severe and attitudes to queer people, religion, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn't affect their taxes).

Also, I don't think you've given much convincing evidence that citizen's assemblies would lead to funding for key issues falling a fair amount vs decisions by OP program officers, besides your intuition. I can't say I have much evidence myself except for th…

GMcGowan (5mo):
I vaguely remember reading something about religious people worrying less about extinction, but I don't remember whether that was just intuition or an actual study. They may also be predisposed to care less about certain kinds of risk, e.g. not worrying about AI as they perceive it to be impossible. (these are pretty minor points though)

I think you're unaware of the diversity and approach of the IPCC. It is incredibly interdisciplinary, consensus driven, and represents stakeholders around the world faithfully. You should look into what they do and their process more carefully before citing them as an example.

Then, you conflated "democratically" with "via governments, through those government's processes" which is either a bizarre misunderstanding, or a strange rhetorical game you're playing with terminology.

As mentioned, the vast majority of the authors are from a similar demographic background to EAs. The IPCC also produces lots of policy-relevant material on eg the social costs of climate change and the best paths to mitigation, which are mainly determined by white males.

Here is a description of climate philanthropy as practiced today in the United States. Lots of  unelected rich people who disproportionately care about climate change spend hundreds of millions of pounds advocating for actions and policies that they prefer. It would be a democratic improvement to have that decision made by the US government, because at least politicians are subject to competitive elections. So, having the decision made by the US government would be more democratic. Which part of this do you disagree with?

It seems a bit weird to class this as a 'bizarre misunderstanding' since many of the people who make the democracy criticism of philanthropy, such as Rob Reich, do in fact argue that the money should be spent by the government.   

Davidmanheim (5mo):
"the vast majority of the authors are from a similar demographic background to EAs... mainly determined by white males"

A key difference is having both representation of those with other perspectives and interests, and a process which is consensus driven and inclusive.

"It would be a democratic improvement to have that decision made by the US government, because at least politicians are subject to competitive elections. So, having the decision made by the US government would be more democratic. Which part of this do you disagree with?"

One way to make things more democratic is to have government handle it, but it's clearly not the only way. Another way to be more democratic would, again, be to be more broadly representative and consensus driven.

(And the switch from "IPCC" to "climate philanthropy as practiced today in the United States" was definitely a good rhetorical trick, but it wasn't germane to either the paper's discussion of the IPCC, or your original point, so I'm not going to engage in discussing it.)
John G. Halstead (5mo):
In the second bit, I wasn't talking about the IPCC, I was talking about your second point: "you conflated 'democratically' with 'via governments, through those government's processes'". The reason I mentioned climate philanthropy is that it is what I mentioned in my original comment you responded to: if you think philanthropy is undemocratic, then that also applies to climate philanthropy, which Luke Kemp is strongly in favour of, so this is an interesting test case for their argument.

First, are you backing away from your initial claims about the IPCC, since it is in fact consensus-based with stakeholders, rather than being either a direct democracy or a unilateralist decision?

Second, I'm not interested in debating what you say Luke Kemp thinks about climate philanthropy, nor do I know anything about his opinions, nor is it germane to this discussion.
But in your claims that you say are about his views, you keep claiming and assuming that the only democratic alternatives to whatever we're discussing are a direct democracy, control by a citizens' assembly (without expertise), or handing things to governments. Regardless of Luke's views elsewhere, that's certainly not what they meant in this paper. Perhaps this quote will be helpful:

We understand democracy here in accordance with Landemore as the rule of the cognitively diverse many who are entitled to equal decision-making power and partake in a democratic procedure that includes both a deliberative element and one of preference aggregation (such as majority voting)

As Landemore, whom the paper cites several times, explains, institutions work better when technocratic advice operates within the context of an inclusive decision procedure, rather than either having technocrats in charge or having a direct democracy.

Hello, yes, I think it would be fair to back away a bit from the claims about the IPCC. It remains true that most climate scientists and economists are white men and that they have a disproportionate influence on the content of the IPCC reports. Nonetheless, the case was not as clear-cut as I initially suggested.

I find the second point a bit strange. Isn't it highly relevant to understand whether the views of the author of the piece we are discussing are consistent or not? 

It's also useful to know what the implications of the ideas expressed actually are. They explicitly give a citizens' assembly as an example of a democratic procedure. Even if it is some other deliberative mechanism followed by a majority vote, I would still like to know what they think about stopping all climate philanthropy and handing decisions over all that money to such a body. It's pretty hard to square a central role for expertise with a normative idea of political equality.

[-12] Davidmanheim, 5mo

nor is it germane to this discussion

I do think it is germane to the discussion, because it helps to clarify what the authors are claiming and whether they are applying their claims consistently. 

[+7] Davidmanheim, 5mo
I was discussing this paper, which doesn't discuss climate philanthropy, not everything they have ever stated. I don't know what else they've claimed, and I'm not interested in a discussion of it.

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically.

I'm not fully sure that deciding which risks to take seriously in a democratic fashion logically leads to donating all of your money to the government. Some reasons I think this:

  • That implies that we all think our governments are well-functioning democracies, but I (amongst many others) don't believe that to be true. I think it's a fairly common sentiment that political myopia by politicians, vested interests, and other influences mean that governments don't implement the policies that are best for their populations.
  • As I mentioned in another comment, I think the authors are saying that as existential risks affect the entirety of humanity in a unique way, this is one particular area where we should be deciding things more democratically. This isn't necessarily the case for spending on education, healthcare, animal welfare, etc, so there it would make sense you donate to institu
... (read more)
[+7] anonea2021, 5mo
Not really. The IPCC is not a political think tank (even though climate risk deniers and minimisers might like to claim it is); it is funded at least 65% by nation states and the UN [https://www.ipcc.ch/site/assets/uploads/2018/04/150820170305-Doc.-8-Report-on-the-Financial-Stability-of-the-IPCC.pdf] (44% by the USA in 2018, 25% by the next largest contributors), inheriting their democratic legitimacy with respect to funding, and it fundamentally deals with something much more narrowly defined, empirically verifiable, and graspable than the TUA's main causes. It suffers from a lot of the same problems w.r.t. representation and democracy as all of science and society does, but it's not nearly as donor-alignment-driven as the targets of the article.
[+4] MatthewDahlhausen, 5mo
The IPCC reports have hundreds of authors from all over the world: https://www.ipcc.ch/authors/. It is misleading to say the IPCC is homogeneous and that the authors are "disproportionately white very well-educated males in the West". Every country and a large variety of civil institutions are represented at the conference of parties, and they use a consensus process.
[+2] Guy Raveh, 5mo
As always, I don't think this implication is necessarily bad. Individual philanthropy is unsustainable and undemocratic, and indeed might do more harm than good in that, in public perception, it takes the weight off the government's shoulders while in practice contributing only a fraction of the effort. I don't think this is the only side of the story, of course. I donate myself to organisations which I believe do important work but aren't in the consensus. I think governments are good for making some democratic decisions but are very undemocratic on others, e.g. by only representing the country's population and not the entire world's, or by neglecting to represent animals or future generations. And I think organisations that operate independently of the government are good for putting checks and balances on it and preventing consolidation of power.

Regarding the risk that longtermism could lead people to violate rights, it seems to me that you could make exactly the same argument for any view that prioritises between different things. For instance, as Peter Singer has pointed out, billions of animals are tortured and killed every year. By exactly analogous reasoning, one could say that other problems 'dwindle into irrelevance' as other values are sacrificed at the altar of the astronomical expected value of preventing factory farming. So, this would justify animal rights terrorism and other abhorrent actions.

"Don't be fanatical about utilitarian or longtermist concerns and don't take actions that violate common sense morality"  is a message that longtermists have been emphasizing from the very beginnings of this social movement, and quite a lot.

Some examples: 

More generally, there's often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.

I haven't read the corresponding section in the paper that the OP refers to, yet, but I skimmed the literature section and found none of the sources I linked to above. If the paper criticizes longtermism on grounds of this sort of implication and fails to mention that longtermists have been aware of this and are putting in a lot o... (read more)

I also agree with this. There are many reasons for consequentialists to respect common sense morality. 

I was just making the point that the rhetorical argument about rights can pretty much be made about any moral view, e.g. the authors seem to believe that degrowth would be a good idea, and it is a built-in feature of degrowth that it would have enormous humanitarian costs.

[-50] anonymousEA, 5mo

I agree that there is an analogy to animal suffering here, but there's a difference in degree I think. To longtermists, the importance of future generations is many orders of magnitude higher than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore other non-longtermist considerations than animal welfare advocates would be.

[+2] MichaelStJules, 5mo
Depending on the view, legitimate self-defence and "other-defence" don't violate rights at all, and this seems close to common sense when applied to protect humans. Even deontological views could in principle endorse - but I think in practice today should condemn - coercively preventing individuals from harming nonhuman animals, including farmed animals, as argued in this paper [https://journalofcontroversialideas.org/article/1/1/137/htm], published in the Journal of Controversial Ideas, a journal led and edited by McMahan, Minerva and Singer [https://journalofcontroversialideas.org/page/133]. Of course, this conflicts with the views of most humans today, who don't extend similarly weighty rights/claims to nonhuman animals. EDIT: I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when you may have intended it to be interpreted legally.

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.

In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don't have different but comparably serious problems. But this assumption can't be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.

[+5] MichaelStJules, 5mo
I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans' legal rights to protect nonhuman animals or future people could be morally permissible.

I agree. I think rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent the violation of rights of future people, preventing someone's birth would not violate that then non-existent person's rights, and this is an important distinction to make. Involuntary extinction would plausibly violate many people's rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least limit aggregation), so while preventing extinction could be good on such views, it's not clear it would deserve the kind of priority it gets in EA. See this paper [https://www.cambridge.org/core/journals/canadian-journal-of-philosophy/article/abs/whats-wrong-with-human-extinction/D836D5BC13C24FE1DF2F144E40FAB728], for example, which I got from one of Torres' articles and which takes a contractualist approach. I think a rights-based approach could treat it similarly. It could also be the case that procreation violates the rights of future people pretty generally in practice, and then causing involuntary extinction might not violate rights at all in principle, but I don't get the impression that this view is common among deontologists and contractualists or people who adopt some deontological or contractualist elements in their views. I don't know how they would normally respond to this. Considering "innocent threats" complicates things further, too, and it looks like there's disagreement over the permissibility of harming innocent threats to prevent harm caused by them.

I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and wh
[+4] MichaelStJules, 5mo
Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or not threats seems likely to violate rights and be impermissible on rights-based (and contractualist) views. This seems likely to apply to massive global surveillance and bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat and harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess statistical arguments about the probability of a random person being a threat are based on interpretations of these views that the people holding them would reject, or that the probability for each person being a threat would be too low to justify the harm to that person. So, what kinds of objectionable harms could be justified on such views? I don't think most people would qualify as serious enough threats to justify harm to them to protect others, especially people in the far future.
[+2] Pablo, 5mo
This seems like a fruitful area of research—I would like to see more exploration of this topic. I don't think I have anything interesting to say off the top of my head.

@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussion, separating debate over community norms from discussion of the arguments in your piece.

I also wanted to express that I'm sorry this experience has been so stressful. It's crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.

Hey Zoe and Luke, thank you for posting this and for writing the paper! I just finished reading it and found it thoughtful, detailed, and it gave me a lot to think about. It is the best piece of criticism I have read, and will recommend it to others looking for that going forward. I can see the care, time, and revisions that went into the piece. I am very sorry to hear about your experience of writing it. I think you contributed something important, and wish you had been met with more support. I hope the community can read this post and learn from it so we can get a little closer to that ideal of how to handle, incorporate, and respond to criticism. 

Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.

Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.

I think you contributed something important, and wish you had been met with more support. 

It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.

From the original post:

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant. 

While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper. 

Still, to the degree that ther... (read more)

This is a great comment, thank you for writing it. I agree - I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite. 

I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening. 

I would caution against centering the discussion only on the question of whether or not OpenPhil would reduce funding in res... (read more)

It's hard to see what is going on and this is producing a lot of heat and speculation. I want to present an account or framing below. I want to see if this matches with your beliefs and experiences. 

I'd like to point out that the below isn't favorable to you, basically, but I don't have any further deliberate goal, and I have little knowledge in this space.

 

Firstly, to try to reduce the heat, I will change the situation to the cause area of global health:

For background, note that Daron Acemoglu, who is really formidable, has criticized EA global health and development.
 

 

Basically, Acemoglu believes the randomista approaches used in EA could be net negative for human welfare because they supplant health institutions, reduce the functionality of the state, and slow economic development. It's also hard to measure. I don't agree, and most EAs don't agree.

 

The account begins: imagine that, with dramatically increased funding, GiveWell expands and hires a bunch of researchers. GiveWell is more confident and hires less orthodox researchers who seem passionate and talented.

One year later, the very first paper of one of the newly hired researchers makes a strong negative view ... (read more)

[-3] [anonymous], 19d
What on Earth is this thinly-veiled attempt at character assassination? Do you actually have any substantive evidence that your "account" is accurate (your disclaimer at the start suggests not), or are you just fishing for a reaction? Honestly, what did you hope to gain from this? You think this researcher is just gonna respond and say "Yep, you're right, I'm ill-fit for my job and incapable of good academic work!"

"Not favorable to you" is the understatement of the century. Not to mention your change of the area concerned in no way lowers the temperature. It just functions as CYA to avoid making forthright accusations that this researcher's actual boss might then be called upon to publicly refute. This is one of the slimiest posts I've ever seen, to be perfectly honest.

Edit: I would love to see anyone who has downvoted this post explain why they think the above is defensible and how they'd react if someone did it to them.
[-4] Charles He, 19d
Nah
[+4] [anonymous], 19d
Nah what? Nah you don’t have any evidence? That would confirm my prior. Now why don’t you explain what you hoped to get out of that comment besides being grossly insulting to someone you don’t know on no evidential basis.
[+3] Charles He, 19d
I don't agree with your comment on its merits, or with the practice of confronting someone this way with an anonymous throwaway. (It's unclear, but it may be unwise, though worthwhile, for me to think about the consequent effects of this attitude.) But it seems justifiable that throwaways that open this sort of debate (its quality being a matter of perspective we won't agree on) can be treated quite dismissively. If you want, you can write with your real name (or PM me) and I will respond, if that's what you really want. Also, the downvote(s) on your comment(s) are mine.
[+8] [anonymous], 19d
I would be more worried about making comments of the kind that you produced above under my real name. Your comment was full of highly negative accusations about a named poster’s professional life and academic capabilities, veiled under utterly transparent hypotheticals, made in public. You offered no evidence whatsoever for any of these accusations nor did you even attempt to justify why that sort of engagement was warranted. Airing such negative judgments publicly and about a named person is an extremely serious matter whether you have evidence or not. I don’t think that you treated it with a fraction of the seriousness it deserves. I honestly have negative interest in telling you my real name after seeing how you treat others in public, much less making an account here with my real name attached to it. I would prefer to limit your ability to do reputational damage (to me or others) on spurious or non-existent grounds as far as possible. I am honestly extremely curious as to why you thought what you did above was remotely acceptable, but I am not willing to put myself in the line of fire to find out.
[+3] Charles He, 19d
I think that you think I don't like your comments, but this isn't close to true. I really hope you will put your real name so I can give a real response. (I wouldn't share your name and generally wouldn't use PII if you PMed me.)
[+5] [anonymous], 19d
Well, thanks for that. Admittedly, the downvotes seemed like good evidence to the contrary. Unfortunately, I also couldn’t really give you my real name even if I wanted to, because the name of this account shares the name of my online persona elsewhere and I place a very high premium on anonymity. If I had thought to give it a different name, then I’d probably just PM you my real name. But I didn’t think that far ahead. Anyway, whatever else may be, I’m sorry that I came in so hot. Sometimes I just see something that really sets me off and I consequently approach things too aggressively for my own (and others’) good.
[+1] Charles He, 15d
Many of the comments in this comment chain, including the original narrative I wrote [https://forum.effectivealtruism.org/posts/gx7BEkoRbctjkyTme/democratising-risk-or-how-ea-deals-with-critics-1?commentId=KJQnnc7ndLQqMD7Td], which I view as closer to reality (as opposed to the implicit, difficult narrative I see in the OP, which seems highly motivated and which I find contradicted in the OP's subsequent comments), have been visited by likely a single person, who has made strong downvotes and strong upvotes of magnitude 9.

While I am totally petty and vain, I don't usually comment on upvotes or downvotes, because it seems sort of unseemly (unless it is hilarious to do so). In this case, because of the way strong upvotes are designed, there appear to be literally only 4 accounts who could have this ability, and their judgement is well respected.

So I address you directly: if you have information about this, especially object-level information about the underlying situation relative to my original narrative, it would be great to discuss. The underlying motivation is that truth is a thing, and in some sense having the recent commenter come in and stir this up was useful.
[+1] Charles He, 15d
In an even deeper sense, as we all agree, EA isn't a social club for people who got here first. EA doesn't belong to you or me, or even a large subset of the original founders (to be clear, for all intents and purposes, all reasonable paths will include their leadership and guidance for a long time).

Importantly, I think some good operationalizations of the above perspective, combined with awareness of all the possible instantiations of EA and the composition of people and attitudes, would rationalize a different tone and culture than currently exists. So, RE: "I would be more worried about making comments of the kind that you produced above under my real name": I think this could be exactly, perfectly the opposite of what is true, yet it is one of the comments you strong-upvoted.

To be even more direct: I suspect, but I am unsure, that the culture of discussion in EA has accumulated defects that are costly to effectiveness and truth (under the direct tenure of one of the four people who could have voted +/-9, by the way). So the most important topic here might not be about the OP at all, which I view as just one instance of an ongoing issue; in a deep sense, it was really about the very person who came in and strong-voted! I'm not sure you see this (or that I see this fully either). From the very beginning, I specifically constructed this account and persona to interrogate whether this is true, or something.
[+1] Charles He, 15d
Circling back to the original topic. The above perspective, the related hysteresis, and the consequent effects imply that my narrative in this thread, or I myself, should be challenged or removed if wrong. But I can't really elaborate on my narrative [https://forum.effectivealtruism.org/posts/gx7BEkoRbctjkyTme/democratising-risk-or-how-ea-deals-with-critics-1?commentId=KJQnnc7ndLQqMD7Td]. I can't defend myself, because doing so slags the OP, which isn't appropriate and opens wounds, which is unfair and harmful to everyone involved (but I sort of hoped the new commenter was the OP or a friend, which would have waived this, and that's why I wanted their identity). But you, the strong downvoter/upvoter, the +9 dude: this is a really promising line of discussion. So come and reply?
[+5] Linch, 19d
I think it's reasonable to not want to respond to an anonymous throwaway, but not reasonable to ask them to PM you their real name.
[+1] Charles He, 19d
So, there is some normal sense in which I might have a reason to want them to "legitimize" their criticism by identifying themselves (this reason is debatable; it could be weak or very strong). But the first comments from this person aren't just vitriolic and a personal attack; they are adamant demands for a significant amount of writing. They disagree greatly with me, so the explanation needed to bridge the opinions could be very long. The content of this writing has consequences, which are hidden to people without the explanation.

Here, I have special additional reasons to want to know their identity, because the best way to communicate the underlying events and what my comment meant depends on who they are. Some explanations or accounts will be inflammatory, and others useless. For example, the person could be entirely new to EA, or be the OP themselves. Certain explanations, justifications, or "evidence" could be hurtful and stir up wounds. Others won't make sense at all. In this situation, it's reasonable to see the commenter's demands as imposing further, additional burdens on me of having to weigh this harm (just to defend my comment), which is hidden from them.

Separately and additionally, I probably view this as particularly unfair, since, from my perspective, the very reason I commented, and why things are so problematic and sensitive, was that the original environment around the post was inflammatory and hard to approach by design.
[+1] Linch, 19d
Hmm I think I have some different ideas about discussion norms but not sure if I understand them coherently myself/think it's worth going into. I agree it's often worthwhile to not engage. I agree with this.

Thanks for writing this reply, I think this is an important clarification.

Points where I agree with the paper:

  • Utilitarianism is not any sort of objective truth, in many cases it is not even a good idea in practice (but in other cases it is).
  • The long-term future, while important, should not completely dominate decision making.
  • Slowing down progress is a valid approach to mitigating X-risk, at least in theory.

Points where I disagree with the paper:

  • The paper argues that "for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent". I think it is completely clear, given that in pre-industrial times most people lived in societies that were rather unfree and unequal (it is harder to say about "virtue", since different people would argue for very different conceptions of what virtue is). Moreover, although intellectuals argued for all sorts of positions (words are cheap, after all), few people are trying to return to pre-industrial life in practice. Finally, techno-utopian visions of the future are usually very pro-freedom and are entirely consistent with groups of people voluntarily choosing to live in primitivist communes or whatever.
  • If ideas are promoted by an "elitist" minority, that doesn't automa
... (read more)

I enjoyed some of the discussion of emergency powers. It could be good to mention the response to covid. Leaving to one side whether such policies were justified (they do seem to have saved many lives), country-wide lockdowns were surely among the most illiberal policies enacted in history, and they were explicitly motivated by trying to address a global disaster. Outside of genocide and slavery, I struggle to think of many greater restrictions on individuals' freedom than confining essentially the entire population to semi house arrest. In many cases these rules were brought in under special emergency powers, and some were later determined to be illegal after judicial review. However, these policies were often extremely popular with the general population, so I'm not sure they fit the democracy-vs-illiberalism dichotomy the article is sort of going for.

EDIT: See Ben's comment in the thread below on his experience as Zoe's advisor and confidence in her good intentions.

(Opening disclaimer: this was written to express my honest thoughts, not to be maximally diplomatic. My response is to the post, not the paper itself.)

I'd like to raise a point I haven't seen mentioned (though I'm sure it's somewhere in the comments). EA is a very high-trust environment, and has recently become a high-funding environment. That makes it a tempting target for less intellectually honest or pro-social actors.

If you just read through the post, every paragraph except the last two (and the first sentence) is mostly bravery claims (from SSC's "Against Bravery Debates"). This is a major red flag for me reading something on the internet about a community I know well. It's much easier to start an online discussion about how you're being silenced than to defend your key claims on the merits.  Smaller red flags were: explicit warnings of impending harms if the critique is not heeded, and anonymous accounts posting mostly low-quality comments in support of the critique (shoutout to "AnonymousEA"). 

A lot of EAs have a natural tendency to defend someone wh... (read more)

I believe these are authors already working at EA orgs, not "brave lone researchers" per se.

Thanks - I meant "lone" as in one or two researchers raising these concerns in isolation, not to say they were unaffiliated with an institution. 

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above,  and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me.

And since I've stated my suspicions, I apologize to Zoe if their claims turn out to be substantiated. This is an extremely important post if true, although I remain skeptical.

In particular, a post of the form: 

I have written a paper (link). 

(12 paragraphs of bravery claims)

(1 paragraph on why EA is failing)

(1 paragraph call to action)

Strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their f... (read more)

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above,  and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me…. [The post] strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their frustrations.

(Hopefully I'm not overstepping; I’m just reading this thread now and thought someone ought to reply.)

I’ve worked with Zoe and am happy to vouch for her intentions here; I’m sure others would be as well. I served as her advisor at FHI for a bit more than a year, and have now known her for a few years. Although I didn’t review this paper, and don’t have any detailed or first-hand knowledge of the reviewer discussions, I have also talked to her about this paper a fe... (read more)

RAB:
Thanks Ben! That's very helpful info. I'll edit the initial comment to reflect my lowered credence in exaggeration or malfeasance.

Thanks for sharing this! Responding to just some parts of the object-level issues raised by the paper (I only read parts closely, so I might not have the full picture)--I find several parts of this pretty confusing or unintuitive:

  • Your first recommendation in your concluding paragraph is: "EA needs to diversify funding sources by breaking up big funding bodies." But of course "EA" per se can't do this; the only actors with the legal authority to break up these bodies (other than governments, which I'd guess would be uninterested) are these funding bodies themselves, i.e. mainly OpenPhil. Given the emphasis on democratization and moral uncertainty, it sounds like your first recommendation is a firm assertion that two people with lots of money should give away most of their money to other philanthropists who don't share their values, i.e. it's a recommendation that obviously won't be implemented (after all, who'd want to give influence to others who want to use it for different ends?). So unless I've misunderstood, this looks like there might be more interest in emphasizing bold recommendations than in emphasizing recommendations that stand a chance of getting implemented. And that
... (read more)

Other thoughts:

  • Some other comment hinted at this: another frame that I'm not sure this paper considers is that non-strong-longtermist views are in one sense very undemocratic--they drastically prioritize the interests of very privileged current generations while leaving future generations disenfranchised, or at least greatly under-represented (if we assume there'll be many future people). So characterizing a field as being undemocratic due to having longtermism over-represented sounds a little like calling the military reconstruction that followed the US civil war (when the Union installed military governments in defeated Southern states to protect the rights of African Americans) undemocratic--yes, it's undemocratic in a sense, but there's also an important sense in which the alternative is painfully undemocratic.
    • How much we buy my argument here seems fairly dependent on how much we buy (strong) longtermism. It's intuitive to me that (here and elsewhere) we won't be able to fully answer "to what extent should certain views be represented in this field?" without dealing with the object-level question "to what extent are these views right"? The paper seems to try to side-step th
... (read more)
jchen1:
Re your second point, a counter would be that the implementation of recommendations arising from ERS will often have impacts on the population alive at the time of implementation, and the larger those impacts are the less possible specialization seems. E.g. if total utilitarians/longtermists were considering seriously pursuing the implementation of global governance/ubiquitous surveillance, this might risk such a significant loss of value to non-utilitarian non-longtermists that it's not clear total utilitarians/longtermists should be left to dominate the debate.
Mauricio:
I mostly agree. I'm not sure I see how that's a counter to my second point, though. My second point was just that (contrary to what the paper seems to assume) some amount of ethical non-representativeness is not in itself bad. Also, if we're worried about implementation of large policy shifts (at least, if we're worried about this under "business as usual" politics), I think utilitarians/longtermists can't and won't actually dominate the debate, because policymaking processes in modern democracies by default engage a large and diverse set of stakeholders. (In other words, dominance in the internal debates of a niche research field won't translate into dominance of policymaking debates--especially when the policy in question would significantly affect many people.)

Quick thoughts from my phone:

Thanks for writing this post, Carla and Luke. I am sorry to hear about your experiences, that sounds very challenging.

I also understand why people would object to your work, as many may have had high confidence in it having negative expected value.

It was surely a very difficult situation for all parties.

I am glad you are voicing concerns, I like posts like this.

At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.

I think that it's unavoidable that there will be a lot of strong disagreement in the EA community. It seems unavoidable in any group of diverse individuals who are passionately working together towards important goals. Of course, we should try to handle conflict well, but we shouldn't expect that it can ever be avoided or be completely pleasant.

I also understand why people don't express criticism publicl... (read more)

I agree with most of what you say other than it being reasonable for some people to have acted in self-interest. 

While I do think it is unavoidable that there will be attempts to shut down certain ideas and arguments out of the self-interest of some EAs, I think it's important that we have a very low tolerance of this.

PeterSlattery:
Thanks for commenting :) I intended to present the self-interest part as bad, sorry. I agree, but I don't see this as 'shutting down' arguments. Can I just check that I am not misreading what happened? My interpretation from this was that they were strongly discouraged from publishing the paper by people who disagreed with what the paper was claiming (who may or may not have also been self-interested in maintaining their funding) and/or predicted negative outcomes from the work. They were still able to publish the paper and participate in the community. No-one was 'shut down' in the sense that someone forced them not to publish it (though they may have strongly advised against it). Is this correct? Maybe I misunderstand what "prevent this paper from being published" actually entailed.

Ah okay.

I think I interpreted this as ‘pressure’ to not publish, and my definition of ‘shutting down ideas’ includes pressure / strong advice against publishing them, while yours is restricted to forcing people not to publish them.

At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.

 

I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see work that doesn't get published, so this is basically a catch-22 (either it gets published, in which cases there isn't censorship, or it doesn't get published, in which case no one ever hears about it).

Also, most censorship is soft rather than hard, and comes via chilling effects.

(I'm not intending this response to make any further object-level claims about the current situation, just that the quoted argument is not a good argument.)

I think it is disappointing that so many comments are focusing on arguing with the paper rather than discussing the challenges outlined in the post. From a very quick reading I don't find any of the comments here unreasonable, but I do find them to be talking about a different topic. It would be better if we could separate out the discussion of "red teaming" EA from the discussion of this particular paper.

The paper is very well written, crisp and communicates its points very well.

The paper includes characterizations of longtermists that seem schematic and many would find unfair. 

In the post itself, there are serious statements that add a lot of heat to the issue and are hard to approach.

I think that this is a difficult time where many people are getting or staying away, or performing emotional labor, for what are genuinely difficult experiences of the OP.

This isn't ideal for truthseeking. 

If I was in a different cause area with a similar issue, I wouldn't want a lot of longtermists coming in and pulling on these threads; I don't think that would be the ideal or right thing to do.

Interesting, I was thinking the opposite! I was thinking, "There's so many interesting specific suggestions in this paper and people are just caught up on whether or not they like diversity initiatives generally and what they think of the tone on this paper, how annoying."

I just mean this could have been two posts - one about the paper and one about the experience of publishing the paper. Both would be very valuable.

I agree it would have been better to have this as two posts – I'm personally finding it difficult to respond to either the paper or the post, because when I focus on one I feel like I'm ignoring important points in the other.

That said, the fact that both are being discussed in a single post is down to the authors, not the commenters. I think it's reasonable for any given commenter to focus on one without justifying why they're neglecting the other.

Yeah I agree. I disagree with most of the paper, but I find the claims about pressures not to publish criticism troubling.

Khorton:
Completely agree!

How do we solve this?

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst. 

If I imagine myself dependent on the funding of someone, that would change my behaviour. Anyone have any ideas of how to get around this? 

- Tenure is the standard academic approach, but does that lead to better work overall?
- A wider set of funders who will fund work even if it attacks the other funders?
- OpenPhil making a statement that it will fund high-quality work it disagrees with
- Some way to anonymously survey EA academics, to get a sense of whether there is a point that everyone thinks but is too scared to say
- Some kind of prediction market on views that are likely to be found to be wrong in the future

I think offering financial incentives specifically for red teaming makes sense. I tend to think red teaming is systematically undersupplied because people are concerned (often correctly in my experience with EA) that it will cost them social capital, and financial capital can offset that.

I'm a fan of the CEEALAR funding model -- giving small amounts to dedicated EAs, with less scrutiny and less prestige distribution. IMO it is less incentive-distorting than more popular EA funding models.

Most of these ideas sound interesting to me. However —

- OpenPhil making a statement to fund high quality work they disagree with

I'm not quite sure what this means? I'm reading it as "funding work which looks set to make good progress on a goal OP don't believe is especially important, or even net bad".  And that doesn't seem right to me.

Similar ideas that could be good —

  • OP/other grantmakers clarifying that they will consider funding you on equal terms even if you've publicly criticised OP/that grantmaker
  • More funding for thoughtful criticisms of effective altruism and longtermism (theory and practice)

I'm especially keen on the latter!

Kerkko Pelttari:
Perhaps a general "willingness to commit" of X% of funding to criticism of areas that are heavily funded by EA-aligned funding organizations could work as a general heuristic for enabling the second idea (e.g. if "pro current X-risk" research gets N funding, then some percentage of N would be made available for "critical work" in the same area). But in science it can sometimes be hard to even say which work is critical and which builds on top of existing work.
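As a toy sketch of how this heuristic might work in practice (all area names, percentages, and funding figures here are made up for illustration):

```python
# Sketch of the "commit X% to criticism" heuristic described above:
# for each heavily funded area, earmark a fixed share of its funding
# for critical work on that same area.

CRITICISM_SHARE = 0.10  # hypothetical 10% earmark

def criticism_budgets(funding_by_area, share=CRITICISM_SHARE):
    """Return the budget earmarked for critical work in each area."""
    return {area: amount * share for area, amount in funding_by_area.items()}

# Hypothetical funding levels (in $M), for illustration only.
funding = {"AI x-risk": 50.0, "biosecurity": 20.0}
print(criticism_budgets(funding))  # {'AI x-risk': 5.0, 'biosecurity': 2.0}
```

As the comment notes, the hard part is not the arithmetic but deciding what counts as "critical work" in the first place.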

Sounds good. At the more granular and practical end, this sounds like red-teaming, which is often just good practice.

Strong upvote, especially to signal my support of

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst. 

Maybe my models are off but I find it hard to believe that anyone actually said that. Are we sure people said "Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" 

That sounds to me like a thing only cartoon villains would say. 

I might be able to provide a bit of context:

I think the devil is really in the details here. I think there are some reasonable versions of this. 

The big question is why and how you're criticizing people, and what that reveals about your beliefs (and what those beliefs are).

As an extreme example, imagine if a trusted researcher came out publicly, saying,
"EA is a danger to humanity because it's stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA down."

If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.

It's possible to use criticism to improve a field or try to destroy it.

I'm a big fan of positive criticism, but think that some kinds of criticism can be destructive (see a lot of politics, for example)

I know less about this particular circumstance; I'm just pointing out how the other side would see it.

This is all reasonable but none of your comment addresses the part where I'm confused. I'm confused about someone saying something that's either literally the following sentence, or identical in meaning to: 

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding." 

If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.


That part of the example makes sense to me. What I don't understand is the following:

In your example, imagine you're a friend, colleague, or an acquaintance of that researcher who considers publishing their draft about how EA needs to be stopped because it's slowing down AGI. What do you tell them? It seems like telling them "The reason you shouldn't publish this piece is that you [or "we," in case you're affiliated with them] might no longer get any funding" is a strange non sequitur. If you think they're right about their claim, it's really important to publish the article anyway. If you think they're wrong, there are still arguments in favor of discussin... (read more)

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" I have heard this multiple times from different sources in EA. 

This is interesting if true. With respect to this paper in particular, I don't really get why anyone would advise the authors not to publish it. It doesn't seem like it would affect CSER's funding, since as I understand it (maybe I'm wrong) they don't get much EA money and it's hard to see how it would affect FHI's funding situation. The critiques don't seem to me to be overly personal, so it's difficult to see why publishing it would be overly risky. 

Khorton:
Why "if true"? Why would Joey misrepresent his own experiences?
John G. Halstead:
Yeah, fair. I didn't mean it like that.
Evan_Gaensbauer:
Strongly upvoted, and me too. Which sources do you have in mind? We can compare lists if you like. I'd be willing to have that conversation in private but for the record I expect it'd be better to have it in public, even if you'd only be vague about it.
Andre_C:
I think the rationale behind making such a statement is less about specific funding for the individuals making that statement, and more about the EA movement as a whole. It goes roughly: most of the funding EA has comes from a small number of high-net-worth individuals, who think donating to EA is a good idea because of their relationship with, and trust in, central figures in EA. By criticising those figures, you decrease the chance of those figures pulling more high-net-worth individuals to donate to EA. Hence, criticising central figures in EA is bad. (Not saying that I agree with this line of reasoning, but it seems plausible to me that people would make such a statement because of it.)

Very happy to have a private chat and tell you about our experience then. 

I'm curious about this and would be happy to hear more about it if you're comfortable sharing. I'll get in touch (and would make sure to read the full article before maybe chatting)! 

Davidmanheim:
I want to flag that "That sounds to me like a thing only cartoon villains would say." is absolutely contrary to discourse norms on the forum. I don't think it was said maliciously, but it's definitely not "kind," and it does not "approach disagreements with curiosity." Edit: Clearly, I read this very differently than others, and given that, I'm happy to retract my claim that this was mean-spirited.

When I wrote my comment, I worried it would be unkind to Zoe because I'm also questioning her recollection of what people said.

Now that it looks like people did in fact say the thing exactly the way I quoted it (or identical to it in meaning and intent), my comment looks more unkind toward Zoe's critics.  

Edit: Knowing for sure that people actually said the comment, I obviously no longer think they must be cartoon villains. (But I remain confused.) 
 

fwiw I was not offended at all. 

John G. Halstead:
I'm a bit lost - are you saying that the quotes you have seen were or were not as cartoon-villainish as you thought?

I haven't seen any quotes, but Joey saying he had the same experience, Zoe confirming that she didn't misremember this part, and none of the reviewers speaking up to say "This isn't how things happened" made me update toward thinking that maybe one or more people actually did say the thing I considered cartoonish.

And because people are never cartoon villains in real life, I'm now trying to understand what their real motivations were. 

For instance, one way I thought of how the comment could make sense is if someone brought it up because they are close to Zoe and care most about her future career and how she'll be doing, and they already happen to have a (for me very surprising) negative view of EA funders and are pessimistic about bringing about change. In that scenario, it makes sense to voice the concerns for Zoe's sake.

Initially, I simply assumed that the comment must be coming from the people who have strong objections to (parts of) Zoe's paper. And I was thinking "If you think the paper is really unfair, why not focus on that? Why express a concern about funding that only makes EA look even worse?"

So my new model is that the people who gave Zoe this sort of advice may not have been defend... (read more)

It might be useful to hear from the reviewers themselves as to the thought process here. As mentioned above, I don't really understand why anyone would advise the authors not to publish this. For comparison, I have published several critiques of the research of several Open Phil-funded EA orgs while working at an Open Phil-funded EA org. In my experience, if the arguments are good, it doesn't really matter if you disagree with something Open Phil funds. Perhaps that is not true in this domain for some reason?

This is also how I interpreted the situation.

(In my words: Some reviewers like and support Zoe and Luke but are worried about the sustainability of their funding situation because of the model that these reviewers have of some big funders. So these reviewers are well-intentioned and supportive in their own way. I just hope that their worries are unwarranted.)

Guy Raveh:
I think a third hypothesis is that they really think funding whatever we are funding at the moment is more important than continuing to check whether we are right; and don't see the problems with this attitude (perhaps because the problem is more visible from a movement-wide, longterm perspective rather than an immediate local one?).
Aaron Gertler (Moderator):

As a moderator, I thought Lukas's comment was fine.

I read it as a humorous version of "this doesn't sound like something someone would say in those words", or "I cast doubt on this being the actual thing someone said, because people generally don't make threats that are this obvious/open".  

Reading between the lines, I saw the comment as "approaching a disagreement with curiosity" by implying a request for clarification or specification ("what did you actually hear someone say"?). Others seem to have read the same implication, though Lukas could have been clearer in the first place and I could be too charitable in my reading.

Compared to this comment, I thought Lukas's added something to the conversation (though the humor perhaps hurt more than helped).

*****

On a meta level, I upvoted David's comment because I appreciate people flagging things for potential moderation, though I wish more people would use the Report button attached to all comments and posts (which notifies all mods automatically, so we don't miss things).

MaxRa:
I appreciated Lukas's comment, as I had the same reaction. The idea that somebody would utter this sentence and not cringe about having said something so obviously wrongheaded feels very off. I think adding something like "Hey, this specific claim would be almost shockingly surprising for my current models (gesturing at the reason why)" is a useful prompt/invitation for further discussion, and not unkind or uncurious.

Good for you!

I'm sad that this seemed necessary, and happy to see that despite some opposition, it was written and published. I sincerely hope that the cynics saying it could damage your credibility or careers are wrong, and that most of the criticisms are not as severe as they may seem - but if so, it's great that the issues are being pointed out, and if not, it's critical that they are.

Thanks for writing this! It seems like you've gone through a lot in publishing this. I am glad you had the courage and grit to go through with it despite the backlash you faced. 

Sorry if this is a bit of a tangent but it seems possible to me to frame a lot of the ideas from the paper as wholly uncontroversial contributions to priorities research. In fact I remember a number of the ideas being raised in the spirit of contributions by various researchers over the years, for which they expected appreciation and kudos rather than penalty.

(By “un-/controversial” I mean socially un-/controversial, not intellectually. By socially controversial I mean the sort of thing that will lead some people to escalate from the level of a truth-seeking discussion to the level of interpersonal conflict.)

I think it’s more a matter of temperament than conviction that I prefer the contribution framing to a respectful critique. (By “respectful” I mean respecting the feelings, dignity, and individuality of the addressees, not authority/status. Such a respectful critique can be perfectly irreverent.) Both probably have various pros and cons in different contexts.

But one big advantage of the contribution framing seems to be that it makes the process of writing, review, and publishing a lot less stressful because it avoids antagonizing people – even though they ideally shouldn’t feel ant... (read more)

Many of these comments are at heart debating technocracy vs. populism in decision-making. A separate conversation on this topic has been started here: https://forum.effectivealtruism.org/posts/yrwTnMr8Dz86NW7L4/technocracy-vs-populism-including-thoughts-on-the

Thanks for sharing this, Zoe!

I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don't agree with all your points or the ways you frame them.

Things that would make me excited to read future work, and IMO would make that work stronger:

  • Providing more concrete suggestions for improvement. Criticism is valuable, but I'm aware of many of the weaknesses of our frameworks; what I'm really hungry for is further work on solving them. This probably requires focusing down to specific areas, rather than casting a wide net as you did for this summary paper. 
  • Engaging with the nuances of longtermist thinking on these subjects. For example, when you mention the importance of risk-factor assessment, I don't see much engagement with e.g. the risk factor / threat / vulnerability model, or with the paper on defense in depth against AI risk. Neither of these models are perfect, but I expect they both have useful things to offer.
    • I expect this links up with the above point. Starting from a viewpoint of what-can-I-build  encourages finding the strong points of prior work, rather than the weak points you focused on in this piece.

What does TUA stand for?

Techno-utopian approach (via paper abstract)

berglund:
Thanks!
Patrick:
I would've found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here's a relevant excerpt from the paper:

How could we solve this?

Singer started the Journal of Controversial Ideas, which lets people publish under pseudonyms. 

https://journalofcontroversialideas.org/

Maybe more people should try to publish criticisms there, or there could be funding for an EA-specific journal with similar rules.

I guess there are problems with this suggestion, let me know what they are.

I like the idea of setting up a home for criticisms of EA/longtermism. Although I guess the EA Forum already exists as a natural place for anyone to post criticisms, even anonymously. So I guess the question is — what is the forum lacking? My tentative answer might be prestige / funding. Journals offer the first. The tricky question on the second is: who decides which criticisms get awarded? If it's just EAs, this would be disingenuous.

I think people don't appreciate how much upvotes and especially downvotes can encourage conformity.

Suppose a forum user has drafted "Comment C", and they estimate a 90% chance that it will be upvoted to +4, and a 10% chance it will be downvoted to -1.

Do we want them to post the comment? I'd say we do -- if we take score as a proxy for utility, the expected utility is positive.

However, I submit that for most people, the 10% chance of being downvoted to -1 is much more salient in their mind -- the associated rejection/humiliation of -1 is a bigger social punishment than +4 is a social reward, and people take those silly "karma" numbers surprisingly seriously.

It seems to me that there are a lot of users on this forum who have almost no comments voted below 0, suggesting a revealed preference to leave things like "Comment C" unposted (or even worse, they don't think the thoughts that would lead to "Comment C" in the first place). People (including me) just don't seem very willing to be unpopular. And as a result, we aren't just losing stuff that would be voted to -1. We're losing stuff which people thought might be voted to -1.

(I also don't think karma is a great proxy for utility... (read more)
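The expected-value arithmetic in the comment above can be written out explicitly. A minimal sketch using the 90%/+4 and 10%/-1 figures from the comment; the loss-aversion multiplier at the end is a made-up illustration of why the downvote branch can loom larger than its nominal size:

```python
# Expected karma of posting "Comment C", using the figures above:
# 90% chance of being upvoted to +4, 10% chance of being downvoted to -1.
outcomes = [(0.9, 4), (0.1, -1)]

# If karma were a good proxy for utility, the comment is clearly worth posting.
expected_karma = sum(p * score for p, score in outcomes)
print(expected_karma)  # 3.5

# But suppose the sting of a downvote is felt at 5x its nominal size
# (a made-up loss-aversion multiplier). The felt value shrinks, and for
# riskier comments it can flip negative even when expected karma is positive.
LOSS_AVERSION = 5
felt_value = sum(p * (score * LOSS_AVERSION if score < 0 else score)
                 for p, score in outcomes)
print(felt_value)  # 3.1
```

This is of course only a toy model of the dynamic the comment describes, not a claim about how forum users actually weigh votes.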

We can’t afford to wait for a “Long Reflection”.

Alternatively, the "Long Reflection" has already begun, it's just not very evenly distributed. And humanity has a lot of things to hash out.

At an object level, I appreciate this statement on page 15:

If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.

At a meta level, thank you for your bravery and persistence in publishing this paper. I've added some tags to this post, including Criticism of the effective altruism community.

I'm happy with more critiques of total utilitarianism here. :) 

For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.

I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.").

I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk are translated into arguments for reducing s-risks.

I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people's preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.

Clarificatory question - are you arguing here that stagnation at the current level of technology would be a good thing?

If so, there seem to be several problems with this. It seems like a very severe bound on human potential: even in the richest countries in the world, most people work for roughly a third of their lives in jobs they find incredibly boring.

It also seems like this would expose us to indefinite risk from engineered pandemics. What do you make of that risk?

It also seems unlikely that climate change will be fixed without very strong technological progress in things like zero-carbon fuels, energy storage, etc.

anonea2021:
Consider that this might be coming from a techno-utopian perspective itself. We could very plausibly stop or at least delay climate change by drastically reducing the use of technology right now (COVID bought us a few months just by shutting down planes, although that has "recovered" now) [https://www.bbc.com/news/science-environment-59148520] and by focusing on rolling out existing technology. And there are (granted, fringe) political positions [https://en.wikipedia.org/wiki/Anarcho-primitivism] that argue industrialisation and maybe even agriculture was a mistake, and that critique the "civilisation" narrative (and no, not all of them argue for abandoning medicine and living like cavemen; it's more nuanced than that). I'm not saying you are "wrong". I'm saying that the instinct to judge coming up with a magic technology to allow economic growth and the current state of life while fixing climate change as more likely than global coordination to use existing technology in more sustainable ways feels techno-utopian to me. Technology causes problems? Just add more technology!

In my view, COVID is a very dramatic counter-example to the claimed climate benefits of technological stagnation/degrowth. Millions of people died, billions of people were locked in their homes for months on end, and travel was massively reduced. In spite of that, emissions in 2020 merely fell to 2011 levels. The climate challenge is to get to net zero emissions; a truly enormous humanitarian cataclysm would be required to make that happen without improved technology.

On your last paragraph, the instinct you characterise as techno-utopian just seems to me to be clearly correct. It just seems true that we are more likely to solve climate change by making better low-carbon tech than by getting all countries to agree to stop all technological progress. Consider emissions from cars. Suppose for the sake of argument that electric cars were only as advanced as they were ten years ago and were not going to improve. What, then, would be involved in getting car emissions to zero? On your approach, the only option seems to be for billions of people to give up their cars, with cars accessible only to those who can afford a $100k Tesla. That approach is obviously less likely to succeed than the 'techno-optimist' one of making electric cars better (which is the path we have taken, with significant success).

Davidmanheim:
That's obviously a false dilemma. Investing in better use of technology and new technology is great, but doesn't help without reforms that internalize the externalities of climate change. If we don't subsidize CCS or tax net carbon, unless it's somehow cheaper to capture it than to let it remain in the air, we won't reduce CO2 in the atmosphere just with technology, and we'll end up with tons of additional warming.

Hi David, I was arguing against this point:

"I'm saying that the instinct to judge coming up with a magic technology to allow economic growth and the current state of life while fixing climate change as more likely than global coordination to use existing technology in more sustainable ways feels techno-utopian to me."

So, the author was saying that s/he thinks we are more likely to solve climate change by global coordination with zero technological progress than through continued economic growth and technological progress. I argued that this wasn't true. This isn't a false dichotomy; I was discussing the dichotomy explicitly made by the author in the first place.

My claim is that without technological progress in electricity, industry, and transport, we are extremely unlikely to solve climate change, which is the point that Luke Kemp seems to disagree with.

Davidmanheim:
Ah. Yes, that makes sense. And it seems pretty clear that I don't disagree with you on the factual question of what is likely to work, but I also don't know what Luke thinks other than what he wrote in this paper, and I was confused about why it was being brought up.

How is this a false dilemma?

  • Stop all technological progress
  • Advance low carbon technology

Technically it omits a third option (technological progress in areas other than low carbon technology) but it certainly seems to cover all the relevant possibilities to me. Whether we have carbon taxes and so on is a somewhat separate issue: Halstead is arguing that without technological progress, sufficiently high carbon taxes would be ruinously expensive. 

Davidmanheim:
The presented dilemma omits the possibility that we can allow for technological progress while limiting the deployment of some technologies - like coal power plants and fossil-fuel-burning cars. That's what makes it a false dilemma [https://en.wikipedia.org/wiki/False_dilemma] - it presupposes that stopping all technology is the only alternative, when it isn't.

But this is differential technological development, which the authors strongly reject. The author and commenter explicitly ask us to consider how well we would fare if we stopped technological progress entirely.

Davidmanheim:
The authors don't reject differential technological development so much as they claim that no real case has been made for it in the relevant domains. Specifically: "why this is more tractable or effective than bans, moratoriums and other measures has not been fully explained and defended." But that statement by the authors, and others I have found, aren't claims that all technological progress should be stopped. So I think this is a false dilemma. For example, their suggested approach applies to the way the world has managed previous dangerous technologies like nuclear weapons and bioweapons: we ban use and testing, largely successfully, instead of the idea they reject - which would be, I guess, differentially preferring to fund defense-dominant technology on the assumption that use of nuclear weapons and bioweapons is inevitable and that, due to the technological completion hypothesis, the technology can't be stopped.

Technology causes problems? Just add more technology!

"it's more nuanced than that".

Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision-makers, and that gap means the field can be dismissed as 'far-fetched'. Democratisation is a critical component (along with apoliticisation).

I must say it was a bit of a surprise to me that the TUA is seen as the paradigm approach to ERS. I've worked in this space for about 5-6 years and never really felt drawn to strong longtermism, transhumanism, or technological progress. ERS seems like the limiting case of ordinary risk studies to me: I've worked in healthcare quality and safety (risk to one person at a time) and public health (risk to members of populations), and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact, in peer-reviewed work I've prev...

I haven't opened the paper yet - this is a reply to the content of the forum post.

Thank you for writing it. I completely agree with you: EA has to not only tolerate critics, but also encourage critical debate among its members.

Disabling healthy thought processes for fear of losing funding is disastrous, and calls into question the effectiveness of funding obtained on such terms.

I furthermore agree with all the changes you suggested the movement should make.

Is there a non-PDF version of the paper available? (e.g. html)

From skimming, a couple of the arguments seem to be the same ones I brought up here, so I'd like to read the paper in full, but knowing myself I won't have the patience to get through a 35-page PDF.

I'm not affiliated with EA research organizations at all (I participate in running a local group in Finland and am looking at industry and other EA-affiliated career options more so than research specifically).

 

However, I have had multiple discussions with fellow local EAs in which it was deemed problematic that some X-risk papers are subject to quite "weak" standards of criticism relative to how much they often imply. Heartfelt thanks to you both for publishing and discussing this topic, and for starting a conversation on the important meta-topic of EA research topics, funding decision-making, and standards.


Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.

What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.

Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational. In fact, these lists are extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of min...

I'm genuinely not sure why I'm being downvoted here. What did I say?

I think it's because you're making strong claims without presenting any supporting evidence. I don't know what reading lists you're referring to; I have doubts about not asking questions being an 'unspoken condition' about getting access to funding; and I have no idea what you're conspiratorially alluding to regarding 'quasi-censorship' and 'emotional blackmail'.

I also feel that the comment doesn't engage much with the perspective it criticizes (in terms of trying to see things from that point of view). (I didn't downvote the OP myself.)

When you criticize a group/movement for giving money to those who seem aligned with their mission, it seems relevant to acknowledge that it wouldn't make sense to not focus on this sort of alignment at all. There's an inevitable, tricky tradeoff between movement/aim dilution and too much insularity. It would be fair if you wanted to claim that EA longtermism is too far on one end of that spectrum, but it seems unfair to play up the bad connotations of taking actions that contribute to insularity, implying that there's something sinister about having selection criteria at all, without acknowledging that taking at least some such actions is part of the only sensible strategy.

I feel similarly about the remark about "techbros." If you're able to work with rich people, wouldn't it be wasteful not to? It would be fair if you wanted to claim that the rich people in EA use their influence in ways that... what is even the claim here? That their idiosyncrasies end up having an outsized effect? ...

Guy Raveh:
If that will happen whenever a rich person is passionate about a cause, then opting to work with rich people can cause more harm than good. Opting out certainly doesn't have to be "wasteful".
Lukas_Gloor:
My initial thinking was that "idiosyncrasies" can sometimes be neutral or even incidentally good. But I think you're right that this isn't the norm and it can quickly happen that it makes things worse when someone only has a lot of influence because they have money, rather than having influence because they are valued by their peers for being unusually thoughtful. (FWIW, I think the richest individuals within EA often defer to the judgment of EA researchers, as opposed to setting priorities directly themselves?)
Guy Raveh:
I'm not saying I know anything to the contrary, but I'd like to point out that we have no way of knowing. This is a major disadvantage of philanthropy: whereas governments are required to be transparent about their fund allocations, individual donors are given privacy and undisclosed control over who receives their donations and what organisations are allowed to use them for.

My apologies, specific evidence was not presented with respect to...

  • ...the quasi-censorship/emotional blackmail point because I think it's up to the people involved to provide as much detail as they are personally comfortable with. All I can morally do is signal to those out of the loop that there are serious problems and hope that somebody with the right to name names does so. I can see why this may seem conspiratorial without further context. All I can suggest is that you keep an ear to the ground. I'm anonymous for a reason.
  • ...the funding issue because either it fits the first category of "areas where I don't have a right to name names" (cf. "...any critique of central figures in EA would result in an inability to secure funding from EA sources..." above) or because the relevant information would probably be enough to identify me and thus destroy my career.
  • ...the reading list issue because I thought the point was self-evident. If you would like some examples, see a very brief selection below, but this criticism applies to all relevant reading lists I have seen and is an area where I'm afraid we have prior form - see https://www.simonknutsson.com/problems-in-effective-altruism-an
...
anonymousEA:
Again, I'm really not sure where these downvotes are coming from. I'm engaging with criticism and presenting what information I can present as clearly as possible.
[Charles He's comment, collapsed at -81 karma]

Personally, I more or less agreed with you, and I don't think you were as insensitive as people suggested. I work in machine learning, yet I feel that shining a light on the biases and outsized control of people in the tech industry is warranted and important.

quinn:
The word "techbros" signals that you have a kind of information diet and worldview that I think people have bad priors about.

IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.

If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP's point for them...

I think that the dismissive and insulting language is at best unhelpful - and signaling your affiliations by being insulting to people you see as the outgroup seems like a bad strategy for engaging in conversation.

I apologise; I don't process it that way. I was simply using it as shorthand.

The "content" here is that you refer to the funders you dislike with slurs like "techbro". It's reasonable to update negatively in response to that evidence.

[anonymousEA's comment, collapsed at -23 karma]

Priors should matter! For example, early rationalists were (rightly) criticized for being too open to arguments from white nationalists, believing they should only look at the argument itself rather than the source. It isn't good epistemics to ignore the source of an argument and its potential biases (though of course it isn't good epistemics to dismiss an argument out of hand on that basis either).

anonymousEA:
I don't see a dichotomy between "ignoring the source of an argument and their potential biases" and downvoting a multi-paragraph comment on the grounds that it used less-than-charitable language about Silicon Valley billionaires. Based on your final line, I'm not sure we disagree?
quinn:
I think it's plausible that it's hard to notice this issue if your personal aesthetic preferences happen to be aligned with the TUA. I tried to write a little here [https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=pvXtqvGfjATkJq7N2] questioning how important aesthetic preferences may be. I think it's plausible that people can unite around negative goals even if positive goals would divide them, for instance, but I'm not convinced.

> the idea of [...] the NTI framework [has] been wholesale adopted despite almost no underpinning peer-review research.

I argue that the importance-tractability-crowdedness framework is equivalent to maximizing utility subject to a budget constraint.
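For readers who want the gist of that equivalence without the full argument, here is a minimal sketch (with made-up numbers; the factor definitions follow the common 80,000 Hours convention, not necessarily the linked post's formalism). The three factors are defined so that their units telescope into marginal utility per extra dollar, which is exactly the quantity a budget-constrained utility maximizer equalizes across causes:

```python
# Illustrative numbers for one hypothetical cause area.
importance = 100.0        # utility gained per percentage point of the problem solved
tractability = 0.02       # percentage points solved per 1% increase in resources
resources_invested = 5e6  # dollars already allocated to the problem
neglectedness = 1 / resources_invested  # % increase in resources per extra dollar

# Units cancel across the product:
# (utility / %solved) * (%solved / %resources) * (%resources / $) = utility / $
marginal_utility_per_dollar = importance * tractability * neglectedness
print(marginal_utility_per_dollar)  # 4e-07
```

Allocating a fixed budget to whichever cause has the highest value of this product, until the values equalize at the margin, is the same thing as maximizing utility subject to the budget constraint, which is why the two framings coincide.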

Re the undue influence of TUA on policy, you say 

"An obvious retort here would be that these are scholars, not decision-makers, that any claim of elitism is less relevant if it refers to simple intellectual exploration. This is not the case. Scholars of existential risk, especially those related to the TUA, are rapidly and intentionally growing in influence. To name only one example noted earlier, scholars in the field have already had “existential risks” referenced in a vision-setting report of the UN Secretary General. Toby Ord has been referenced, ...

"Toby Ord shouldn't seek to influence policy" is not the message I get from that paragraph, fwiw.

It comes across to me as "Toby Ord and other techno-optimists already have policy influence [and so it's especially important for people who care about the long-term future to fund researchers from other viewpoints as well]."

I'm obviously not the authors; maybe they did mean to say that you and Toby Ord should stop trying to influence policy. But that wasn't my first impression.

John G. Halstead:
That wasn't how I interpreted it but perhaps I am an outlier given the voting on this comment.

I thought it was clear, in context, that the point made was that a minority shouldn't be in charge, especially when ignoring other views. (You've ignored my discussion of this in the past, but I take it you disagree.)

That doesn't mean they shouldn't say anything, just that we should strive for more representative views to be presented alongside theirs - something that Toby and Luke seem to agree with, given what they have suggested in the CTLR report, in this paper, and elsewhere.

[comment deleted]