Luke Kemp and I just published a paper which criticises the field of existential risk studies for lacking a rigorous and safe methodology:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225

It could be a promising sign of epistemic health that critiques of leading voices come from early-career researchers within the community. Unfortunately, the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

We lost sleep, time, friends, collaborators, and mentors because we disagreed on whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset.

We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.

What you see here is version 28. We have had approximately 20+ reviewers, around half of whom we sought out as scholars who would be sceptical of our arguments. We believe it is time to accept that many people will disagree with several points we make, regardless of how these are phrased or nuanced. We hope you will voice your disagreement based on the arguments, not the perceived tone of this paper.

We always saw this paper as a reference point and platform to encourage greater diversity, debate, and innovation. However, the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views. Making the case for democracy was heavily contested, despite reams of supporting empirical and theoretical evidence. In contrast, the ideas of differential technological development and the NTI framework have been wholesale adopted despite almost no underpinning peer-reviewed research. I wonder how many of the ideas we critique here would have seen the light of day if the same suspicious scrutiny had been applied to more orthodox views and their authors.

We wrote this critique to help progress the field. We do not hate longtermism, utilitarianism, or transhumanism. In fact, we personally agree with some facets of each. But our personal views should barely matter. We ask of you what we have assumed to be true for all the authors we cite in this paper: that the author is not equivalent to the arguments they present, that arguments will change, and that it doesn’t matter who said it, but rather that it was said.

The EA community prides itself on being able to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.

Many EAs we showed this paper to exemplified the ideal. They assessed the paper’s merits on the basis of its arguments rather than group membership, engaged in dialogue, disagreed respectfully, and improved our arguments with care and attention. We thank them for their support and for meeting the challenge of reasoning in the midst of emotional discomfort. Others, however, accused us of lacking academic rigour and harbouring bad intentions.

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and that if the approach does exist, it certainly isn't dominant. It was these same people who then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA.

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.

The greatest predictor of how negatively a reviewer would react to the paper was their personal identification with EA. Writing a critical piece should not incur negative consequences for one’s career options, personal life, and social connections in a community that is supposedly great at inviting and accepting criticism.

Many EAs have privately thanked us for "standing in the firing line" because they found the paper valuable to read but would not dare to write it. Some tell us they have independently thought of and agreed with our arguments but would like us not to repeat their name in connection with them. This is not a good sign for any community, never mind one with such a focus on epistemics. If you believe EA is epistemically healthy, you must ask yourself why your fellow members are unwilling to express criticism publicly. We too considered publishing this anonymously. Ultimately, we decided to support a vision of a curious community in which authors should not have to fear their name being associated with a piece that disagrees with current orthodoxy. It is a risk worth taking for all of us. 

The state of EA is what it is due to structural reasons and norms (see this article). Design choices have made it so, and they can be reversed and amended. EA fails not because the individuals in it are not well-intentioned; good intentions just only get you so far.

EA needs to diversify funding sources by breaking up big funding bodies and by reducing each org's reliance on EA funding and tech billionaire funding; it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify the academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, and stop classifying everything as info hazards…amongst other structural changes. I now believe EA needs to make such structural adjustments in order to stay on the right side of history.

Comments

Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.

I just want to say that I think this is a beautifully accepting response to criticism. Not defensive. Says hey yes maybe there is a problem here. Concretely offers time and money and a plan to look into things more. Really lovely, thank you Will. 

Thanks for stating this publicly here, Will!

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.

I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of the TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?

Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I bel... (read more)

[anonymous]:

I would agree that the article is too wide-ranging. There's a whole host of content ranging from criticisms of expected value theory, arguments for degrowth, arguments for democracy, and then criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics. 

Eevee🔹:
Where? The paper doesn't mention economic growth at all.

The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate  abhorrent and opposed to human potential, but the authors don't seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
"Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as ... (read more)

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential."

Point taken. Thank you for pointing this out.

"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is a feasible option, or is at least worth investigating.

I think this is more about stopping the development of specific technologies - for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn't necessarily related to degrowth - for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.

anonymousEA:
I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.

Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.

anonymousEA:
It is morally tenable under some moral codes but not others. That's the point.

Several times the case against the TUA was not actually argued


I think that they didn't try to oppose the TUA in the paper, or make the argument against it themselves. To quote: "We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field."

I also think that they don't need to prove that others are wrong to show that the lack of diversity has harms - as you agreed.

If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. 

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.

The responsibility of the researcher is to make their case as bulletproof as possible, and designed to c

... (read more)

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.


If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I'm not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if on top of that those bad arguments are harmful to EA causes, that should expedite the decision.

To be clear, that is absolutely not to say that publishing Democratizing Risk is/was justification for firing or cutting funding; I am still very much talking abstractly.

Guy Raveh:
I mostly agree with your comments, but I think we need to stop referring to specific people as leaders of the movement. Will MacAskill's opinion is not really more important than anyone else's.

I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that's good, since he's spent more time thinking about it, is smarter, and has more insight into what is happening. Second, in fact, movements have leaders, and egalitarianism is great for rights, but direct democracy is a really bad solution for running anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)

CarlaZoeC:
Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case, then most of the key texts in x-risk would all be poorly argued. Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis.

"Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus."

As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed, and highlight assumptions that may be incorrect or smuggle in values. Interestingly, it's hard to see how you believe the piece is both polemic and yet does not directly critique the TUA sufficiently. Those two criticisms are in tension. If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).

"You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?"

Of course there are arguments for it, some of which are discussed in the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.

"I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than ar

Hi Carla,

Thanks for taking the time to engage with my reply. I'd like to engage with a few of the points you made.

First of all, my point prefaced with 'speaking abstractly' was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits that it should not result in withdrawn funding. On a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.

You saw "we do not argue against the TUA, but point out the unanswered questions we observed. .. but highlight assumptions that may be incorrect or smuggle in values".  Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemic... (read more)


I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.

In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.

Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!


You claim that EA needs to...

diversify funding sources by breaking up big funding bodies 

Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in th... (read more)

While I appreciate that we're all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.

Especially given the claim that "EA needs to make such structural adjustments in order to stay on the right side of history".

[anonymous]:

The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true. 

Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:

"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on... (read more)

That was my reading of VWH too - as a pro tanto argument for extreme surveillance and centralized global governance, provided that the VWH is true. However, many of its proponents seem to believe that the VWH is likely to be true. I do agree that the authors ought to have interpreted the paper more carefully, though.

CarlaZoeC:
  • It doesn't matter whether Nick Bostrom speculates or wants to implement surveillance globally. In respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging in the article, but:
  • He published in a policy journal, with an opening ‘policy implication’ box.
  • He published an outreach article about it in Aeon, which also ends with the sentence: ”If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
  • In public-facing interviews, such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment.
  • The VWH was also published as a German book (why, I don’t know…).
[anonymous]:

It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis” [17], which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?

Thanks for laying this out so clearly. One frustrating aspect of having a community comprised of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, at the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead's (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1] 

I do understand the value of hypothetical inquiry as part analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the efforts to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.

In my opinion, it's insuffici... (read more)

Emile Torres (formerly Phil) just admitted on their Twitter that they were a co-author of a penultimate version of this paper. It is extremely deceptive not to have disclosed this contribution in the paper or in the Forum post. At the point this paper was written, Torres had been banned from the EA Forum and multiple people in the community had accused Torres of harassing them. Do you think that might have contributed to the (alleged) reception of your paper?

The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about initial drafting with Torres. At that stage, the paper was completely different and a paper between Kemp and I. None of the critiques about it or the conversations about it involved concerns about Torres, co-authoring with Torres or arguments by Torres, except in so far as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.

There was a very early draft of Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major re-writing of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don't recall that we were forced to take Torres off the collaboration (I’m not sure who knew about the conversations about collaborations we had): we decided to part because we wanted to move the content and tone i... (read more)


Here's a Q&A which answers some of the questions by  reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here)

"Do you not think we should work on x-risk?"

  • Of course we should work on x-risk

 

"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"

  • No. It’s not really them we’re criticising if at all. Everyone should be allowed to put out their ideas. 
  • But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.

 

"Do you hate longtermism?"

  • No. We are both longtermists (probs just not the techno-utopian kind).

 

"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"

  • It doesn't matter whether Nick Bostrom speculates or wants to implement surveillance globally. In respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging i
... (read more)

It’s been interesting to re-read the discussion of this post in light of the new knowledge that Emile P. Torres was originally a co-author. For example, Cremer instructs reviewers to ask why they might have felt like the paper was a hostile attack. Well, I can certainly see why readers could have had this perception if they read it after Emile had already started publicly insinuating that various longtermists are sympathetic to white supremacy or are plagiarists.

Cremer also says some reviewers asked, “Do you hate longtermism?"

The answer she gives above is “No. We are both longtermists (probs just not the techno-utopian kind)”, but it seems like the answer would in fact have been “Two of us do not, but one of the authors does hate longtermism and has publicly called it incredibly dangerous”.

Just noting that I strongly endorse both this format for responding to questions, and the specific responses.

Raven:
With regard to harshness, I think part of the reason you get different responses is because you're writing in the genre of the academic paper. Since authors have to write in a particular formal style, it's ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it's not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe. For example: As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it's easy to read into it some amount of value judgment around longtermism and longtermists.

The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.

Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions. 

anonea2021:
I think it's important to distinguish "anarcho"-capitalist thought (which still needs a state to enforce private property and capital rights, and generally doesn't acknowledge the problems of monopolies, existing power imbalances, etc.) from actual anarchist/anti-totalitarian policies. All the things you mentioned except the last decentralise control from a democratic state to a moneyed elite, not to a more democratic state, confederation, anarchist commune, or whatever.

I really dislike it when left-anarchist-leaning folks put scare quotes around "anarcho" in anarcho-capitalist. In my experience it's a strong indicator that someone isn't arguing in good faith.

I'm not an ancap (or a left-anarchist), but David Friedman and his ilk are very clearly trying to formulate a way for a capitalist society to exist without a state. You might think their plans are unfeasible or undesirable (and I do), but that doesn't mean they're secretly not anarchists.

anonea2021:
From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies. Using that label is like calling yourself "vegano-carnivore" because you want to reduce the suffering of eating animals as much as possible while still eating them. Even if you can come up with a justification for it by presenting clearly realizable ways to implement this (e.g. lab-grown meat), it is adopting a label from a community that does not want them to do so. Indeed, there was already a ready-made label, "laissez-faire", but that one has sufficiently negative historical associations that I guess it is to be avoided.

Regarding Friedman, I would challenge the statement that he provides ways to organize it without a state, given that he romanticizes medieval Iceland and the western frontier, and I am highly skeptical that the law enforcement/military aspect required for enforcing capital rights would not lead to the tyranny of the robber barons in their company towns again; but I would have to revisit it for a detailed rebuttal.

I don't think most people outside left-anarchism would equate "state" with the existence of any unjust hierarchies. Indeed, defining a state in that way seems to be begging the question with regard to anarchy's desirability and feasibility.

Whether or not Friedman provides ways to organise society without a state, he is clearly trying to do so, at least by any definition of "state" that a non-(left-anarchist) would recognise (e.g. an entity with a monopoly on legitimate violence).

Michael Große:
I don't see where anonea2021 has made that claim. Did you mean to write "property" instead of "state" in this paragraph? (Genuine question.) Either way, I'm having trouble following what you want to say with this paragraph.

What anonea2021 states: I can confirm that this is indeed the view of every other lineage of anarchists that I'm aware of. The anarchist's goal is to minimize unjust hierarchies. And given that private property (especially of the means of production) is seen as one of the main causes of unjust hierarchies in today's world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus "anarcho-"capitalism.
Will Bradshaw:
Yes, it seems like there's some crossed wires here.

I claimed that ancaps are "clearly trying to formulate a way for a capitalist society to exist without a state". The intended implicature was that since anarchy = the absence of a state (according to common understanding, the dictionary definition, and etymology), it was therefore proper to call them anarchists.

anonea2021 responded with "From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies." I was confused about this, since it didn't seem like a direct response to my claims. I wasn't sure whether to read it as (a) a claim that unjust hierarchies = a state (which seemed like a bad definition of "state"), or (b) a claim that anarchism wasn't actually about the absence of a state but instead about abolishing unjust hierarchies in general (which seemed like a bad, question-begging definition of "anarchism", given that ~everyone wants to minimise unjust hierarchies). I tried to respond to the superposition of these two interpretations, which probably led to my phrasing being more confusing than it needed to be.

As before, this begs the question. Everyone wants to minimise unjust hierarchies, so that's not a useful description of anarchism. People who disagree about which hierarchies are unjust, what interventions are effective for reducing them, and what the costs of those interventions are, will end up advocating for radically different systems of government. Some of those will end up advocating for a society without a state, and it's useful to refer to that subset of positions as "anarchist" even if they are very different from each other.

Anarcho-capitalism is really quite different from other forms of capitalist social organisation, and its distinctive feature is the absence of a coercive state. "Anarcho-capitalism" is thus a completely appropriate name for it – indeed, it's hard to see what other name would fit better. Also, it's wha

First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.

The most important updates I got from the paper:

  1. Put less weight on technological determinism. In particular, defining existential risk in terms of a society reaching "technological maturity" without falling prey to some catastrophe frames technological development as being largely inevitable. But I'd argue even under the "techno-utopian" view, many technological developments are not needed for "technological maturity", or at least not for a very long time. While I still tend to view development of things like advanced AI systems as hard to stop (lots of economic pressures, geographically dispersed R&D, no expert consensus on whether it's good to slow down/accelerate), I'd certainly like to see mor
... (read more)
anonymousEA:
I think you switched the two by accident. Otherwise an excellent comment; even if I disagree with most of it, have an updoot.
AdamGleave:
Thanks, fixed.
[anonymous]:

The section on expected value theory seemed unfairly unsympathetic to TUA proponents:

  • The question of what we should do with Pascal's mugging-type situations just seems like a really hard, under-researched problem where there are not yet any satisfying solutions.
  • EA research institutes like GPI have put a hugely disproportionate amount of research into this question, relative to the field of decision theorists. Proponents of the TUA, like Bostrom, were the first to highlight these problems in the academic literature.
  • Alternatives to expected value have received far less attention in the literature and also have many problems.
  • eg The solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of changing the outcome is often very low (eg <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
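To make the voting example concrete, here is a rough back-of-the-envelope sketch. The 1-in-10-million figure is the probability cited above; the societal-value figure is purely an assumption picked for illustration:

$$
\mathrm{EV}(\text{one vote}) = p \cdot V \approx 10^{-7} \times \$10^{10} = \$1{,}000
$$

On these (made-up) numbers, a threshold rule that ignores any outcome with probability below, say, $10^{-6}$ would discard voting despite its four-figure expected value per vote; that is the apparent reductio.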

JackM:
You seem to assume that voting / engaging in political advocacy are all obviously important things to do, and that any argument that says don't bother doing them falls prey to a reductio ad absurdum, but it's not clear to me why you think that. If all of these actions do in fact have an incredibly low probability of positive payoff, such that one feels they are in a Pascal's Mugging when doing them, then one might rationally decide not to do them. Or perhaps you are imagining a world in which loads of people stop voting, such that democracy falls apart. At some point in this world, though, I'd imagine voting would stop being a Pascal's Mugging action and would be associated with a reasonably high probability of having a positive payoff.
Patrick:
One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail. If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it's only one sentence plus a citation).
anonymousEA:
Which alternatives to EV have what problems for what uses in what contexts? Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data? What makes all alternatives to EV less preferable to the way in which EV is usually used in existential risk scholarship today, where subjectively-generated probabilities are asserted by "thought leaders" with no methodology and no justification, about events that are not rigorously defined nor separable, which are then fed into idealized economic models, policy documents, and press packs?
Will Bradshaw:
Why is writing a sequence of snarky rhetorical questions preferable to just making counter-arguments?

Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.

One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.

Some highlights:

I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.

On the science side I would be enthusiastic about seeing more work on eg models of catastrophic biorisk infection, macroeconomic analysis of ways artificial intelligence might affect society, and expansions of IPCC models that include permafrost methane release feedback loops.

On the humanities side I would want to see, for example, more work on historical, psychological, and anthropological evidence for long-term effects, and on successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys ... (read more)

jchen1:
"I have regularly seen proposals in the community to stop and regulate AI development" - Are there any public ones you can signpost to or are these all private proposals?

Everything written in the post above strongly resonates with my own experiences, in particular the following lines:

the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views.

The EA community prides itself on being able to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.

I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:

  • Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (eg here and here) to this critique to be dismissive and to miss the point. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stu
... (read more)
[anonymous]:

I do think there is a difference between this article and stuff from people like Torres, in terms of good faith.

I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.

I've seen "in bad faith" used in two ways:

  1. This person's argument is based on a lie.
  2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment for more.)

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unr... (read more)

 One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins. 

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Of course, there are many levels to what a "personal vendetta" might entail, and there are real trade-offs to whatever policy you establish. But I'm wary of taking the most extreme approach in any direction ("let's just ignore Phil entirely").... (read more)

weeatquince:
Agree with this.

Yes I think that is fair.

At the time (before he wrote his public critique) I had not yet realised that Phil Torres was acting in bad faith.

Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).

Strong upvote from me - you’ve articulated my main criticisms of EA.

I think it’s particularly surprising that EA still doesn’t pay much attention to mental health and happiness as a cause area, especially when we discuss pleasure and suffering all the time, Yew Kwang Ng focused so much on happiness, and Michael Plant has collaborated with Peter Singer.

In your view, what would it look like for EA to pay sufficient attention to mental health?

To me, it looks like there's a fair amount of engagement on this:

  • Peter Singer obviously cares about the issue, and he's a major force in EA by himself.
  • Michael Plant's last post got a positive writeup in Future Perfect and serious engagement from a lot of people on the Forum and on Twitter (including Alexander Berger, who probably has more influence over neartermist EA funding than any other person); Alex was somewhat negative on the post, but at least he read it.
  • Forum posts with the "mental health" tag generally seem to be well-received.
  • Will MacAskill invited three very prominent figures to run an EA Forum AMA on psychedelics as a promising mental health intervention.
  • Founders Pledge released a detailed cause area report on mental health, which makes me think that a lot of their members are trying to fund this area.
  • EA Global has featured several talks on mental health.

I can't easily find engagement with mental health from Open Phil or GiveWell, but this doesn't seem like an obvious sign of neglect, given the variety of other health interventions they haven't closely engaged with.

I'm limited her... (read more)

I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.

TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.

I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn't seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time ("we're not sure you can measure feelings", "we're worried about experimenter demand effect", etc.). I'd typically point out their concerns had already been addressed in the literature, but that still didn't seem to make them more interested. (I don't recall anyone ever mentioning 'item response theory', which Luke raises as his objection.) In the end, I got the ... (read more)

Really sad to hear about this, thanks for sharing. And thank you for keeping at it despite the frustrations. I think you and the team at HLI are doing good and important work.

To me (as someone who has funded the Happier Lives Institute) I just think it should not have taken founding an institute and six years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.

I think expecting orgs and donors to change direction is certainly a very high bar. But I don’t think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team... (read more)

Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in another comment, my experience with GiveWell staff was of being ignored by people who weren't all that familiar with the relevant literature - FWIW, I don't recall the concerns you raise in your notes being raised with me. I've not had interactions with Open Phil staff prior to 2021 - for those reading, Luke and I have never spoken - so I'm not able to comment regarding that.

Onto the substantive issues. Would you be prepared to more precisely state what your concerns are, and what sort of evidence would chance your mind? Reading your comments and your notes, I'm not sure exactly what your objections are and, in so far as I do, they ... (read more)

Hi Michael,

I don't have much time to engage on this, but here are some quick replies:

  • I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
  • On "weakly validated measures," I'm talking in part about lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
  • On "unconvincing intervention studies" I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I'm more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
  • On "wrong statistical test," I'm referring to the section called "Older studies used inappropriate statistical methods" in the linked conversation notes with Joel H
... (read more)
MichaelPlant:
Hello Luke,

Thanks for this too. I appreciate you've since moved on to other things, so this isn't really your topic to engage on anymore. However, I'll make two comments.

First, you said you read various things in the area, including by me, since 2015. It would have been really helpful (to me) if, given you had different views, you had engaged at the time and set out where you disagreed and what sort of evidence would have changed your mind.

Second, and similarly, I would really appreciate it if the current team at Open Philanthropy could more precisely set out their perspective on all this. I did have a few interactions with various Open Phil staff in 2021, but I wouldn't say I've got anything like canonical answers on what their reservations are about (1) measuring outcomes in terms of SWB (Alex Berger's recent technical update didn't comment on this), and (2) doing more research or grantmaking into the things that, from the SWB perspective, seem overlooked.

This is an interesting conversation. It’s veering off into a separate topic. I wish there were a way to “rebase” these spin-off discussions into a different place, for better organisation.

Thank you Luke – super helpful to hear!!