John G. Halstead

10100 · Joined Jan 2017

Bio

John Halstead - Independent researcher. Formerly Research Fellow at the Forethought Foundation; Head of Applied Research at Founders Pledge; and researcher at Centre for Effective Altruism. DPhil in political philosophy from Oxford

Comments
652

I strongly agree with a lot of your points here. To pick up on one strand you highlight, I think the fact that EA is very nerdy and lacks 'street smarts' has been at the root of some (but not all) of the problems we've been seeing. I think it might be this, rather than an intellectual commitment to assume good faith and tolerate weirdness, that is the main issue, though maybe the first causes the second. Specifically, EAs seem to have been pretty naive in dealing with bad actors over the last few years, and that persists to this day.

If the problem is a lack of street smarts, then we don't need to get into debates about being less weird: it's unclear what 'less weird' means, and hard to judge which margin of weirdness should move, which makes general debates about weirdness difficult. But it's pretty clear that we need to be more street smart.

Do the votes mean that it would be undemocratic to impose democratic rule?

Thanks for the detailed response. 

I agree that we don't want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn't be 'democratic' in any meaningful sense. 

  1. I don't have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word 'existential risk' doesn't change that fact. 
  2. Since you don't want diversity essentially along all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right wingers even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right wing representation would you like to increase?
  3. The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call 'TUA' (which I think is a bit of a silly label - how is it techno-utopian to think we're all going to be killed by technology?) is actually good, then value alignment on it seems good. 
  4. You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It's somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously.  I don't think it was rational for you to publish the post pseudonymously - I don't think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I'm not a researcher at a core EA org any more.)
    1. I don't think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn't heard of any of the authors of the interest rate piece before.
    2. The main example you give is the reception to the Cremer and Kemp piece, but I haven't seen any evidence that they did actually get the reception they claimed.
  5. I'm not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the  most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states "There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g."
    1.  "In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise." I don't think this gambit is open to you - your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding. 
    2. Of course, there is more to life and to performance at work than intelligence.
  6. As I mentioned in my first comment, it's not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don't help tech people, at least relative to everyone else in the world. In fact, I suspect the majority view among EAs is that progress in virology and AI should be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. You say: "the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal." One of the headings in your article is "We align suspiciously well with the interests of tech billionaires (and ourselves)". I don't see how anything you have said here is a good defence against my criticism of that claim.
  7. There are a few things to separate here. One worry is that EAs (and I) are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try to quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a ~400% increase at the lower 5% bound, on a 5ºC scenario.
    1. A lot of what you say in your response and in your article seems inconsistent - you make a point of saying that EAs ignore the experts, but then dismiss the experts yourselves when their views happen to be inconsistent with your preferred opinions. Examples:
      1. Defending postcolonialism in global development 
      2. Your explanation of why Walmart makes money vs mainstream economics.
      3. Your dismissal of all climate economics and the IPCC
      4. 'Standpoint theory' vs  analytical philosophy
      5. Your dismissal of Bayesianism, which doesn't seem to be aware of any of the main arguments for Bayesianism. 
      6. Your dismissal of the g factor, which doesn't seem to be aware of the literature in psychology. 
      7. The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
      8. Your defence of critical realism 
      9. Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1-page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
    2. Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. "For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C - 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss." 
      1. Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It's kind of true but a bit misleading to say that this 'could' lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people I have heard say things like "humanity is a virus" are environmentalists.
      2. Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don't actually produce any quantitative evidence and it's completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include  effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There's nothing missing in not modelling the tails here. 
  8. OK.
  9. What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it's still not very democratic - as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something? 
    1. We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn't give it money, and I doubt Moskovitz would either. I'm not sure what your proposal is for what we're supposed to do after this happens. 
  10. I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding 'mission' differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said "Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule". The problems of worker self-management are well-documented, with one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: doing so would only dilute each person's influence over funding decisions. Another obvious downside is the loss of division of labour and specialisation, i.e. you would empower people without the time, inclination or ability to lead or make key decisions.

"Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it." Evidently from the comments I'm not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left including: an emphasis on  identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; funding decisions should be democratised; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment. 

If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn't do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don't think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don't want more right wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good.

I see. I wasn't being provocative with my question, I just didn't get it.

You should probably take out the claim that FLI offered 100k to a neo-Nazi group, as it doesn't seem to be true.

I'm somewhat confused as to why this is controversial. Why is it news that FLI didn't make a grant to a far right org?

I appreciate the effort you have taken to write this. However, like other commenters I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism, which are already a live option for people with this type of outlook and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.

  1. You begin by citing the Cowen quote that "EAs couldn't see the existential risk to FTX even though they focus on existential risk". I think this is one of the more daft points made by  a serious person on the FTX crash. Although the words 'existential risk' are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn't enough attention to existential risks to FTX and the implications this would have for EA.  In contrast, EAs have put umpteen person hours into assessing existential risks to humanity and the epistemic standards used to do that are completely different to those used to assess FTX. 
  2. You cite research purporting to show that diversity of some form is good for collective epistemics and general performance. I haven't read the book that you cite, but I have looked into some of this literature, and as one might expect for a topic that is so politically charged, a lot of the literature is not good, and some of the literature actually points in the opposite direction, even though it is career suicide to criticise diversity, and there are likely personal costs even for me discussing counter-arguments here.  For example, this paper suggests that group performance is mainly determined by the individual intelligence of the group members not by things like gender diversity. This paper lists various costs of team diversity that are bad for collective dynamics. You say that diversity "essentially along all dimensions" is good for epistemics. This is the sort of claim that sounds good, but also seems to be clearly false. I seldom see people who make this argument suggest that we need more Trump supporters, religious fundamentalists, homophobes or people without formal education in order to improve our performance as a community. These are all huge chunks of the national/global community but also massively underrepresented in EA. There are lots of communities that are much more diverse than EA but which also seem to have far worse epistemics than EA. Examples include Catholicism, Trumpism, environmentalism, support of Bolsonaro/Modi etc.
  3. Relatedly, I think value alignment is very important. I have worked in organisations with a mix of EA and non EA people and it definitely made things much harder than if everyone were aligned, holding other things equal. On one level, it is not surprising that a movement trying to achieve something would agree not just at a very abstract level, but also about many concrete things about the world. If I think that stopping AI progress is good and you think it is bad, it is going to be much harder (though not impossible, per moral trade) for us to achieve things in the world. Same for speeding up progress in virus synthesis. The 80,000 Hours articles on goal directed groups are very good on this. 
  4. I don't agree that EA is hostile to criticism. In fact it seems unusually open to criticism and to rational discussion of ideas, rather than dismissing them on the basis of vibe/mood affiliation/political amenability. Aside from the controversial Cremer and Kemp case (and they didn't publish pseudonymously), what are the major critiques that have been presented pseudonymously or have caused serious personal consequences for the critics? By your definition, I think my critique of GiveWell counts as deep, but I have been rewarded for it because people thought the arguments were good. To stress, Hauke's and my claim was that most of the money EA has spent has been highly suboptimal.
  5. You say "For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people." This isn't just an assumption of EA, but a central finding of psychological science that things that are usually classed as intellectual abilities are strongly correlated - the g factor. eg maths ability is correlated with social science ability, and english literature ability etc. 
  6. I just don't think it is true that we align well with the interests of tech billionaires. We've managed to persuade two billionaires of EA, and one believed in EA before he became a billionaire. The vast majority of billionaires evidently don't buy it and go off and do their own thing, mainly donating to things that sound good in their country, to climate change, or not donating at all. Longtermist EAs would like lots more money to be spent on AI alignment, on slowing down AI progress, on slowing down progress in virology or increasing spending on counter-measures, and on preventing major wars. I don't see how any of these things promise to benefit tech founders as a particular constituency in any meaningful way. That being said, I agree that there is a problem with rich people becoming spokespeople for the community or overly determining what gets done, and we need far better systems to protect against that in future. E.g. FTX suddenly deciding to do all this political stuff was a big break from previous wisdom and wasn't questioned enough.
  7. On a personal note, I get that I am a non-expert in climate, and so am wide open to criticism as an interloper (though I have published a paper on climate change). But then it is also true that getting climate people to think in EA terms is very difficult. Also, the view I recently outlined is basically in line with all of climate economics. In that sense the view I hold - which I think is widely held in longtermist EA - is in line with one expert consensus. Indeed, it is striking that this is the one group that actually tries to quantify the aggregate costs of climate change. I also don't think there are any areas where I disagree with the line taken by the IPCC, which is supposed to express the expert consensus on climate. The view that 4ºC is going to kill everyone is one held by some activists and a small number of scientists. Either way, we need to explain why we are ignoring all the climate economists and listening to Rockstrom/Lenton instead. On planetary boundaries, as far as I know, I am the only EA to have criticised planetary boundaries, and I don't dismiss the framework in passing but criticise it at considerable length. The reviewer I had for that section is a prof and strongly agreed with me.
  8. Differential tech progress has been subject to peer review. The Bostrom articles on it are peer reviewed. 
  9. The implications of democratising EA are mind-boggling. Suppose that Open Phil's spending decisions are made democratically by EAs. This would put EAs in charge of ~$10bn. We'd then need to decide who counts as an EA. Because so much money would be on the table, lots of people whom we wouldn't class as EAs would want a say, and it would be undemocratic to exclude them (I assume). So, the 'EA franchise' would expand to anyone who wants a say(?). I don't know where the money would end up after all this, but it's fair to say that money spent on reducing engineered pandemics, AI and farm animal welfare would fall from the current pitiful sum to close to zero.
  10. You say that worker self-management has been proven to be better for mission-oriented work than top-down rule. This is clearly false. There is a tiny pocket of worker cooperatives (e.g. in the Basque region) that have been fairly successful. But almost all companies are run oligarchically, in a top-down fashion, by boards or leadership groups.

Overall, we need to learn hard lessons from the FTX debacle. But thus far, the collapse has mainly been used to argue for things that are completely unrelated to FTX, and mainly to advance an agenda that has been disfavoured in EA so far, and with good reason. For Cowen, this was neoliberal progress; here it is left wing environmentalism.

What do you make of the 'impatient philanthropy' argument? Do you think EAs should be borrowing to spend on AI safety?

The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There's no assumption that markets are a "mystical source of information", just that if you bet against them you almost always lose.

There's also a nice "put your money where your mouth is" takeaway from the post, which AFAIK few people with short timelines are acting on.

  • I'm not sure they're middle of the road on civilisational vulnerability. It would be pretty surprising if extreme weather events made a big difference to the overall picture. For the kinds of extreme weather events one sees in the literature, they're just not a big influence on global GDP. How bad would a hurricane or flood have to be to push things from 'counterfactual GDP reduction of 5%' to civilisational collapse?
  • I don't think they fully discount/ignore the possibility of catastrophe at 3/4ºC. In part this is just an outcome of the models and of the scientific literature. There are no impacts that come close to catastrophe in the scientific literature for 3/4ºC. I agree they miss some tipping points, but looking at the scientific literature on that, it's hard to see how they would make a big difference to the overall picture.
  • I haven't read those papers and unfortunately don't have time to do so now. My argument there doesn't rely on one study but on a range of studies in the literature for different warm periods. The Permian was a very extreme and unusual case: its massive land-based extinctions were driven by the release of halogens, which is not relevant to future climate change. Also, both the Permian and the PETM were extremely hot relative to what we now seem to be in for (17ºC vs 2.5ºC).
  • I'm not sure I see how I am not engaging with you on planetary boundaries. I thought we were disagreeing about whether to put weight on planetary boundaries, and I was arguing that the boundaries just seem made up. Using EV may have its own problems but that doesn't make planetary boundaries valid. 
  • I don't really see how the world now is more vulnerable to any form of weather event in any respect than it has been at any other point in human history. Society routinely absorbs large bad weather events; they don't even cause local civilisational collapse any more (in middle and high income countries). Deaths from weather disasters have declined dramatically over the last 100 or so years, which is pretty strong evidence that societal resilience is increasing, not decreasing. In the pre-industrial period, all countries suffered turmoil and hunger due to cold and droughts. This doesn't happen any more in countries that are sufficiently wealthy. Many countries now suffer drought, almost entirely due to implicit subsidies for agricultural water consumption. It is very hard to see how this could lead to, e.g., collapse in California or Spain.
  • Can you set out an example of a cascading causal process that would lead to a catastrophe? 
  • I'm not sure that there is some meta-level epistemic disagreement; I think we just disagree about what the evidence says about the impacts of climate change. In 2016, I was much more worried than the average FHI person about climate change, but after looking at the impacts literature and recent changes in likely emissions, I updated towards climate change being a relatively minor risk. Compare bio, for instance: after reading about trends in gene synthesis technologies and costs, it takes about 30 minutes to see how it poses a major global catastrophic risk in the coming decades. I've been researching climate change for six years and struggle to see the same. I am not being facetious here; this is my honest take.