
John G. Halstead

10728 karma · Joined Jan 2017

Bio

John Halstead - Independent researcher. Formerly Research Fellow at the Forethought Foundation; Head of Applied Research at Founders Pledge; and researcher at the Centre for Effective Altruism. DPhil in political philosophy from Oxford.

Comments (693)

I thought I would add that my current view here is less in line with my original comment and more in line with the OP. I think something like '9-5 EA' is probably the best approach for long-term impact. I've noticed that even if you're working really hard, it is usually difficult to squeeze that much more impact out of your work. This might in part depend on the type of person you are. Maybe some people can squeeze heroic amounts of effort out of themselves for long periods, but the vast majority of people cannot. One caveat might be if you have to push to finish an important project by some key deadline. But this would be a short-term situation, not a long-term approach.

Thanks, this is really useful. (I will try to go through this course, as well.)

I'm not sure the talk has it quite right, though. My take is that on the most popular definitions of alignment and capabilities, they are partly conceptually the same, depending on which intentions we are meant to be aligning with. So it's not the case that there is an 'alignment externality' of a capabilities improvement, but rather that some alignment improvements are capabilities improvements, by definition.

Thanks a lot for this!

To take one of your examples - faster and better chips (or more compute generally). It seems like this does actually improve alignment on perhaps the most popular definition of alignment as intent-alignment. In terms of answering questions from prompts, GPT-4 is more in line with the intentions of the user than GPT-3, and this is mainly due to more compute. I mean this in the sense that it produces answers that are better/more in line with what users want.

"So there’s a technical problem of “what innate drives / reward function (if any) would lead to AIs that are honest, cooperative, kind, etc.?” And this problem is not only currently unsolved, but almost nobody is working on it."

I'm not sure I agree that nobody is working on the problem of which reward functions make AIs that are honest and cooperative. For instance, leading AI companies seem to me to be trying to make LLMs that are honest, and cooperative with their users (e.g. not threatening them). In fact, this seems to be a major focus of these companies. Do you think I am missing something?

Thanks for these comments and for the discussion. I do genuinely appreciate discussing things with you - I appreciate the directness and willingness to engage. I also appreciate that given how direct we both are and how rude I sometimes am/seem on here, it can create tension, and that is mainly my fault here. 

I think my cruxes are:

I suppose my broader point is that EA is <1% of social movements 'trying to do social good' in some broad sense. >98% of the remainder is focused on broadly 'do what sounds good' vibes, with a left wing valence, i.e. work on climate change, rich country education, homelessness, identity politics type stuff etc. Over the years, I have seen many proposals to make EA more like the remainder, or even just make it exactly the same as the remainder, in the name of diversity or pluralism. 

This strikes me as an Orwellian use of those terms. I don't think it would in any way create more pluralism or diversity to have EA shift in the direction of doing that kind of stuff. EA offers a distinctive perspective and I think it is valuable to have that in the marketplace of ideas to actually provide a challenge to what remains the overwhelmingly dominant form of thinking about 'trying to do good'. 

I also view the >98% as very epistemically closed; I don't think they are a good advert for an epistemic promised land for EAs. 

There is a powerful social force that I do not understand which means that every organisation that is not explicitly right wing eventually becomes left wing, and I have seen that dynamic at play repeatedly over the last 13 years, and I would view this as the latest example. EA is not focused on areas I would view as particularly left or right valenced at the moment. 

I am also very opposed to efforts to make hiring decisions according to demographic considerations. I think the instrumental considerations enumerated for doing this are usually weak on closer examination, and I think the commonsense idea that people who do best on work-related hiring criteria will be best at their job is fundamentally correct, and the reasons it is fundamentally correct are obvious. The idea that implicit bias against demographic groups could be driving demographic skews in EA also strikes me as extremely implausible. It is violently at odds with my lived experience of being on hiring panels, or knowing about them at other organisations, where there is a very strong explicit bias against the typical EA demographic. The idea that implicit bias could be strong enough to overcome this is not credible.

I am aware that I am setting my precious social capital alight in making these arguments (which is, I think, a lesson in itself)

I was thinking of all of the assumptions, i.e. about the severity of the winter and the adaptive response. 

Sorry if I'm being thick, but what do you mean by 'eating the seed corn' here?

Sorry but I won't rescind my comment. I don't know whether it is conscious lack of transparency or not, but it is not transparent, in my opinion. This is also indicated by Quinn above, and in Larks' comment. The dialectic on these posts goes:

  1. A categorical statement is made that 'diversity is a strength' or 'diversity of all kinds is always good'. 
  2. I or someone else presents a counterexample - e.g. that there are lots of homophobes, nationalists, Trump supporters, etc. who are underrepresented in EA.
  3. The OP concedes in the comments that diversity of some kinds is sometimes bad, or doesn't respond. 
  4. A new post is released some time later repeating 1. 

I have made point 2 to you several times on previous posts, but in this post you again make a categorical claim that 'diversity is a strength' and that we need to move towards greater pluralism, when you actually endorse 'diversity is sometimes a strength, sometimes a weakness'. Like, in this post you say we need to take on 'non-Western perspectives', but among very popular non-Western perspectives are homophobia and the idea that China should invade Taiwan, which you immediately disavow in the comments.

"But you here throw the baby out with the bathwater; it's a deeply unsatisfying solution when we have a good reason to have pluralism of method, vision of the future, epistemology and also greater diversities of many different factors to suggest that just because it may be possible to justify the inclusion of those you don't like on this logic, then we have to throw out the entire argument full stop."

I think the issue here is that it is incumbent upon you to provide criteria for how much diversity we want, otherwise your post has no substantive content because everyone already agrees that some forms of diversity are good and some are bad. The main post says/strongly gives the impression that more diversity of all kinds is always good because there is something about diversity itself that is good. In the comments, you walk back from this position. 

Correct me if I am wrong, but my understanding is that diversity is being used to defend the proposition that EA should engage in non-merit-based hiring that is biased with respect to race, gender, ability, nation, and socioeconomic status. 

"all of this entails greater geographic, socio-economic, cultural, gender, racial and ability diversity, both in terms of those who may have interest in being a part of the community, and those whom the community may learn from."

I think this would be unfair, and strongly disagree that this would 'create a culture where a genuine proliferation of evidence-based insights can occur'. The diversity considerations you mention in the post also cannot defend it since they cannot distinguish good and bad forms of diversity. 

My claim was "Folding EA into extinction rebellion, which as I understand is the main aim of heterodox CSER-type approaches in EA". I would guess that you and (eg) Kemp would be happy with this, for instance. CSER researchers like Dasgupta have collaborated on papers with Paul Ehrlich, who I think would also endorse this vibe, so I would guess Dasgupta is at least sympathetic. I basically think what I said is broadly correct, and I don't think there is much reason for me to correct the record. I would actually be interested in some sort of statement/poll from different groups in x-risk studies about their beliefs about the world.

In the post, you say "Much of existential risk reduction is political[19], so diverse and broad based coalitions can give us a useful political basis for action. Many different types of existential risk may have similar political causes[20], and a pluralistic community may open up new avenues for collaboration with those whom we each have common cause[21]." It does seem strange not to say in the main post that apparently almost all EAs would probably vote Labour or Democrat, so clearly something is amiss here by your own lights.

I do find the emphasis on peer review and expertise hard to square with the radical democratic view, and I don't think that is a needle that can be threaded. If the majority were climate sceptics and were in favour of repealing all climate policy, it seems like you would have to be in favour of that given your radical democratic views but opposed to it because it is violently at odds with peer reviewed science. 

My understanding (appreciating I may be somewhat biased on this), is that the demand for greater expertise comes from what you and others perceive to be the lack of deference to peer reviewed science by EAs working on climate change (which I think isn't true fwiw because the 'standard EA view' is in line with the expert consensus on climate change) and the fact that there is not much peer reviewed work in AI and to a lesser extent bio (I'm sympathetic on AI). 

That aside, yeah, I have somewhat conflicted and not fully worked-out thoughts on peer review in EA.

  • As a statement of how I view things, I would generally be more inclined to trust a report with paid expert reviewers by Open Phil, or a blogpost or report by someone like Scott Alexander, Carl Shulman or Toby Ord, than the median peer reviewed study on a topic area. I think who writes something matters a lot and explains a lot of the variation in quality, independent of the fora in which something is published.
  • I generally think peer review is a bit of a mess compared to what we might hope for the epistemic foundation of modern society. Published definitely doesn't mean true. Most published research is false. Reviewers don't usually check the maths going into a paper. Political bias and seniority influence publishing decisions. The bias is worse in the most prestigious journals. Some fields are far worse than others and some should be completely ignored (eg continental philosophy, nutritional epidemiology). 'Experts' who know a lot of factual information on a topic area can systematically err because they have bad epistemics (witness the controversy about the causes of the Holocene megafauna extinction).
  • That being said, I think the median peer reviewed study is usually better than the typical EA Forum or LessWrong blogpost. Given how thin the literature on AI is, the marginal value of yet another blogpost that isn't related to an established published literature seems low, while the marginal value of more peer reviewed work in AI seems high. But I also think the marginal value of more Open Phil-style reports with paid expert reviewers and published reviewer reports would probably be higher than peer review, given how flawed peer review is and how much better the incentives are for the Open Phil-type approach.

I don't know whether this is getting too into the weeds on realism, but the claim that the US national security establishment is less competent since the end of the Cold War seems straightforwardly incompatible with realism as a theory anyway, since realism assumes that states rationally pursue their national interest. I have found this in interviews with Mearsheimer, where he talks about 'Russia's' only rational option given US policy towards Ukraine, but then says that the US is not acting in its own national interest. Why can't Russia also be failing to act in its own national interest?

Once you grant that the US isn't pursuing its national interest, aren't you down the road to a public choice account, not a realist account?

I do think there is often a tension in what you write on this front. On the one hand, you seem to support radical democratic control of (every?) decision made by anyone anywhere. And on the other hand, you think we should all defer to experts. 
