I did the summer fellowship last year and found it extremely useful for getting research experience, having space to think about x-risk questions with others who were also interested in them, and making very valuable connections. I also found the fellowship very enjoyable.

My experience with Atlas fellows (although there was substantial selection bias involved here) is that they're extremely high calibre.

I also think there's quite a lot of friction in getting LTFF funding; the main issue is that it takes quite a long time to come through. I think there are quite large benefits to being able to unilaterally decide to do a project and having the funding immediately available to do it.

Yeah this seems right.

I think I don't understand the point you're making with your last sentence. 

Yeah, I'm pretty sceptical of the judgement of experienced community builders on questions like the effect of different strategies on community epistemics. If I frame this as an intervention ("changing community building in x way will improve EA community epistemics"), I have a strong prior that it has no effect, because most interventions people try have no or small effects (see the famous graph of global health interventions).

I think the following are some examples of areas where you'd think people would have good intuitions about what works well, but they don't:

  • Parenting. We used to systematically abuse children and think it was good for them (e.g. denying children the ability to see their parents in hospital). There's a really interesting passage in Invisible China where the authors describe loving grandparents deeply damaging the grandchildren they care for by not giving them enough stimulation as infants. 
  • Education. It's really, really hard to find education interventions that work in rich countries. It's also interesting that in the US there's a lot of opposition from teachers to teaching phonics, despite it being one of the few rich-country education interventions with large effect sizes (although it's hard to judge how much of this is for self-interested reasons).
  • I think it's unclear how well you'd expect people to do on the economics examples I gave. I probably would have expected people to do well on cash transfers, since in fact lots of people do get cash transfers (e.g. pensions, child benefits, inheritance), and OK on the minimum wage, since at least some fraction of people have a sense of how the place they work for hires people. 
  • Psychotherapy. We only found treatments that worked for specific mental health conditions (rather than treatments to generally improve people's lives; I haven't read anything on that), other than mild-to-moderate depression, once we started doing RCTs. I'm most familiar with OCD treatment specifically, and the current best practice was only developed in the late 60s. 

I suppose I'm thinking of the example I gave, where someone I know doing selections for an important EA program didn't include questions about altruism because they thought that adverse selection effects were sufficiently bad. 

Maybe. I meant to pick examples where I thought the consensus of economists was clear (in my mind it's very clearly the consensus that having a low minimum wage has no employment effects). 

I completely stand by the minimum wage one. This was the standard model of how labour markets worked until (I think) the Shapiro-Stiglitz model, it's still the standard model for how input markets work, and if you're writing a general equilibrium model you'll probably still have wage = marginal product of labour. 

Meta-analyses find that the minimum wage doesn't increase unemployment until it reaches about 60% of the median wage, and most economists don't agree that even a $15-an-hour minimum wage would lead to substantial unemployment (although many are uncertain).

I think one of my critiques of this is that I'm very sceptical that strong conclusions should be drawn from any individual's experiences and those of their friends. My current view is that we just have limited evidence for any model of what good and bad community building looks like, and the way to move forward is to try a wide range of things and do what seems to be working well.

I think I mostly disagree with your third paragraph. The assumptions I see here are:

  1. Not being very truth-seeking with new people will either select for people who aren't very critical or will turn people who are critical into uncritical people 
  2. This will have second-order effects on the wider community's epistemics, specifically in the direction of fewer critiques of EA ideas

i.e. it's not obvious to me that it makes EA community epistemics worse in the sense that EAs make worse decisions as a result. 

Maybe these things are true, or maybe they aren't. My experience has not been this (for context, I've been doing uni group community building for 2 years): the sorts of people who get excited about EA ideas and get involved are very smart, curious people who are very good critical thinkers.

But in the spirit of the post, what I'd want to see are some regressions: for example, some measure of whether the average new EA at a uni group that doesn't community-build in a way that strongly promotes a kind of epistemic frankness is less critical of ideas in general than an appropriate reference class. 

For example, I currently don't talk about animal welfare when first talking to people about EA, because it's reliably the thing that puts the most people off. I think the first-order effect of this is very clear (more people come to stuff), and my guess is that there are ~no second-order effects. I want to see some systematic evidence that this has bad second-order effects before I give up the clearly positive first-order one. 

I agree that this second part isn't intuitive to most people. I was using "intuitive" somewhat loosely to mean "based on intuitions the person making the argument has".
