
More EAs should give rationalists a chance

My first impression of rationalists came from an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren't taking the project of doing a large amount of good seriously, weren't reasoning carefully (as opposed to just parroting rationalist memes), and weren't any better at winning than the standard EA types that I felt were more 'my crowd'.

I now think that I just met the wrong rationalists early on. The rationalists that I most admire:

  • Care deeply about their values
  • Are careful reasoners, and actually want to work out what is true
  • Are able to disentangle their views from themselves, making meaningful conversations much more accessible
  • Are willing to seriously consider weird views that run against their current views

Calling yourself a rationalist or an EA is a very cheap signal, and I made an error early on (insensitivity to small sample sizes, etc.) in dismissing their community. Whilst there is still some stuff that I would change, I think that the median EA could move several steps in a ‘rationalist’ direction.

Having a rationalist/scout mindset + caring a lot about impact are pretty correlated with me finding someone promising. Neither is essential to having a lot of impact, but I am starting to think that EAs are doing the altruism (A) part of EA super well and the rationalists are doing the effective (E) part of EA super well.

My go-to resources are probably:

  • The Scout Mindset - Julia Galef
  • The Codex - Scott Alexander
  • The Sequences Highlights - Eliezer Yudkowsky/LessWrong
  • The LessWrong highlights

I adjust upwards on EAs who haven't come from excellent groups

I spend a substantial amount of my time interacting with community builders and doing things that look like community building.

It's pretty hard to get a sense of someone's values, epistemics, agency, etc. by looking at their CV. A lot of my impression of people that are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.

I worry that some people happened to be introduced to a good set of ideas first, and that others then use this as a proxy for how good their reasoning skills are. On the other hand, it's pretty easy to be in an EA group where people haven't thought hard about different cause areas/interventions/... and come away with the group's mean take, which isn't very good, despite reasoning relatively well.

When I speak to EAs I haven't met before, I try extra hard to get a sense of why they think x and how reasonable a take that is given their environment. This sometimes means I am underwhelmed by people who come from excellent EA groups, and impressed by people who come from mediocre ones.

You end up winning more Caleb points if your previous EA environment was 'bad' in some sense, all else equal.

(I don't defend here why I think a lot of the causal arrow points from the quality of the EA environment to the quality of the EA - I may write something on this another time.)

It's all about the Caleb points man

‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one

I sometimes see criticisms along the lines of:

  • EA is too elitist
  • EA is too focussed on exceptionally smart people

I do think that you can have a very outsized impact even if you're not exceptionally smart, dedicated, driven, etc. However, I think that from some perspectives focussing on outliery talent seems to be the right move.

A few quick claims that push towards focusing on attracting outliers:

  • The main problems that we have are technical in nature (particularly AI safety)
  • Most progress on technical problems historically seems to be attributable to a surprisingly small set of the total people working on the problem
  • We currently don't have a large fraction of the brightest minds working on what I see as the most important problems

If you are more interested in neartermist cause areas, I think it's reasonable to place less emphasis on finding exceptionally smart people. Whilst I do think that people with very outliery traits have a better shot at very outliery impact, I don't think that there is as much of an advantage for exceptionally smart people over very smart people.

(So if you can get a lot of pretty smart people for the price of one exceptionally smart person then it seems more likely to be worth it.)

This seems mostly true to me by observation, but I also have some intuitions that motivate this claim:

  • AIS is a more novel problem than most neartermist causes; there's a lot of work going into getting more surface area on the problem, as opposed to moving down a well-defined path.
  • Being more novel also makes the problem more first-mover-y, so it seems important to start with a high density of good people to push it onto good trajectories.
  • The resources for getting up to speed on the latest work seem less good than in more established fields.

(crosspost of a comment on imposter syndrome that I sometimes refer to)

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful. 

So one strategy is to just try to send lots of information into the world that might help the community work out whether I can be useful (by doing my job, taking actions in the world, writing posts, talking to people ...) and trust the EA community to be tracking some of the right things.

I find it helpful to sometimes be in a mindset of "helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, therefore I am winning (even if I am locally not winning)."