Raemon

Comments

Proposed Longtermist Flag

Oh man, this is pretty cool. I actually like the fact that it's sort of jagged and crazy.

What I learned from working at GiveWell

This was among the most important things I read recently, thanks! (Mostly via reminding me "geez holy hell it's really hard to know things.")

Mentorship, Management, and Mysterious Old Wizards

That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting-mentorship is frictionless" seems like an obvious mechanism to improve things.

Dealing with Network Constraints (My Model of EA Careers)

https://forum.effectivealtruism.org/posts/JJuEKwRm3oDC3qce7/mentorship-management-and-mysterious-old-wizards

AMA: Elizabeth Edwards-Appell, former State Representative

In another comment you mention:

(One example would be the high levels of self-censorship required.)

I'm curious what mechanism underlies the "required-ness", i.e. which of the following (or others) are most at play:

  • you'd get voted out of office
  • you'd lose support from your political allies that you need to accomplish anything
  • there are costs imposed directly on you/people close to you (e.g. stress)

A related thing I'm wondering is whether you considered anything like "going out with a bang", where you tried... just not self-censoring, and... probably losing the next election and some supporters in the process, but also heaving some rocks through the Overton window on your way out.

(I can think of a few political or personal reasons why that might not actually make sense, but I'm suddenly curious why more politicians don't just say "Screw it, I'm saying what I really think" shortly before retiring.)

Morality as "Coordination" vs "Altruism"

The issue isn't just the conflation, but a missing gear about how the two relate.

The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.

Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.

In particular, I was concretely assuming "torturing people to death is generally worse than lying." But that's specifically comparing within alike circles. It is now quite plausible to me that lying (or even mild dishonesty) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others whom I don't have the ability to coordinate with. (Or it might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first.)

An argument for keeping open the option of earning to save

Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation from yours – namely, I want young EAs to mostly focus on financial runway so they can make risky career moves once they're better oriented.)

tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%. 

You have more than one goal, and that's fine

I recently chatted with someone who said they've been part of ~5 communities over their life, and that all but one of them were more like a "real community" than the rationalists. So maybe there's plenty of good stuff out there and I've just somehow filtered it out of my life.

Dealing with Network Constraints (My Model of EA Careers)

Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation."

I wrote this Question Post to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.

Dealing with Network Constraints (My Model of EA Careers)

Currently, there are only so many people who are looking to make friends, hire at organizations, or start small-scrappy-projects together.

I think most EA orgs started out as a small scrappy project that initially hired people they knew well. (I think early-stage GiveWell, 80k, CEA, AI Impacts, MIRI, CFAR, and others almost all started out that way – some of them still mostly hire people they know well within the network, and some may have standardized hiring practices by now.)

I personally moved to the Bay about 2 years ago and shortly thereafter joined the LessWrong team, which at the time was just two people, and is now five. I can speak more to this example. At the time, it mattered that Oliver Habryka and Ben Pace already knew me well and had a decent sense of my capabilities. I joined while it was still more like "a couple guys building something in a garage" than an official organization. By now it has some official structure.

LessWrong has hired roughly one person a year for the past 3 years.

I think "median EA" might be a bit of a misnomer. In the case of LessWrong, we're filtering a bit more on "rationalists" than on EAs (the distinction is a bit blurry in the Bay). "Median" might be selling us a bit short. LW team members might be somewhere between 60-90th percentile. (heh, I notice I feel uncomfortable pinning it down more quantitatively than that). But it's not like we're 99th or 99.9th percentile, when it comes to overall competence.

I think most of what separates LW team members (and, I predict, many other people who joined early-stage orgs when they first formed) is a) some baseline competence as working adults, and b) a lot of context about EA, rationality, and how to think about the surrounding ecosystem. That context involved lots of reading and discussion, but depended a lot on being able to talk to people in the network who had more experience.

Why is it rate limited?

As I said, LessWrong only hires maybe 1-2 people per year. There are only so many orgs, hiring at various rates.

There are also only so many people who are starting up new projects that seem reasonably promising. (Off the top of my head, maybe 5-30 existing EA orgs hiring 5-100 people a year).

One way to increase surface area is for newcomers to start new projects together, without relying on more experienced members. This can help them build valuable skills without drawing on existing network-surface-area. But a) there are only so many project ideas that are plausibly relevant, and b) newcomers with less context are likely to make mistakes because they don't understand some important background information, and eventually they'll need mentorship from more experienced EAs. Experienced EAs only have so much time to offer.
