Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation."
I wrote this Question Post to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.
Currently, there are only so many people who are looking to make friends, or hire at organizations, or start small-scrappy-projects together.
I think most EA orgs started out as a small scrappy project that initially hired people they knew well. (I think early-stage Givewell, 80k, CEA, AI Impacts, MIRI, CFAR and others almost all started out that way – some of them still mostly hire people they know well within the network, some may have standardized hiring practices by now)
I personally moved to the Bay about 2 years ago and shortly thereafter joined the LessWrong team, which at the time was just two people, and is now five. I can speak more to this example. At the time, it mattered that Oliver Habryka and Ben Pace already knew me well and had a decent sense of my capabilities. I joined while it was still more like "a couple guys building something in a garage" than an official organization. By now it has some official structure.
LessWrong has hired roughly one person a year for the past 3 years.
I think "median EA" might be a bit of a misnomer. In the case of LessWrong, we're filtering a bit more on "rationalists" than on EAs (the distinction is a bit blurry in the Bay). "Median" might be selling us a bit short. LW team members might be somewhere between 60-90th percentile. (heh, I notice I feel uncomfortable pinning it down more quantitatively than that). But it's not like we're 99th or 99.9th percentile, when it comes to overall competence.
I think most of what separates LW team members (and, I predict, many other people who joined early-stage orgs when they first formed) is a) some baseline competence as working adults, and b) a lot of context about EA, rationality, and how to think about the surrounding ecosystem. Gaining that context involved lots of reading and discussion, but depended a lot on being able to talk to people in the network who had more experience.
Why is it rate limited?
As I said, LessWrong only hires maybe 1-2 people per year. There are only so many orgs, hiring at various rates.
There are also only so many people who are starting up new projects that seem reasonably promising. (Off the top of my head, maybe 5-30 existing EA orgs hiring 5-100 people a year).
One way to increase surface area is for newcomers to start new projects together, without relying on more experienced members. This can help them learn valuable life skills without drawing on existing network-surface-area. But, a) there are only so many project ideas that are plausibly relevant, and b) newcomers with less context are likely to make mistakes because they don't understand some important background information, and eventually they'll need to get some mentorship from more experienced EAs. Experienced EAs only have so much time to offer.
I expect to want to link this periodically. One thing I could use is clearer survey data about how often volunteering is useful, and when it is useful almost-entirely-for-PR reasons. People who are reluctant to think volunteering isn't useful will often say "My [favorite org] says they like volunteers!". (My background assumption is that their favorite org probably does like volunteers and needs to say so publicly, but primarily for long-term-keeping-people-engaged reasons. But, I haven't actually seen reliable data here.)
I just donated to the first lottery, but FYI I found it surprisingly hard to navigate back to it, or link others to it. It doesn't look like the lottery is linked from anywhere on the site and I had to search for this post to find the link again.
The book The Culture Map explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences.
In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who I would ask yes-or-no questions and they would answer "Yes" followed by an explanation of why the answer is no.)
Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures): instead of a higher-ranking person asking direct questions of lower-ranking people, the boss can ask a team of lower-ranked people to work together to submit a proposal, where "who exactly criticized which thing" is a bit obfuscated.
Tying in a bit with Healthy Competition:
I think it makes sense (given my understanding of the views of the folks at 80k) for them to focus the way they are. I expect research to go best when it follows the interests and assumptions of the researchers.
But, it seems quite reasonable, if people want advice for different background assumptions, to... just start doing that research, and publishing. I think career advice is a domain that can definitely benefit from having multiple people or orgs involved; it just needs someone to actually step up and do it.
Nod. I had "more experimentation" as part of what I meant to imply by "diversity of worldviews" but yeah it's good to have that spelled out.
This certainly seems like a viable option. I agree with the pros and cons described here, and think it'd make sense for local groups to decide which one made more sense.
My intuition is that the EA Funds are usually a much better opportunity, in terms of donation impact, than donor lotteries, in which one person does independent research themselves (instead of relying almost entirely on recommendations).
My background assumption is that it's important to grow the number of people who can work fulltime on grant evaluation.
Remember that GiveWell was originally just a few folks doing research in their spare time.