Originally this was a thread for coordinating conversations at EA Global 2019. In the end, only I ended up using the thread for top-level comments, and in fact it turned out that a lot of the value was getting to quickly hash out some ideas that I hadn't felt ready to turn into fully fledged posts.
I'll probably continue using this for EA-related shortform posts, as a parallel to my LessWrong shortform feed.
Mid-level EA communities, and cultivating the skill of thinking
I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you've read all the introductory content, but before you're ready to tackle anything really ambitious... what should you do, and what should your local EA community encourage people to do?
My sense is that grassroots EA groups default to "discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory."
I have varying opinions on those things, but even if they were all good ideas... they leave an unsolved problem where there isn't a very good "bread and butter" activity that you can do repeatedly, that continues to be interesting after you've learned the basics.
My current best guess (admittedly untested) is that Mid-Level EAs and Mid-Level EA Communities should focus on practicing thinking. And a corresponding bottleneck is something like "figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it's okay to not do a very good job because you're still learning."
I have some preliminary thoughts on how to go about this, including two hypotheses that seem interesting.
I'm interested in chatting with local community organizers about it, and with established researchers who have ideas about how to make this the most productive version of itself.
Funny: I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they're supposed to do.
So I actually draw an important distinction within "mid-level EAs", where there are three stages:
"The beginning of the Middle" – once you've read all the basics of EA, the thing you should do is... read more things about EA. There's a lot to read. Stand on the shoulders of giants.
"The Middle of the Middle" – ????
"The End of the Middle" – Figure out what to do, and start doing it (where "it" is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong.
(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative outcome.)
So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA's current scale, there are hundreds (maybe thousands) of people who don't yet have the right meta-skills to do that.
Ah.
This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they're looking for someone to manage them.
Other people are happy to do things on their own, but they don't have the necessary skills and experience, so they will end up doing something that's useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge.
Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.
I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.
Meanwhile... "sufficiently advanced thinking looks like doing", or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.
I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn't actually rise to the level of "thinking for real." Thinking for real is real work.
Hmm, it's not so much the classic rationalist trait of overthinking that I'm concerned about. It's more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of "practicing thinking". If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can't let your brain know that that's what you're trying to achieve.
Second, "thinking for real" sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that tells you what thinking work needs to be done, decreasing the chance that you'll waste time producing research which looks nice and impressive and all that, but in the end doesn't help anyone improve the world.
I guess if you come up with technology that allows people to plug into the world-saving-machine at the level of "doing research-assistant-kind-of-work for other people who know what they're doing" and gradually work their way up to "being one of the people who know what they're doing", that would make this work.
You wouldn't be "practicing thinking"; you could easily convince your brain that you're actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you're working on is for real.
And, by the same token, you'd be working on something that (someone believes) needs to be done. And maybe sometimes you'd realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here's why, etc.—and that's how you'd gradually grow to be one of the people who know what they're doing.
So, yeah, proceed on that, I guess.
Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:
(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)
I'll take your invitation to treat this as an open thread (I'm not going to EAG).
Why not tackle less ambitious goals?
What goals, though?
How about volunteering for an EA org?
Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn't scale. There are communities and movements designed such that there's lots of volunteer work to be done – enough to provide 1000 volunteer jobs. But I don't think EA is one of them.
I've heard a few people from orgs express frustration that people come to them wanting to volunteer, but this feels less like the orgs receive a benefit, and more like the org is creating a training program (at cost to themselves) to provide a benefit to the volunteers.
I agree that EA does not have 1000 volunteer jobs. However, here is a list of some possibilities. I know ALLFED could still effectively utilize more volunteers.
My claim is just that "volunteer at an org" is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn't to say volunteers aren't valuable, or that many EAs shouldn't explore that as an option, or that better coordination tools to improve the situation shouldn't be built.
But I am a bit more pessimistic about it. The last time I checked, most attempts where someone said "huh, it looks like there should be all this free labor available from passionate people, can't we connect these people with orgs that need volunteers?" and tried to build some kind of tool to help with that ran into the same finding: most people aren't actually very good at volunteering, and getting anything done requires something more domain-specific and effortful.
My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.
(Again, not arguing that ALLFED shouldn't look for volunteers or that EAs shouldn't volunteer at ALLFED, esp. if my experience doesn't match yours. I'd encourage anyone reading this who's looking for projects to give ALLFED volunteering a look.)
The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they're either young and lacking some core "figure out how to be helpful and actually help" skills, or they're older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.
I think the *End* of the Middle of the funnel is more where "volunteer at EA orgs" makes sense. And people in the Middle of the Middle who think they have the "figure out how to be helpful and help" property should do so if they're self-motivated to. (If they're not self-motivated, they're probably not going to be a good volunteer.)
Competition in the EA Sphere
A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have enough capacity (financial, coordinational, and human talent) that this is less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.
I'm interested in chatting with people about the nuts and bolts of how to apply this.
A few reasons I think competition is good:
There are some special caveats here:
Integrity, Accountability and Group Rationality
I think there are particular reasons that EA should strive, not just to have exceptionally high integrity, but exceptionally high understanding of how integrity works.
Some background reading for my current thoughts includes habryka's post on Integrity and my own comment here on competition.
What about Paul's Integrity for Consequentialists?
Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.
Grantmaking and Vetting
I think EA is vetting constrained. It's likely that I'll be involved with a new experimental grant allocation process. There are a few key ingredients here that are worth discussing:
Hey Raemon - I run the EA Grants program at CEA. I'd be happy to chat! Email me at nicole.ross@centreforeffectivealtruism.org if you want to arrange a time.
I won't be at EAG but I'm in Berkeley for a week or so and would love to chat about this.
I'd offer that whatever you can do to make it possible to iterate on your grantmaking loop quickly will be useful. Perhaps start with smaller grants on a monthly or even weekly cycle, run a few rounds there, and then scale up. Don't try to make it near-perfect from the start; instead, try to make it something that can become near-perfect through iteration and improvement.
I’m not yet sure that I’ll be doing this for more than 3 months, so I think there’s a bit more value in focusing on generating value within that time.
Gotcha. I wonder whether it could create substantially more impact if you ran it over the long term yourself, or set it up well for someone else to run long term. Obviously I have no context on the project or your goals, but I've seen cases where people do a short-term project aiming to create impact and, in the end, feel they could've created much more impact by doing the thing in a more ongoing manner. So this note may or may not be relevant depending on the project and your goals :)
Notes from a "mini talk" I gave to a couple people at EA Global.
Local EA groups (and orgs, for that matter) need leadership, and membranes.
Membranes let you control who is part of a community, so you can cultivate a particular culture within that community. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.
Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it's worth people's effort to overcome the barriers to entry, and/or to maintain those barriers.
Membranes
A membrane is a semi-permeable barrier: things can enter and leave, but it's a bit hard to get in and a bit hard to get out. This allows whatever is inside to store negentropy, which lets it do more interesting things than its surroundings.
An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than the beginning-of-the-funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.
(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)
What happens inside the membrane?
Membranes can work via two mechanisms: being careful about who you let in, or giving feedback to (and sometimes expelling) people who are already inside.
The first option is easier. Giving feedback and expelling people is quite costly, and painful both for the person being expelled from the group (who may have friends and roots there) and for the person doing the expelling (which may involve a stressful fight, with people second-guessing you).
If you're much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.
On the other hand, if you put up lots of barriers, you may find your community stagnating. There may also be false positives: people who seemed not super promising, but who would have been fine if you'd given them a chance to grow.