
Followup to Dealing with Network Constraints

Epistemic Status: I spent some time trying to check whether Mysterious Old Wizards were important, and reality did not clearly tell me one way or the other. But I still believe it, frequently reference it, and figured I should lay out the belief.


Three bottlenecks that the EA community faces – easily mistaken for each other, but with important differences:

Mentorship – People who help you learn skills, design your career, and gain important context about the EA landscape that helps you figure out how to apply those skills.

Management – Within a given org or existing hierarchy, someone who figures out what needs doing and who should do it. This can involve mentorship of employees who are either new, or need to train in new skills.

Finally, what I call Mysterious Old Wizards – Those who help awaken people's ambition and agency.

I mention all three concepts to avoid jargon confusion: Mysterious Old Wizards are slightly fungible with mentors and managers, but they are not the same thing. First, though, let's go over the other two.

Mentorship and Management Bottlenecks

Mentorship and Management are (hopefully) well understood. Right now, my guess is that management is the biggest bottleneck (with mentorship a close second). But this doesn't mean there are any obvious changes to make to our collective strategy.

The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

A top-tier mentor with lots of skills and context can help ensure someone thinks through lots of relevant considerations, or direct them in the most useful ways. A medium-tier mentor is more likely to be misguided about some things, or missing some context.

A newcomer to the field who's just read the obvious blogposts might be able to help a newer-comer learn what to read, but there's going to be a lot of stuff they don't know.

A lot of EA content is subtle and detailed, and easy to accidentally compress into something misleading. (For example, 80k might write a nuanced article saying "You should focus on talent gaps, not funding gaps", but this gets translated into "EA is talent constrained", and then people repeat that phrase without linking to the article, and then many people form an inaccurate belief that EA needs "pretty talented people", rather than "EA needs very specific talents that are missing.")

I think the way to grow mentorship and management capacity involves longterm planning and investment. There aren't free resources lying around that we can turn into mentorship/management. You can invest in mentoring people who grow into new mentors later, but it takes a while.

I think there is room to improve EA mentorship. But it's a fairly delicate problem that involves re-allocating resources that are currently being spent fairly carefully.

Mysterious Old Wizards

"I'm looking for someone to share in an adventure"

In The Hobbit, Bilbo Baggins wakes up one day to find Gandalf at his door, inviting him on a quest.

Gandalf does not teach Bilbo anything. He doesn't (much) lead the adventuring party, although he bails them out of trouble a few times. Instead, his role in the story is to believe in Bilbo when nobody else does, not even Bilbo himself. He gives Bilbo a bit of a prod, and then Bilbo realizes, mostly on his own, that he is capable of saving lives and outwitting dragons.

In canon Harry Potter, Dumbledore plays a somewhat similar role. In the first five books, Dumbledore doesn't teach Harry much. He doesn't even give him quests. But a couple times a year, he pops in to remind Harry that he cares about Harry and thinks he has potential.

Showing up and Believing in You

Some people seem to be born ambitious and agentic. Or at least, they gain it fairly early on in childhood.

But I know a fair number of people in EA who initially weren't ambitious, and then at some point became so. And anecdotally, a fair number of those people seem to have had some moment when Someone They Respected invited them out to lunch or something, sat them down and said "Hey, what you're working on – it's important. Keep doing it. Dream bigger than you currently are allowing yourself to dream."

This is often accompanied by some advice or mentorship. But I don't think that's always the active ingredient.

The core elements are:

  • The wizard is someone you respect. They clearly have skills, competence or demonstrated success such that you actually take their judgment more seriously than your own.
  • The wizard voluntarily takes time out of their day to contact you and sit down with you. It might only be for an hour. It's not just that you went to them and asked "do you believe in me?". They proactively did it, which lends it a costly signal of importance.
  • They either tell you that the things you are doing matter and that you should invest a lot more in doing them. Or, maybe, they tell you you're wasting your talents and should be doing something more important. Either way, they give you some sense of direction.

Network Bottlenecks

I think all three types of people are in short supply, and we have limited capacity to grow the resource. But one nice thing about mysterious old wizards is that they don't have to spend much time. Mentorship and management require ongoing investment. Mysterious Old Wizards mostly make you go off and do the work yourself.

In my model, you can only mysterious-old-wizard for people who respect you a lot. I wouldn't go around trying to do it frivolously. It ruins the signal if it turns into a formality that people expect you to do. But, I do think people should be doing it more on the margin.

Comments



The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

Slight push back here: I've seen plenty of folks who make good mentors but who wouldn't be doing much mentoring if not for systems in place to make that happen (they stop doing it once they aren't within whatever system was supporting their mentoring). This makes me think there's a large supply of good mentors who just aren't connected in ways that help them match with people to mentor.

This suggests a lot of the difficulty with having enough mentorship is that the best mentors need to be good not only at mentoring but also at starting the mentorship relationship. It seems, though, that plenty of people can be good mentors if someone does the matching part for them and creates the context between them and the mentees.

That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting-mentorship is frictionless" seems like an obvious mechanism to improve things.

This seems like a useful concept to have.

FWIW, I think something akin to a mysterious old wizard was relevant in my EA-aligned career journey. 

The way I've been phrasing it is that, once I got clear indications that I was likely to be offered a research role at an EA org (Convergence Analysis), I felt like I'd gotten a "stamp of approval" saying it now made sense for me to make independent posts to the Forum and LessWrong as well. I still felt uncertain about whether I'd have anything to say that was worth reading and wasn't just reinventing the wheel, whether I'd say it well, whether people would care, etc., but I felt much less uncertain than I had just before that point.[1] 

So maybe regular, formal job/project application processes already do a lot of the work we'd otherwise want mysterious old wizards to do? 

But I still think there's room for mysterious old wizards, as you suggest. I've tried to fill a mild (i.e., caveated) version of this role for a couple people myself.

[1] My data point is a bit murkier than I made it sound above, for reasons such as the following:

  • I had already started drafting a post that was related to the first post I ended up actually posting
    • Though I still think the indications of a likely job offer probably brought forward the date I started posting, and increased how many posts I ended up writing around then (shooting for a sequence right away, rather than just one exploratory post)
  • I had also been offered an operations role at a high-status EA org at the same time, which also provided some degree of "stamp of approval"
  • I also had various other "stamp of approval"-ish things around the same time, e.g. from conversations at an EAG and an EAGx
  • I'd set the goal to "get up to speed" to a certain extent, and then start posting things; if I recall correctly, I'd felt that the first part should last me most of 2019, and I indeed felt I'd basically completed it by the end of 2019. So that probably also caused me to switch into a mode of "alright, let's actually start posting now".

I realize this will sound crazy, but: 

  • Maybe bad mentors are even more important than good mentors

A good mentor will tell you smart things, you'll follow them, see good results, and maybe think, "Wow! I'm so lucky to have a good mentor. I'll ask them about X, Y and Z." This reinforces the mentor-mentee dependency cycle.

A bad mentor will tell you stupid things, you'll follow them, see terrible results, and hopefully think, "Wow! That mentor was terrible. I'll ask someone else about X, Y and Z." This frees up the bad mentor to "help" others.

A bad mentor who believes in you but provides terrible advice is perhaps a Mysterious Old Wizard. A more common situation is a loving, kind parent or wonderful friend who believes in you more than you believe in yourself!
