I support people in moving forward on meaningful and important projects by listening to their wisdom instead of their worry. When you know what you should do but aren't sure why you aren't doing it, I can help you clear the way.
You may be coming across my profile because of my involvement with the EA hiring agency High Impact Recruitment (HIRE). If you're looking for hiring help, please contact Neil Ferro (firstname.lastname@example.org). He can help!
My coaching training and style cross a few areas:
-Modern neuroscience of how to be aware of and change your mind, using tools like self-observation, body awareness, mindfulness, and self-hypnosis.
-Ancient wisdom principles common to Stoicism, Buddhism, and other traditions that focus on self-agency and integrity.
-Systems thinking: looking beyond yourself to see how you are connected to the rest of the world, so you can decide where to focus your attention to change the world for the better.
I offer 1:1 coaching and occasional trainings and group workshops! Check out my website: leemcc.com
Thanks for writing this up! As someone on the margins of Quaker culture in the Philadelphia area (a central hub of Quakerism in the US), I found this interesting. I think EA ideas and Quaker values have some degree of modern overlap that could be worth exploring further.
I have a comment on nonviolence:
You write: "They were non-violent, considered, calm but principled. They had beliefs that were well constructed, well founded and considered - and beliefs they held strongly to, but never violently."
Something to add, for context about Quakers -- while they are deeply committed to nonviolence, historically this was not always calm, and the meaning of nonviolence held and still holds a good deal of plurality within it.
For example, Quakers were ahead of the moral curve on some things like abolition of slavery.
However, we should be careful not to believe this was a monolithic view strongly held within the community from the outset. It took time and advocacy among Quakers to arrive at it. Benjamin Lay, one of the first Quakers in the States to speak out seriously against slavery, was extremely passionate: he shamed his fellow Quakers who held slaves and, as I have heard, was indeed kicked out of multiple meetings for his exhortations.
Benjamin Lay was nonviolent, but he was not calm. His tactics included spraying fake blood on fellow meeting-goers who held slaves and lying in front of the meeting house door so that people would need to step on him to exit.
He isn't representative of the median Quaker, of course, but I think in reasoning out the origins of Quaker abolitionism, it was likely that some tactics like Lay's were instrumental to getting the rest of the mass of the group to pay attention and start taking different kinds of actions. There are other Quakers in the tradition of taking "radical" action to spark change or push the group, such as Mary Dyer who was hanged after advocating for religious freedom.
I think the answer to the question "How did they best ensure they were on the right side of the moral curve?" involves at least some amount of "there were people with radical views who expressed them extremely strongly and vividly, in ways that caught attention and sparked conversation and action among the masses." AND, because this religion believes that God speaks through people, sometimes passionately, people with strong and consistent views were often taken seriously in their moral convictions.
That's a bit of a rough/crude take, and it is nowhere near complete. My bottom line is to encourage people not to conflate "nonviolent" with "dispassionate," "calm," or "measured" when it comes to historical Quakers.
I agree. In my advice giving, especially to college students and recent grads, I lean the same way. I find that people can develop a sense of the aptitudes they align with through experiences in a variety of realms (through non EA-related activities, clubs, jobs, school work), which increases the opportunities for data input and experimentation.
Thank you for this article, Pia!
I agree that as EA grows, there will be an increase in demand for recruiters (in-house at organizations, and external in different niches/cause areas). I have also observed that there are not many people with prior recruiting experience within the EA ecosystem, but many people have the skills and aptitudes that could make them a good recruiter.
Recruiting is also a relatively flexible career, with opportunities in many sectors all around the world!
I hope that people who are hiring for recruiters within EA make it explicit when they need someone with past experience, and when they're able to provide training/mentorship to support someone who could excel in the role with the right supports.
Travel 🌎🌴 +Longtermism ❌🖇🦠💣 Job!
Nonlinear is hiring an Executive Assistant/Operations Manager.
Salary $60-100K USD, depending on experience.
You might be a great fit if…
Click here for more and apply by July 21.
Nonlinear is looking at people with a variety of backgrounds and experiences. We hope you’ll err on the side of applying!
I am working on setting up an EA recruiting agency, and through dozens of conversations with people over the last month, I can confirm that there is a wide variety of opportunities and needs! To add to and expand on some already listed, I've made some bullet points below. These ideas aren't deeply considered yet, but they point to some possible opportunities based on what I've been hearing.
HR/ Hiring Support for Organizations
* Helping new and rapidly growing orgs improve their hiring.
* Resources for designing work tests, interview questions, screenings, etc, even for established orgs.
* Legal resources and consulting, especially around work visas and labor laws in different countries for global orgs.
Coaching/Support for Job Seekers
* More in-depth career coaching, including accountability and encouragement through a whole job search.
* Group workshops on career planning.
* More counseling and support for people in specific fields and interests.
Mentorship of Current Job Holders
* Potential for more professional organizations/mentorship for people who are hired into roles that are new to them. A lot of EA orgs are relatively small and may not have built-in mentorship from more senior people, so other ways to set up professional growth and development opportunities are needed.
Niche Recruitment. There’s some happening in each of these already - but certainly room for continued work and expansion into other fields.
* AI Safety, especially people with experience (there is some energy around this)
* Mid-career professionals from outside EA (this is getting off the ground with EA Pathfinder)
* People with Operations... and often more Administration experience (I don't know of specific recruitment or career coaching around this)
* Development Economists and other academic niches for research positions
On the topic of "Some factors in how people influence each other"
I've seen the word "rank" used to help talk about perceived power in other spaces. Saying "they have rank" means that their voice or perspective is given more weight and credence, which may have nothing to do with a person's title and everything to do with the "factors in how people influence each other".
The shorthand of "rank" can help explain the "shifting sands" experience of people changing status over time. Rank can be held through the transitions of switching jobs, and it is also always contextual to the people in the room.
For example, you might imagine that there's an organization where a longstanding volunteer has significantly more rank than the recently-hired new director of the organization. The volunteer knows who is who, how "things get done around here", has institutional memory, etc. The new director would be wise to notice that the volunteer has a lot of rank in the organization, try to get to know them, and get mentored by them, in order for the director to build their own rank in the organization (unless the director is coming in to change direction, in which case they'll need to bring or build their rank in other ways). If the director is blind to this and thinks they'll have a lot of power just based on their title, they're likely to have a rude awakening in several months when they hit conflicts or can't seem to get things done.
There's nothing wrong with rank. Many left-leaning spaces (especially social justice-oriented) and more broadly spaces made of "polite" people often try to pretend that "we're all equals" or "non-hierarchical" which is usually quite false. It can be very detrimental when rank isn't acknowledged or is taboo to talk about. When we pretend that things are flat, people can't see rank as well. People with less rank don't understand why they can't get things done and how to be more effective when they're supposed to be equal to everyone else. People with rank don't know they need to invite others to give their opinions and welcome contradiction, or somewhat conversely, don't step into being effective leaders for fear of dominating when actually many people might want them to speak up more!
So, thank you Julia for this post! Noticing these dynamics can help people navigate them much more easily, build leadership, and support better mentorship.
I'm Lee, one of the people now working on this project. If you come across this post and have thoughts or ideas about hiring in EA, reach out to me (the Calendly link is in my bio).
I'm looking to connect with people who:
-Have experience hiring within EA
-Have wishes or suggestions for ways you think EA orgs could do hiring better
-Have been a personal assistant
-Have hired or attempted to hire a personal assistant
-Have outside experience with hiring best practices that you think should be implemented more often
There's no such thing as talking to too many people right now, so don't hesitate to connect!
Thanks for this post! I am actively working on improving hiring for EA, especially to support longtermist projects, and appreciate this summary of some key best practices.
I’m currently focusing particularly on roles that are more common, such as personal assistants, where the hiring process is highly replicable. For these roles, the challenge is less “it is hard to find people who can do the job” and more the strong need for filtering. This might be simplified (in some of the ways you mention, like strong parameters or a quick quiz) or outsourced to an org (like the one I’m starting, possibly). While there are unlikely to be many "really-not-a-fit" candidates, the challenge seems to be sorting the "okay" from the "exceptional" and ensuring good work-style and workplace-culture alignment between the hire and their manager.
I currently have some of the same “topics for further investigation” written down as possible interventions to experiment with. For example, is there a need/demand for interviewing training, or for a bank of work sample questions, to make hiring processes easier and higher quality for organizations? Creating these processes from scratch, especially without relevant examples to work from, is challenging and time-consuming!
I’d be interested to talk with you more about your experience and see if there’s an opportunity to collaborate on developing these kinds of things in the next few months.
Thanks for this post! Seeing how many global challenges are, in a sense, alignment problems also brought me on board with understanding AI safety. Climate change and social media are good touchstones for what I think of as social/political alignment issues.
I don't know if this is exactly correct (so someone help me if I'm off base) but I find the AI alignment issue especially mentally complex to wrap my head around because it doesn't seem like we have good solutions yet at almost any level of technical or social/political alignment. Here's how I think of them in my head:
Technical alignment: Can we get an inconceivably smart optimizing machine to follow what we really want it to do in order to benefit us, rather than taking the letter of its programming down paths that would be bad? Can we look into the black box to know what the heck is going on, so that we can stop it if needed?
Social/political alignment: Can we as humans create and uphold fair and effective regulations on power that work in a globalized economy without a strong world government? Can we design laws and social norms that prevent catastrophe as more and more people and businesses gain access to increasingly powerful machines that do what they are asked (blow people up with enormous accuracy, if you want them to) and have unintended side effects (influencing elections through social media algorithms)?
With AI we have neither. It is somewhat as if runaway climate change were happening and we didn't yet understand that CO2 was part of the root cause.
The fact that a lot of x-risk issues share common threads in the social/political alignment sphere is interesting to me, and it is one of my main arguments for why EAers should pay more attention to climate change. It shares some global game-theory elements with other issues like pandemics and AI regulation, and work on x-risks as a whole may be stronger if there is a lot of cross-pollination of strategies and learnings, ESPECIALLY because climate change is less neglected and has seen some amount of progress in recent decades.