
People often set up 1:1 meetings at EA conferences, or with other people in their local communities, or as part of an EA residency, or through EA Pen Pals.

But I can't think of an easy way for people in EA to find 1:1 meetings on topics of their choice with other people who are also interested in those topics.

So I'm setting up this thread, though I wouldn't be surprised if I'm forgetting a better option or if someone else sets one up soon.


Suggested format for answers:

  • Who are you?
  • What are some things people can talk to you about? (e.g. your areas of experience/expertise)
  • What are things you'd like to talk to other people about? (e.g. things you want to learn)
  • How can people get in touch with you?


If you have additional suggestions for this question, please leave a comment! I'm happy to edit the original post, and I'm sure it can be optimized further.


Who are you?

I'm Aaron! I work at CEA! You can learn more in my Forum bio. (You should consider making a bio, too.)


People can talk to me about:

  • Writing a Forum post, or getting one reviewed before publication
  • Writing and editing other EA content
  • Finding work as a writer in an EA-aligned role
  • Communicating with the public about EA (though I may refer you to my colleagues in some cases)

I also read a lot of stuff for my job. If you want to find EA content on a specific topic, there's a good chance I can help.

Finally, I've had some recent professional success with Magic: the Gathering, and I routinely chat with people about that game, so you're welcome to ask questions about Magic if you're one of the many EA folks who moonlights as a planeswalker.


I'd like to talk to other people about:

  • I'm very interested in questions about EA community growth: How much of it should there be? How do we avoid the pitfalls of fast expansion? What groups of people should we be reaching that we aren't?
    • If you have a lot of experience introducing people to EA or certain concepts within it, I'd be curious to hear your takeaways.
  • Personal donation strategy. As a small donor, how much time should I spend considering my options? Are meta-features of my donation (e.g. publicizing it) possibly more impactful than the donation itself? Etc.
  • Communicating with the public about EA. How could the EA Newsletter be better? Who are some people who are doing a great job with EA social media content?


How to get in touch:

aaron@centreforeffectivealtruism.org (for questions you want answered in writing)

Calendly (if you want to set up a live meeting)

Who are you?

I'm Peter Hurford. Along with my friend Marcus A. Davis, I am a co-founder and co-Executive Director of Rethink Priorities. We do research, mainly on farmed animal welfare, wild animal welfare, longtermism (especially nuclear security), EA movement building, and mental health.


People can talk to me about:

I'd be interested in talking to anyone who works in areas related to our research or who might use our research. I'd also be interested in talking to folks considering a career in EA-related research.


Questions that personally interest me (may be different from RP's views as an org):

How could Rethink Priorities be better?

How do we best run an EA research organization?

How can we best grow the overall talent pool, skill, and output of EA researchers?

What should we research? How do we prioritize research?

How can we best learn with and interact with non-EA research institutions?

What can we do to best help nonhuman animals?

How can EAs positively shape political policies? Is this even a good idea?

How can we use the EA Survey, The Local Groups Survey, and related work to make a stronger EA community?

What should the EA community do with regard to longtermism?

What does the EA community lose by taking a longtermist attitude? Are there still benefits to shorttermist approaches of quick feedback loops, concrete measurement, etc.? Can we apply some of that to longtermism?

Are there any risks, other than unfriendly AI, that could completely and permanently eliminate humanity's future potential?

Is mental health an important cause area for EAs to look into? What does prioritizing mental health teach us about measuring benefits?

What is the best way to measure and compare cost-effectiveness across a variety of different domains?


Other interesting things about me you may want to talk about:

I spent about seven years working full-time while doing a significant amount of EA volunteering. I switched into full-time EA research work a lot more recently and I have thoughts about how to transition.

I've thought a lot about the mechanics of running an org operationally.

I've thought a lot about the mechanics of individual productivity.

I've worked from home for almost three years and have gotten pretty good at that.

I've been a full-time data scientist for five years and have thoughts about earning-to-give in tech.

I've been involved with EA since around 2012, so I've seen it change a lot.


How to get in touch:

Email peter@rethinkpriorities.org and I can try to either correspond via writing or set up a meeting.

Who are you?

Hey, my name is Saulius Šimčikas. I work as a researcher at Rethink Priorities, an EA think tank, focusing on topics related to animals.

People can talk to me about:

  • I'm happy to answer questions from people who are new to EA, especially about animal welfare but also about cause prioritization, EA principles, the community, etc.
  • I can help you find whatever effective animal advocacy research would be useful to you
  • Anything related to topics I've written about
  • If some animal-related research would help you to help animals, please tell me, and I or one of my colleagues may research it at some point.
  • Was there any research in the past that helped you to help animals?
  • Some people come to me if they want someone to give them an honest and frank opinion or feedback. I’m happy to be used this way.

I'd like to talk to other people about:

  • I’m always very interested in hearing professional and personal criticism. For example, please tell me if you think that:
    • I would have more impact if I was doing something different with my life,
    • I should write articles in a different way,
    • I should research different topics than I do,
    • I should do something differently when I communicate with people.
  • Theory of change for research. How can I make my research have more impact?
  • What should the future of animal advocacy look like?
  • Suggest research topics that you think could end up making a big difference for animals.
  • I'm interested in how the work of various people in animal charities looks day-to-day.
  • Tell me if you can put me in contact with someone who works in the egg industry (owning some backyard hens or a very small farm doesn’t count)

How to get in touch:

Who are you?

David Manheim, PhD in Public policy, working with both FHI's bio team, and a few other projects

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

I'm currently focused on global catastrophic biological risks (though I'm not interested in talking about COVID response or planning) and systemic existential risks, especially technological fragility and systemic stability.

What are things you'd like to talk to other people about?

Definitions for AI forecasting: I'm working on a project on this and hoping to hear from people about where confusion or disagreement about the definitions is making the discussion less helpful.

How can people get in touch with you?

Calendly, Twitter (DMs open!), or email, myfullnamenopunctuation@gmail.com (Note: I'm in Israel so note large differences in time zones.)

*updated May 3, 2021*

Who are you?

I'm Marisa Jurczyk :) I'll wrap up almost three years working in operations at Rethink Charity this June. I also co-organize EA Anywhere and have worked with ALLFED on social science research, with Democracy Policy Network on income security research, and with the IIDM working group on stakeholder engagement. Previously I graduated with a degree in sociology and business analytics in December 2019, volunteered with Students for High-Impact Charity, and worked at a nonprofit communications firm.

I'll be starting a Master in Public Policy degree at Georgetown University in Fall 2021, with an interest in improving institutional decision-making and global cooperation.

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

  • I talk to a lot of people about EA ops and getting a job at an EA org, but I generally see myself as a starting point for these conversations and will usually try to connect you with someone else who works more closely in the area you're interested in or has more experience.
  • My coursework and experience with nonprofit boards and nonprofit communications
  • Value drift in EA
  • Social science research techniques and applications for the social sciences in EA more broadly
  • I also love talking to other college students / recent grads, particularly those making big career decisions!

What are things you'd like to talk to other people about? (e.g. things you want to learn)

  • I'm excited about improving institutional decision-making, especially designing institutions that are both effective and benevolent. I'd love to talk with people who have similar goals and learn more about how you're thinking about this problem area.
  • I'm curious about careers working directly in US government (especially congressional staffing), at think tanks, or in the UN, and would love to chat with people in any of these career paths.
  • Also curious to hear other applications for the social sciences in EA that I haven't considered yet.

How can people get in touch with you?

Email me at marisajurczyk[at]pm[dot]me or schedule a time on my calendar!

Who are you?

I'm Richard. I'm a research engineer on the AI safety team at DeepMind.

What are some things people can talk to you about? (e.g. your areas of experience/expertise)

AI safety, particularly high-level questions about what the problems are and how we should address them. Also machine learning more generally, particularly deep reinforcement learning. Also careers in AI safety.

I've been thinking a lot about futurism in general lately. Longtermism assumes large-scale sci-fi futures, but I don't think there's been much serious investigation into what they might look like, so I'm keen to get better discussion going (this post was an early step in that direction).

What are things you'd like to talk to other people about? (e.g. things you want to learn)

I'm interested in learning about evolutionary biology, especially the evolution of morality. Also the neuroscience of motivation and goals.

I'd be interested in learning more about mainstream philosophical views on agency and desire. I'd also be very interested in collaborating with philosophers who want to do this type of work, directed at improving our understanding of AI safety.

How can people get in touch with you?

Here, or email: richardcngo [at] gmail.com (edited; having left DeepMind, I now no longer have access to my company email. If you emailed me there recently, please resend to this one!)

Who are you?

Hi, I'm Linda.

I've been involved in AI Safety for a few years now, mainly learning and organizing events. I once had the ambition to be an AI Safety researcher, but I think I'm just too impatient (or maybe I'll get back to it one day, I don't know). At the moment I am mainly focusing on helping others, because I have found that I like this role. But I am always up for discussing technical research, because it is just so interesting.

What are some things people can talk to you about?

  • AI Safety - I'll discuss your research idea with you and/or share some career advice
  • Physics (I have a PhD in Quantum Cosmology)
  • Productivity coaching - This is a skill I'm developing, so really you are doing me a favor if you let me practice on you.

What are things you'd like to talk to other people about?

  • I want to talk to aspiring and early-career AI Safety researchers, to learn about your situation and what your bottlenecks are.
  • I want to talk to anyone who is doing or wants to do any sort of AI Safety career support.
  • Help me review my plans, and if warranted give me social validation.

How can people get in touch with you?

Email: linda.linsefors@gmail.com

Meeting: Calendly

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.

Get in touch

g@gleech.org . I also like the sound of this open-letter site.

Who are you?

I'm Michael Aird. I'm a researcher/writer with the existential risk strategy group Convergence Analysis. Before that, I studied and published a paper in psychology, taught at a high school, and won a stand-up comedy award which ~30 people in the entire world would've heard of (a Golden Doustie, if you must know).

People can talk to me about

  • Things related to topics I've written about, such as:
  • I might be able to help you think about what longtermism-related research, career options, etc. to pursue, based on my extended hunt. But I'm pretty new to the area myself.
  • EA clubs/events/outreach in schools. I don't do this anymore, but could share resources or tips from when I did.

I'd like to talk to other people about

  • Pretty much any topic I've written about!
  • How to evaluate or predict the impacts of longtermist/x-risk-related interventions
  • Relatedly, theory of change for research or research organisations (especially longtermist and/or relatively abstract research)
  • Feedback on my work - about anything from minor style points to entire approaches or topic choices
  • Other topics people think it'd be useful for me to learn about, research, and/or write about

How to get in touch:

Send me a message here, email me at michaeljamesaird at gmail dot com, or book a time to chat here.

Who are you?

Hi, my name is Simon. I am a physicist, research engineer, and data scientist (or rather, that's the transition I've gone through and am currently undergoing). At the moment I am leaving applied photovoltaics research to build an app that helps people purchase more sustainable, socially responsible, and healthier products (blog post by our CEO). We are currently validating our idea and assessing its feasibility, so I am always open to new adventures (take a look at the "I'd like to talk about" section).

People can talk to me about:

  • Renewable energies (esp. photovoltaics & nuclear fusion)
  • Nuclear security: I am not an expert in this, but might be able to answer/assess some questions from a technical point of view
  • Climate Change
  • Sustainability
  • Data Science (Python & R) & Scientific modelling / simulations
  • General physics, research and conceptual engineering questions
  • Productivity & Self-management
  • Hobbies:
    • Music: esp. piano, singing
    • Sports: Body weight exercises, Mountainbiking, Tennis, Yoga
    • Travelling

I'd like to talk about:

  • Global priorities research & futurology (I'm considering diving into this professionally, so am open to job offers globally)
  • Using Machine Learning & Data Science to tackle the most pressing problems. Most decision-making processes rely on previous evidence and could benefit from data-driven approaches.
  • AI in general: if you know a specific area where I could have a large impact with my skill set, I am very happy to receive recommendations
  • Alternative economic systems / system change: Except for unconditional basic income, I don't have a lot of knowledge about alternatives that are somewhat likely to work, but I would really love to learn more and start communicating these ideas in my cultural environment.

How to get in touch: s.haberfellner (at) gmx.net (very happy to communicate on other channels after first contact).

Stay safe! :-)

Who are you?

Roland Pihlakas, from Estonia. I studied psychology, worked on modelling of natural intelligence, and then on AI safety. Have always cared about environmental issues. I work as a software developer, mainly having built AI and ML software. I tend to have many ideas and like interdisciplinary research. My AI safety related blog can be found here: https://medium.com/threelaws/

What are some things people can talk to you about? (e.g. your areas of experience/expertise)


  • Legal responsibility assignment in AI based systems.
  • Autonomous weapons systems.
  • Whitelisting, reversibility.
  • Human augmentation.
  • Pending workforce disruptions due to increasing automation.
  • Tax changes needed due to increasing automation, a better alternative to "the robot tax".
  • Fundamental limits to AI safety, self-deception, embedded agency.
  • How to formalise multiple AI goals, an alternative to utility maximisation.
  • Corrigibility, interruptibility, low impact AI.
  • How to promote and educate people about AI safety topics.
  • How to reduce the speed of technological innovation.
  • Organisations as AGI.


  • Social attitude changes that might be attractive to people and helpful for climate.
  • How to educate people about climate issues and solutions.


  • Rationality.
  • Various methodologies for improved communication.
  • Learning new helpful habits, improving memory for existing habits.

What are things you'd like to talk to other people about? (e.g. things you want to learn)

  • All above mentioned topics.
  • How to have a career in AI safety.
  • How to have a career in climate.
  • How to have a career in other EA topics.

How can people get in touch with you?

  • Facebook: using my name.
  • Skype username: "roland" (full name "Roland Pihlakas").
  • Email: roland@simplify.ee (but I might be slow to respond or notice; the above-mentioned methods are better).
  • I am also able to use Zoom and Slack, if you prefer so.


  • Who are you?

I'm Axo Sal (Linkedin.com/in/AxoSal). I'm developing Utilitarianaissance.Community, a virtual society with a mission to explore, express, and execute the most impactful ideas, solutions, and strategies across fields. It's still in the early stages. It would be great to find like-minded people who could pursue the mission. I want to leave a legacy that will live on and make an impact even without me.

  • What are some things people can talk to you about? (e.g. your areas of experience/expertise)

I'm a seeker of root, meta, or universal solutions that can lead to solving many problems at once or completely redefine the world and humanity. Societal systems change in the economy and in governance/decision-making frameworks, AGI, and radically enhancing human consciousness/intelligence (Neuralink, bioengineering, among other things) seem to be some of the potentialities discovered so far. Reach out to me if you have more, and let's discuss those solutions!

  • What are things you'd like to talk to other people about? (e.g. things you want to learn)

I guess the same as above.

  • How can people get in touch with you?

Linkedin.com/in/AxoSal, Facebook.com/AxoSal or Utie.World, Utilitarianaissance.Community


In London there is a directory that some people have used to arrange 1-1s, I think there are a few others for different locations, careers and causes. I don't know if it's better to have one master directory/CRM/messaging capability on the EA hub or for each group to have their own way of networking.

Potentially there could be both. And if people enter themselves in the master directory, they could get a pop up informing them of their local group's directory or way of networking, and could be asked if they're happy for their info to be automatically added there as well. And vice versa if they add their info to the local group's directory (or equivalent), as long as the local group's approach involves people adding their info in some way.

That way both the centralised and local versions could grow together, but people would still have the choice to just be involved in one or the other if they prefer.

Just a thought - not sure how easy/useful it'd be to actually institute.

Personally, I think I'd benefit from and appreciate something like that system, if someone else put in the work to make it happen :D
