
Like many others, I find one-on-one meetings to be the most valuable part of EA Global conferences. At the last EAG, I met ~60 EAs with relatively little hassle and really enjoyed the experience. Here are a few tips on how to schedule a lot of high-quality 1-1s, and how to get the most out of them.

Bio: Explain what you want to discuss

  • What are 2-3 action-relevant questions you want to resolve during EAG?
    • Personally, I want to find a side project to work on, to decide if I should find work on COVID projects in order to explore careers in biorisk, and to decide if I should work on any projects related to racial justice.
    • Examples: Find a job, learn about a particular organization, decide which global development charity to give to, workshop your plan for exploring career options in a field, learn about different areas of AI technical research, share effective fundraising methods, find funding for an independent research project
  • How can you help people?
    • On my profile, I offer to help people talk through a bit of career planning, and mention that I can potentially help find internships and jobs because my company is rapidly hiring.
    • Other good examples I've seen people offer help with: Working from Home productivity, learning to meditate, rock climbing, search engine optimization, how to skill up in ML, how to do good in policy careers, how to navigate the relationship between EA and faith, discussing promising new causes, lessons learned from running an EA group, etc.
    • Everyone has something to offer! These don't have to be difficult, tangible favors. The most common and effective way to help someone is by sharing some ideas about a topic you've thought about for a while - what have you spent a lot of time thinking about?
  • What topics are you generally interested in discussing?
    • Examples: whether AI will have slow or fast takeoff, the effectiveness of corporate campaigns for animal welfare, forecasts for the future of COVID, how EA should grow, how to found a startup or new charity.
    • These might be less urgent or less actionable than the answers above, but still something you'd be happy to discuss - oftentimes an object-level question within an area of EA interest.
  • Also, fill out the rest of your profile: profile picture, occupation, location, interests, and so on.

Booking a call should be easy: try Calendly

Calendly lets you schedule meetings without back-and-forth messages. You tell Calendly when you're free and your preferences for the calls, and it gives you a link that you can put in your bio and send to people. With the link, people can book time in your calendar.

Why use Calendly? Because scheduling time is way too hard. It often takes several back-and-forth messages spaced over hours and days. If it takes 10 minutes to book a call with someone, and you want to meet two dozen people, that's four hours spent scheduling!

Calendly makes scheduling fast and easy. No accidentally double-booking two people for the same slot, no confusion over time zones, no missed messages and lost connections. It even includes a free link for Google Meet or Zoom. (Grip also has some scheduling functionality, but I haven't had a great experience with it.)

Videochat can be great for friendliness and making a connection, but sometimes I find it’s a hassle to set up my camera and straighten my hair for a quick meet-and-greet. Consider saying in your Calendly info that video is optional, and anyone is welcome to use audio-only calling. (Thanks to mjamer for the idea!)

Not just a weekend: Schedule future meetings

Lots of people get really busy, really fast at EAG. I've had a lot of success booking meetings after EAG is officially over, either later that week or several weeks afterward. More time between meetings means less stress, and more opportunities for people to meet with you.

I'd recommend explicitly saying in your bio that you're open to talking after the conference is over.

Intro Messages: Individualized & Meeting-Oriented

What's the point of an intro conversation? Personally, I'm not having many meaningful interactions in the chat room - if someone is interesting, I'd rather speak over the phone. So my intro conversations are for answering two questions: Do I want to spend time talking with this person? And if so, when can we talk?

In my very first message to someone, I try to say what I found interesting about their profile and give specific topics or questions I'd like to discuss, so they can judge whether they want to speak with me or not. (Just a sentence or two usually works.) Then I give them an easy way to book time, either by sending my Calendly link or by naming several specific times I'm free.

For example:

Hi there [Name], would you like to chat? I'd love to hear about your experience [running a local group / skilling up in ML / campaigning for animal welfare laws / etc.]. I also think I might be able to help you with [brainstorming career plans / finding a side project / thinking about AI timelines / etc.].
Feel free to book any time in the next month in my calendar! https://calendly.com/aidanogara/eagx-chat

After that, converse to your heart's content! The crucial questions have already been handled.

Send reminders before meetings

Sometimes I forget that I booked a call, so I always appreciate when someone sends me a message before a meeting: "Hey, are you good to talk in 10 minutes? I'll be on this Google Hangouts link: [link]."

During calls, figure out what the other person loves

Discussing the same broad questions about generic EA topics can get boring - the reason everyone asks these questions is that nobody knows the answers!

My favorite conversations happen when the other person starts talking about something they love and know lots about. We abandon the original goals for the conversation and dive down the rabbit holes of passion. For example:

  • When speaking with a Berkeley CS PhD student, I planned on asking generic questions about AI Safety, but he turned out to know a ton about computer security and hacking. We spent an hour talking about systems he'd hacked and the security risks surrounding AI.
  • When speaking with a climate change policymaker, I was going to ask whether she thought climate change was an X-Risk, but then found out that she was deeply involved in Christians for EA. We ended up talking about our personal religious lives and how EA fits in.

After the call, do a Five Minute Favor

Help the person you just spoke with in the simplest way you can - send them a link to an article you think they'd enjoy, or connect them via email with another EA you think they'd like to speak to. Then, if you'd like, connect with them online (LinkedIn and Facebook are popular).

Personal Invitation

If you would like to meet with me to chat about anything, I'm always happy to talk! Pick a time with my Calendly, and I look forward to talking soon: https://calendly.com/aidanogara/eagx-chat


Related: Risto Uuk on “How to get the maximum value out of effective altruism conferences” - https://forum.effectivealtruism.org/posts/5hKDjrGocGcreH3DC/how-to-get-the-maximum-value-out-of-effective-altruism

Comments (7)



Hi Aidan, thanks for the great tips! I read this after the conference, so I’ll have to use some of them next time. “Dive down the rabbit holes of passion” is a fantastic phrase.

Do you have thoughts on mentioning the option for audio-only calls to people you’re wanting to chat with, in case that would make them more comfortable and/or willing to meet in the first place? I for one have only recently gotten over some of my nerves surrounding video calls with new people. Now I find them engaging, but sometimes distracting as well.

During the conference, I wondered if some people find face-to-face networking daunting, and would appreciate the option—for them perhaps an additional benefit of a virtual conference—to choose how and when they use video. I think going forward I’ll offer the option to people I’m connecting with (for them and for me).

Hope you had a great conference!

Hey, that’s a great idea. Videochat can be nice, but definitely isn’t necessary.

I've edited the post to recommend noting that videochat is optional, and audio-only is perfectly good.

Thanks for the thought and the kind words! Hope you had a great conference too :)

Interesting to hear about the flexible take you already had on video/audio. Thanks for the response and for editing the post!

Great tips! I'll definitely be actioning these ideas.

aog

Thanks!

Sidenote: I love when people praise my contributions to the EA Forum. Posting here can be intimidating - the bar for quality of conversation is often really high, disagreements can be harsh, and especially when using my real name I don't want to earn a bad reputation. So when other people offer positive feedback or sincere gratitude, it makes me really happy and encourages me to post more often.

If you want to encourage more discussion on the EA Forum, thank someone for their contributions. So thank you Michael!

Agreed!

In the same spirit, I'll thank Aaron Gertler for his EAGx talk where he made a similar point to what you just said, which was part of what prompted me to actually write that comment. I initially just thought to myself "Great tips! I'll definitely be actioning these ideas", then moved away from this post without commenting, but then thought "Wait, I should actually let Aidan know. #WhatWouldGertlerDo"

Link to your Calendly seems to be broken now - would love to discuss some thoughts one-on-one sometime if you'd like to update it!
