
Summary

Training for Good is now offering 1-to-1 coaching for EA professionals. 

Successful applicants will have a 3 month coaching relationship with a professional coach from the EA ecosystem: Daniel Kestenholz, Tee Barnett or Steve Thompson. This will entail four sessions, each 60 minutes, spaced roughly 3 weeks apart.

Our goal is to help you clarify your aims, reduce self-imposed friction, and improve your leadership and relationship skills. We’ll help you debug your plans and increase your overall contribution to the world while taking care of yourself.

Fees for the coaching will be means-based and you will decide the amount you pay (between $0 and $175).

Apply here by 25th February.

The Coaches

We have brought together a small panel of coaches from across the community who have a variety of backgrounds. 

  • Daniel Kestenholz: “I help people in the community face difficult challenges and make their biggest contribution to the world while taking care of themselves. My clients include staff at CEA/80K, Charity Entrepreneurship, CLR/EAF, FHI, Rethink Priorities, and OpenAI”.
     
  • Tee Barnett: “Working with me is about having a ‘personal strategist’ who helps guide the discovery, navigation and refinement of deep perceptual constructs that meaningfully affect your personal and professional life. I’m currently trialling my coaching with leaders of organisations in Effective Altruism (Rethink Priorities, HLI, Nonlinear Fund, Fortify Health, Centre for Long-Term Resilience, etc.).”
     
  • Steve Thompson: “I have coached and trained mid- and senior-level leaders for ~10 years, working as a leadership development consultant in the corporate sector. I’m interested in helping you simply and plainly identify the areas of your work and life that, if improved, would make the most difference for you and the world”.

The Content

Coaching differs from mentoring or advising in that it is primarily led by the client. The coach is there to help you think through the aspects of your life and work that you most want to focus on, and to do so with a higher degree of accountability and investigation than you would likely generate alone. The “content” of coaching sessions is therefore not a set of topics; rather, you’ll be looking at how you think and how that affects what you achieve.

Fees & Application

Training for Good is organising this initiative, vetting the participants, and measuring the impact. 

Fees for the coaching will be means-based and you will decide the amount you pay. TFG will subsidize this coaching to ensure it is available to those who’ll most benefit from it, regardless of their financial circumstances. Those with budgets from their employers will be expected to pay $175 per session, but those paying directly can choose what they pay, on the basis that it’s the maximum they can afford (i.e. it’s not prohibitive) while reflecting the value of the coaching and the coach’s time and experience.

Apply here by 25th February.

Comments



This is super cool and seems to fill a current gap in the community (e.g., career advising is more one-off). Who is your target audience, and who do you think is most likely to benefit?

Thanks for the question Miranda! We think coaching could be beneficial to a lot of different people. A few groups we had in mind that might particularly benefit from this coaching include:

  • EAs leading organizations (both EA and non-EA orgs)
  • EAs managing (small) teams within EA orgs / EA chapters
  • EAs outside of EA orgs who work in roles where human interaction is very important for relative success. Examples might include policymaking, grantmaking or some E2G roles
  • Early career EAs currently on high impact career trajectories (eg. on track to enter an 80k priority path)

However, I'd encourage anyone who's on the fence or who doesn't quite fit into the above groups to just go ahead and apply, or feel free to reach out to me directly at cillian [at] trainingforgood [dot] com and we can chat about it.

I have a similar question to Miranda! More specifically, how do group organizers fit into your target audience/thoughts regarding scope for benefit?

We'd be pretty excited to see applications from group organisers. I think it's a really important role and imagine that coaching could help multiply the impact of a lot of organisers!

How is this different from just applying to those coaches directly?

As far as I understand, sessions will be fully subsidised by TfG. If you can’t afford them, you can choose to pay $0 (I'm unsure if this is standard among EA coaches).

I also think centralisation of psychological services might be valuable as it makes it easier to match fitting coaches/coachees and assess coaching performance.
