
Bryce Woodworth

24 karma · Joined Apr 2022

Bio

Hiya! I'm Bryce, and I've been an EA since 2016. My background is mostly in software engineering, and I have a master's degree in computer science with an emphasis in AI.

My biggest passion is helping cool, ambitious people level up and solve important problems. If you are reading this, you probably fall into that category!

I have a strong intuition that there is valuable low-hanging fruit in the "talent development for EAs" space. To that end, I recently left my job to spend some time investigating 1) whether that intuition is correct, and 2) whether I am personally a good fit to pursue this. I am currently focusing mostly on self-development and coaching skills.

If you are interested in coaching or pair debugging, you can sign up for a conversation at calendly.com/bryce-woodworth. If you have any feedback for me, I'd love to hear it!

Comments (2)

If this is right, then we may talk of three different 'minds' at work in solving reasoning problems:

  • The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.

(emphasis mine)

This feels wildly counterintuitive to me, unless "few differences" is much weaker than I'm expecting or "autonomous mind" is a much narrower concept than it looks. On LW the author elaborates further in the comments, which I understand as "some autonomous processes, like face recognition, seem to be mostly the same between people".

Maybe it's true that most people have nearly-identical performance in those domains. But to me it looks like almost all of the differences between people lie in the autonomous mind. The vast majority of actions I take throughout the day are autonomous. When I observe skill differences between myself and someone else, most of the variance seems to come from differences in our intuitions and pattern-matching, rather than our mindware or algorithmic thinking.

I can't even imagine a worldview that says otherwise, so I'd be curious to hear from anyone who legitimately agrees with the "few individual differences in autonomous reasoning" model. If this turned out to be correct then I would restructure a lot of how I'm trying to become more generally competent.

I assume you intend this in the direction of "opportunity costs aren't sufficiently salient, so we don't take them into account as much as we should". Which seems true. But I also think part of the difference is that super-duper salient opportunity costs make normal spending feel pathologically unacceptable, particularly to altruistically-minded folks.

I agree with the above that Scott seems to favor framings of limited altruism, after which you can spend your money on whatever frivolous things you feel like.

What I am working on

I am currently pursuing an EA-motivated personal project. Posting the details on the EA forum seems like a great way to get feedback, accountability, and a stronger sense of community. I’d like to write out a full post at some point, but I’m starting with a shortform to ease into things and make it less scary.

My current mission is to become an excellent coach, specializing in personal growth and talent development. My previous role was in software engineering, where I earned to give and accumulated a financial runway, which I am now using to give myself ~1 year to focus on this mission and evaluate whether it is feasible in the longer term.

The argument for coaching

The real reason I chose this mission is that it is what gives me the most joy and satisfaction. I’ve always preferred support and multiplier roles, and I find it much easier to get better at things I love doing at a gut level. That said, I also believe at an intellectual level that there is a lot of low-hanging fruit in EA talent development. This feeds into my sense of motivation, but it’s also a red flag for motivated reasoning, which is part of why I want to write out my intuitions more explicitly. For now I will just outline some key beliefs and intuitions.

Belief: There is a significant gap between many EAs’ potential for impact and the amount of impact they will actually make by default.

This feels pretty self-explanatory to me so I won’t discuss it much, but I’d love to hear from anyone who disagrees.

Belief: There are cost-effective ways to help narrow that gap.

Intuition-pump 1: Having regular access to high-quality pair-debugging sessions feels personally valuable to me, and to many other EAs I’ve talked to, to an extent that dwarfs the costs involved. A significant proportion of the EAs I know have some desire for regular debugging, yet aren’t getting it currently. If someone became reasonably good at debugging, it feels like doing only that could be higher-impact than most other options, and "reasonably good" doesn’t feel like a high bar. Possible variations include getting good at teaching other EAs to be good pair-debuggers.

Intuition-pump 2: If you got 3 highly-skilled coaches to launch an intensive program in which 12 early-career EAs spent 2 years building long-term skills and personal capital, it seems like it wouldn’t have to multiply their lifetime impact by that much before it was net-positive. Naively, that’s an investment of 30/12 = 2.5 person-years per participant, breaking even at under 7% additional expected impact over a 40-year career. It seems like it really ought to be possible to exceed that bar, and that’s with an unusually high-cost proposal!
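To spell out that back-of-the-envelope arithmetic (my own reading of the numbers above: 3 coaches and 12 participants for 2 years each, counted against a 40-year career):

\[
\underbrace{3 \times 2}_{\text{coach-years}} + \underbrace{12 \times 2}_{\text{participant-years}} = 30 \text{ person-years}, \qquad \frac{30}{12} = 2.5 \text{ person-years each}, \qquad \frac{2.5}{40} = 6.25\% < 7\%.
\]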

Intuition-pump 3: I’ve been hearing for years that many EA cause areas are primarily talent-constrained, which suggests that raising the level of talent among EAs would be highly impactful. Most of the discussion I’ve seen focuses on either getting highly-talented people into the EA community, or getting EAs to try more ambitious projects to see if they are secretly more talented than they think; EA Infrastructure Fund grants give a similar picture. As far as I can tell, talent development for existing EAs seems neglected, and the above intuitions point to it being tractable as well.

Counter-intuition: If there were low-hanging fruit, other EA orgs would have already picked it. If EA meta-organizations aren’t prioritizing this very much, doesn’t that indicate they don’t think it’s valuable? Maybe, but Inadequate Equilibria convinced me that this shouldn’t stop me from acting on my own beliefs. At the very least I can write up my thoughts on the EA Forum and see if there are strong counterarguments I’m not considering.

Key uncertainties

  • Is the potential-impact gap as big as I think it is?
  • Is talent development as tractable as I think it is?
  • Which interventions are most effective?
  • How can I tell whether I am a good personal fit?
  • What are the relevant skills and how can I best pursue them?

What I’m doing now

  • Attempting to have coaching/debugging conversations with a variety of people, especially those outside my usual social circle. I'm trying to get a sense of what the most common bottlenecks are, how I might learn to help people get past them, and how much value I seem to be providing. (If you are interested, feel free to sign up on my calendly!)
  • Reading personal-growth and coaching books. I’ve read 11 so far, and I’d like to start distilling my thoughts into book reviews.
  • Starting to engage more with the EA forum, particularly commenting and posting.
  • Trying to learn a variety of skills quickly, with 10-20 hours of investment per skill. This is partly to get better at learning, but also to build confidence in my ability to do new things in general; a lack of that confidence has often limited me.