Mjreard

Advising Team @ 80,000 Hours
374 karma · Joined Dec 2017 · Working (6–15 years) · London, UK
twitter.com/Mjreard

Bio

Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team

How others can help me

Apply for a 1-1 call with 80k. Yes, now is a good time to do it – you can book in later, we can have a second call, come on now. 

Follow me on Twitter and listen to my podcast (search for "actually after hours" on YouTube or podcast apps)

Posts
1

Comments
27

Yes, in general it's good to remember that people are far from 1:1 substitutes for each other for a given job title. I think the "1 into 2" reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn't stop at earning to give.

A minor, not fully endorsed object-level point: I think people who do ~one-on-one service work, like (most) doctors and lawyers, are much less likely to 10x the median than e.g. software engineers. With rare exceptions, their work just isn't that scalable, and in many cases output is a linear return to effort. This might be especially true in public defense, where you sort of wear prosecutors down over a volume of cases.

Looks like the UK hardcover release isn't until 21 May, but it's available on Kindle? Is that right? 

If the lives of pests are net negative,* I think a healthy attitude is to frame your natural threat/disgust reaction to them as useful. The pests you see now are a threat to all the future pests they will create. Sparing those future creatures that suffering requires that the first ones don't live to create them. Our homes are fertile breeding grounds for enormous suffering, and I think creating these potential breeding grounds gives us a responsibility to prevent them from realizing that potential.

I take the central (practical) lesson of this post to be that this responsibility should spark some urgency to act, and to overcome guilt, when we notice the first moth or mouse. We've already done the guilty thing by creating this space and not isolating it. The only choice left is between more suffering and less.

Thank you for the post!

 

*I mean this broadly to include both cases where their lives are net negative in the intervention-never scenario and (more likely) scenarios like these, where the ~inevitable human intervention might make them that way.

Nice punchy writing! I hope this sparks some interesting, good faith discussions with classmates. 

I think a powerful thing to bring up re earning to give is how it can strictly dominate some other options. For example, a 4th- or 5th-year biglaw associate could very reasonably fund two fully paid public defender positions with something like 25-30% of their salary. A well-paid plastic surgeon could fund lots of critical medical workers in the developing world with less.
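As a rough illustration of that dominance claim, here's a minimal BOTEC sketch in Python – every figure below is an illustrative assumption, not a sourced number:

```python
# Back-of-the-envelope sketch of the biglaw example above.
# All figures are illustrative assumptions, not sourced numbers.
biglaw_salary = 400_000   # assumed 4th/5th-year associate comp, USD/yr
donation_share = 0.27     # middle of the 25-30% range mentioned above
donation = biglaw_salary * donation_share  # ~$108,000/yr

pd_cost = 55_000          # assumed cost of one funded public defender position
positions_funded = donation / pd_cost      # ~2 positions

print(f"${donation:,.0f}/yr donated -> ~{positions_funded:.1f} public defender roles")
```

On these (made-up) inputs the associate keeps ~73% of their salary and still puts two public defenders in courtrooms, which is the sense in which the option can strictly dominate taking one of those jobs directly.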

One important thing to keep in mind when you have these chats is that there are better options; they're just harder to carve out and evaluate. One toy model I play with is entrepreneurship. Most people inclined towards working for social good have a modesty/meekness about them: they just want to be a line worker standing shoulder-to-shoulder with people doing the hard work of solving problems. This suggests there might be a dearth of people with this outlook looking to build, scale, and, importantly, sell novel solutions.

As you point out, there are a lot of rich people out there. Many/most of them just want to get richer, sure, but lots of them have foundations or would fund exciting/clever projects with exciting leaders, even if there wasn't enormous (or any) profitability in it. The problem is a dearth of good prosocial ideas – which Harvard students seem well positioned to spin up: you have four years to just think and learn about the world, right? What projects that need to exist don't yet? Figure it out instead of soldiering away for existing things.

Curious if you've seen or could share BOTECs (back-of-the-envelope calculations) on the all-in cost per retreat?

Naïvely, people like to benchmark 5% of property value per year as the all-in cost of ownership alone (so ~$750k/yr here? Really not sure how this scales to properties like Wytham).

I wonder how that compares to the savings in variable retreat costs. If you had 20 retreats/yr, are you saving (close to?) $37,500 per retreat (assuming $750k/yr is the right number)? Accommodation for 25 people for 4 nights in Oxford could plausibly be ~$20k by itself, so it seems like, at a given number of retreats or attendees, you could get quite close – but the numbers matter here.
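Spelling that BOTEC out (a sketch only – the property value, retreat count, nightly rate, and venue-hire figure are all assumptions):

```python
# BOTEC for owning vs. renting a retreat venue; all inputs are assumptions.
property_value = 15_000_000                    # assumed purchase price, USD
annual_ownership_cost = 0.05 * property_value  # 5%/yr rule of thumb -> $750k/yr
retreats_per_year = 20

ownership_cost_per_retreat = annual_ownership_cost / retreats_per_year  # $37,500

# Variable costs avoided by owning (assumed): accommodation plus venue hire.
people, nights, nightly_rate = 25, 4, 200      # ~$200/person/night in Oxford
accommodation_saved = people * nights * nightly_rate  # $20,000
venue_hire_saved = 10_000                      # assumed per-retreat meeting-space hire

savings_per_retreat = accommodation_saved + venue_hire_saved  # $30,000
print(f"cost/retreat: ${ownership_cost_per_retreat:,.0f}, "
      f"savings/retreat: ${savings_per_retreat:,.0f}")
```

On these (made-up) numbers, ownership only breaks even once avoided costs per retreat approach $37.5k, which is why the retreat count and attendee numbers matter so much.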

For what it's worth, I think you shouldn't worry about the first two bullets. The way you as an individual or EA as a community will have big impact is through specialization. Being an excellent communicator of EA ideas is going to have way bigger and potentially compounding returns than your personal dietary or donation choices (assuming you aren't very wealthy). If stressing about the latter takes away from the former, that stress is the mistake worth worrying about.

I also shouldn't comment without answering the question:

  • I balk at thorny or under-scoped research/problems that could be very valuable
    • It feels aversive to dig into something without a sense of where I'll end up or whether I'll even get anywhere
    • If there's a way I can bend what I already know/am good at into the shape of the problem, I'll do that instead
    • One way this happens is that I only seek out information/arguments/context that are legible to me – specifically more big-picture/social-science-oriented things like Holden, Joe Carlsmith, or Carl Shulman – even though understanding whether the technical aspects of AI alignment/evals make sense is a bigger and unduly under-explored crux for understanding what matters
  • I fail to be a team player in a lot of ways. 
    • I have my own sense of what my team/org's priorities should be
    • I expect others around me to intuit and adopt these priorities with minimal or no communication
    • When we don't agree or reach consensus and there's a route for me to avoid resolving the tension, I take the avoidant route: things I don't think are important (but others do) don't happen

I think this is a version of a more general form of motivated reasoning where one seeks out a variable in an argument which is: 

  1. imprecise, 
  2. ambiguous, 
  3. dependent on multiple other hard-to-track variables, or
  4. one over which they can claim unique knowledge (here, 'what I am good at personally and how good at it I am')

which they can then ratchet up to the maximum value for things they want to believe and down to the minimum for things they don't.

I noticed this acutely in the comments on the 80k/Rational Animations crossover video, namely things like "If you become a doctor, you don't know how many life-saving situations you run into" (imprecision about likelihoods) or "Dr. Nalin couldn't have achieved what he did without the help of many others, down to the bricklayers and garbagemen who provided the essentials he needed to focus" (ambiguity/dependencies about credit).

Finding low-confrontation ways to point such things out seems valuable. Maybe The Scout Mindset remains the best work here.

It is scary and painful for people to admit they were mistaken, especially about their basic narratives concerning what's valuable or what they intended to do with their lives. I'd guess that highlighting how truth-seeking is a broader, more widely endorsed narrative – one that also implies lots of changing your mind – is one way to shake people out of these more contingent narratives.

I think this characterizes the disagreement between pause advocates and Anthropic as it stood before the Claude 3 release, given some pause-advocacy-favorable assumptions about the politics of maintaining one's position in the industry. Full-throated, public pause advocacy doesn't seem like a good way to induce investment in your company, for example.

More broadly, I think Anthropic, like many, hasn't come to final views on these topics and is still developing them, probably with more information and talent than most alternatives by virtue of being a well-funded company.

As I understand it, [part of] Anthropic's theory of change is to be a meaningful industry player so its safety agenda can become a potential standard to voluntarily emulate or adopt in policy. Being a meaningful industry player in 2024 means having desirable consumer products and advertising them as such. 

It's also worth remembering that this is advertising. Claiming to be a little bit better on some cherry-picked metrics a year after GPT-4's release is hardly a major accelerant in the overall AI race.

Too high. I thought there were huge scaling barriers, based on something Linch wrote ~2 years ago. Maybe that's wrong or has been retracted.
