Miranda_Zhang

Operations Generalist @ Anthropic
Working (0-5 years experience)

Bio

Operations Generalist at Anthropic & Former President of UChicago Effective Altruism. Suffering-focused.

Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.

Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.

How others can help me

  • Learn more about s-risk: talk to me about what you think!
  • Learn how to get the most out of my first job and living in the Bay Area
  • Seek advice on how to become excellent at operations (and/or ways to tell that I may be a better fit for community-building or research)

How I can help others

  • Talk about university groups community-building
  • Be a sounding board for career planning (or any meta-EA topic)
  • Possibly connect you to other community-builders

Sequences (1)

Building My Scout Mindset

Comments (183)

Effective Persuasion For AI Alignment Risk

Thanks, this makes sense! Yeah, this is why many arguments I see start at a more abstract level, e.g.

  • We are building machines that will become vastly more intelligent than us (cf. superior strategic planning), and it seems reasonable to expect that we then won't be able to predict or control them
  • Any rational agent will strategically develop instrumental goals that could make it hard for us to ensure alignment (e.g., self-preservation means we can't just turn them off)
Effective Persuasion For AI Alignment Risk

This makes a lot of sense, thanks so much! 

I think I agree with this point, but in my experience I don't see many AI safety people using these inferentially distant/extreme arguments in outreach. That's just my very limited anecdata, though.

Effective Persuasion For AI Alignment Risk

I'm always keen to think about how to message EA ideas more effectively, but I'm not totally sure what the alternative, more effective approach is. To clarify, do you think Nintil's argument is basically the right approach? If so, could you pick out some specific quotes and explain why/how they are less inferentially distant?

If you fail, you will still be loved and can be happy; a love letter to all EAs

Oh, I love this! It really resonates, particularly the idea that feeling like your worth depends on your impact perversely reduces your capacity to take risks (even when the EV suggests that's what you should do).

I feel like this idea of unconditional care has been the primary driver of my evolving relationship with EA. FWIW, I think a crucial complement to this is cultivating the same sense of care for yourself.

Leaning into EA Disillusionment

Thank you for this, and particularly for writing in a way that feels (to someone who isn't quite disillusioned) considerate to people who are experiencing EA disillusionment. I definitely resonate with the suggestions - these are all things I think I should be doing, particularly cultivating non-EA relationships, since I moved to the Bay Area specifically to be in an EA hub.

I also really appreciate your reflection on 'EA is a question' being more of an aspiration than a lived reality. I, along with other community-builders I know, would point to that as a 'definition' of EA but would (rightly) come across people who felt that it simply wasn't very representative of the community's culture.

How to start a blog in 5 seconds for $0

Thanks for this! I'm hoping to start a future-proof personal website + blog and was looking into using Hugo with GitHub Pages. What do you think of using static site generators as opposed to, say, Blot?

Announcing: EA Engineers

So excited you are launching this! Great to see more field-building efforts.

We need more discussion and clarity on how university groups create value

Liked this a lot - reframing the goal of community-building as optimizing for high alignment and high competence is useful.

I'm not sure I totally agree, though. I want there to be some EA community-building that optimizes for alignment but not competence: I imagine this would focus on spreading awareness of the principles—as there (probably) remains a significant number of people who may be sympathetic but haven't heard of EA—as well as encouraging personal reflection, application, and general community vibes. I haven't totally let go of the Singer & GWWC vision of spreading EA memes throughout society.

However, I do think optimizing for alignment + competence is the right direction for meta-EA (e.g., talent search to help tackle X cause), and it helps explain why I think field-building is the frontier of meta-EA.

A summary of every Replacing Guilt post

Thank you for doing this - I never thought I wanted this, but I definitely do! I also took notes, but very messily, and it's so useful to have a summary (especially for people who haven't read the sequence yet).

Less often discussed EA emotional patterns

Strongly upvoted for fleshing out and articulating specific emotional phenomena that (a) I think drew me to EA and (b) have made it hard for me to actually understand + embody EA principles. I've perused a lot of the self-care tag, and I don't think anyone has articulated these patterns as precisely as you have here.

The quote below, in particular, captures a lesson that has been useful for me (even if it still leans on impact as the justification/rationale).

Ironically, having your impact define your self-worth can actually reduce your impact in multiple ways
