Miranda_Zhang

Operations Generalist @ Anthropic
Working (0-5 years experience)

Bio

Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.

Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.

Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.

How others can help me

  • Learn more about s-risk: talk to me about what you think!
  • Learn how to get the most out of my first job and living in the Bay Area
  • Seek advice on how to become excellent at operations (and/or ways to tell that I may be a better fit for community-building or research)

How I can help others

  • Talk about university group community-building
  • Be a sounding board for career planning (or any meta-EA topic)
  • Possibly connect you to other community-builders

Sequences (1)

Building My Scout Mindset

Comments (197)

To clarify, I agree that 80k is the main actor who could + should change people's perceptions of the job board!

I find myself slightly confused - does 80k ever promote jobs they consider harmful (but ultimately worth it if the person goes on to leverage that career capital)?

My impression was that all career-capital-building jobs were ~neutral or mildly positive. My stance on the 80k job board—that the setup is largely fine, though the perception of it needs shifting—would change significantly if 80k were listing jobs they thought were net negative in themselves and justified only by the expectation that the person would later leverage them into an even higher-impact role.

I always appreciate reading your thoughts on the EA community; you are genuinely one of my favorite writers on meta-EA!

woah! I haven't tried it yet but this is really exciting! the technical changes to the Forum have seemed impressive to me so far. I also just noticed that the hover drop-down on the username is more expanded, which is visually unappealing but probably more useful.

I love these changes, especially dis/agree voting! Thank you!

I didn't know this, and I'm grateful that you flagged it!

Thanks for contributing to one of the most important meta-EA discussions going on right now (cf. this similar post)! I agree that there should be splinter movements that revolve around different purposes (e.g., x-risk reduction, effective giving), but I disagree that EA is no longer accurately described by 'effective altruism,' and so I disagree that EA should be renamed or that it should focus on "people who want to help the less fortunate (humans or animals) to the best of their abilities, without anyone trying to convince them that protecting the far future is the most altruistic cause area." I think this because:

  • The core pitch still gets you to effective giving and longtermism. I think the core pitch for EA is not "We have high-quality evidence to show that some charities are orders of magnitude more effective than others. If you want to do the most good with your money, you should find organizations that are assessed to be high impact by reputable organizations like GiveWell," but rather, using evidence & reason to maximize good ➡️ the scale/tractability/neglectedness framework.
    • This easily gets you to effective giving (cause prioritization + finding cost-effective interventions) but can also get you to longtermism. For an example pitch that does both, see UChicago's 2021 info presentation.
    • Of course, this might not be the right pitch for everyone! It seems reasonable that workplace groups might focus more on effective giving—but doing so would, I expect, make it harder to shift towards longtermism than starting from the broader, more philosophical core pitch would.
  • EA appeals to a specific, distinct audience that isn't captured by a movement that is solely focused on existential risk or more near-term causes. Insofar as EA aspires to be a question, not an ideology, convening people who are interested in taking the fundamental assumptions of EA (cause prioritization etc.) and applying them to social impact seems crucial for sustainably finding answers to the question of, "how do we do the most good?" Without an umbrella movement, I would be concerned that people with different knowledge and experiences would be less likely to exchange ideas on how to do good, making it harder for any given community to adapt quickly as priorities shift with a changing world.
  • While EA might not be the best name or acronym, I think it does a decent job of capturing the underlying principles: prioritizing so we can be most effective when we aim to act altruistically.

I also think that EA community builders are doing a decent job of creating alternative pipelines, similar to your proposal for creating an x-risk movement. For example,

There's still a lot more work to be done but I'm optimistic that we can create sub-movements without completely rebranding or shifting the EA movement away from longtermist causes!

To clarify, are you asking for a theory of victory around advocating for longtermism (i.e., what is the path to impact for shifting minds around longtermism) or for causes that are currently considered good from a longtermist perspective?

Why do you think you'd need to "force yourself?" More specifically, have you tested your fit for any sort of AI alignment research?

If not, I would start there! e.g., I have no CS background, am not STEM-y (was a Public Policy major), and told myself I wasn't the right kind of person to work on technical research ... But I felt like AI safety was important enough that I should give it a proper shot, so I spent some time coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly, feeling like I wasn't the most terrible fit for theoretical research!

If you're interested in testing your fit, here are some resources

I could possibly connect you to people who would be interested in testing their fit too, if that's of interest. In my experience, it's useful to have like-minded people supporting you!

Finally, +1 to what Kirsten is saying - my approach to career planning is very much, "treat it like a science experiment," which means that you should be exploring a lot of different hypotheses about what the most impactful (including personal fit etc.) path looks like for you.

edit: Here are also some scattered thoughts about other factors that you've mentioned:

  • "I also have an interest in starting a for-profit company, which couldn't happen with AGI alignment (most likely)."
    • FWIW, the leading AI labs (OpenAI, Anthropic, and I think DeepMind) are all for-profits. Though it might be contested how much they contribute to safety efforts, they do have alignment teams.
  • "would the utility positive thing to do be to force myself to get an ML alignment focused PhD and become a researcher?"
    • What do you mean by "utility positive" - utility positive for whom? You? The world writ large?
  • "Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?"
    • I don't think anyone can answer this besides you. : ) I also think there are at least three questions here (besides the question of what you are good at/what you like, which imo is best addressed by testing your fit)
      • How important do you think AI alignment is? How confident are you in your cause prioritization?
      • How demanding do you want EA to be?
      • How much should you defer to others?

edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!

Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!

However, I'm not really following your argument.

Several of your arguments are arguments for the view that "intrinsically positive lives do not exist,"  [...] It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

  • This doesn't necessarily follow, as Magnus explicitly notes that "many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence." So given that everyone reading this already exists, there is in fact potential positive value in continuing our existences.
  • However, I may have missed some stronger views that Magnus mentions that would lead to this implication. The closest I can find is when Magnus writes, some "views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do." As I understand it, though, this means that there can still be positive value in lives, specifically in lives that interact with others?

I wouldn't be surprised if I'd just missed the relevant view that you are describing here, so I'd appreciate if you could point to the specific quotes that you were thinking of.

Finally, you are implicitly assuming hedonism + consequentialism — so that if it turned out that happiness had no intrinsic value, there would be no reason to continue life. But you could hold a suffering-focused view that cares about other values (e.g., preference satisfaction), or a form of non-consequentialism that sees intrinsic value in life beyond happiness. (Thanks to Sean Richardson for making this point to me!)
