Miranda_Zhang

I'm a senior at The University of Chicago, majoring in Public Policy. Highly uncertain about how to integrate EA with my career path but trying hard, hopefully through some intersection of policy + narrative change + movement-building.

Looking for full-time roles starting in June 2022!

Sequences

Building My Scout Mindset

Topic Contributions

Comments

LW4EA: Beyond Astronomical Waste

I feel like this post relies on an assumption that this world is (or plausibly could be) a simulation, which made it difficult for me to grapple with. I suppose I should just read Bostrom's Simulation Argument first.

But maybe I'm getting something wrong here about the post's assumptions?

Look Out The Window

Really fantastic. Feels like this could be the new 'utopia speech'!

Happiness course as a community building exercise and mental health intervention for EAs

Thanks for this! This is exactly the kind of programming I was thinking of when I reflected on the personal finance workshop I ran for my group.

Question: what leads you to think the following?

The happiness course increased people’s compassion and self-trust, but it may have reduced the extent to which they view things analytically (i.e. they may engage more with their emotions to the detriment of their reason).

Longtermism, aliens, AI

I think there's room for divergence here (i.e., I can imagine longtermists who only focus on the human race) but generally, I expect that longtermism aligns with "the flourishing of moral agents in general, rather than just future generations of people." My belief largely draws from one of Michael Aird's posts.

This is because many longtermists are worried about existential risk (x-risk), which specifically refers to the curtailing of humanity's potential. That potential includes both our values (which could lead to wanting to protect alien life, if we consider aliens moral patients and so factor them into our moral calculations) and our potential super-/non-human descendants.

However, I'm less certain that longtermists worried about x-risk would be happy to let AI 'take over' and for humans to go extinct. That seems to get into more transhumanist territory. Cf. the disagreement over Max Tegmark's various AI aftermath scenarios, which run the spectrum of human/AI coexistence.

What comes after the intro fellowship?

Thanks for writing this up - I definitely feel like the uni pipeline needs to flesh out everything between the Intro Fellowship and graduating (including options for people who don't want to be group organizers). 

Re: career MVP stuff, I'm running an adaptation of GCP's career program that has been going decently! I think career planning and accountability are definitely things uni groups could do more of.

LW4EA: How to Not Lose an Argument

Hmm. I am sometimes surprised by how often LW posts take something I've seen in other contexts (e.g., CBT) and repackage it. This is one of those instances - which, to be fair, Scott Alexander completely acknowledges!

I like the reminder that "showing people you are more than just their opponent" can be a simple way to orient conversations towards a productive discussion. This is really simple advice but useful in polarized/heated contexts. I feel like the post could have been shortened to just the last half, though.

What We Owe the Past

Upvoted because I thought this was a novel contribution (in the context of longtermism) and because I feel some intuitive sympathy with the idea of maintaining-a-coherent-identity.

But I also agree with other commenters that this argument seems to break down when you consider the many issues on which much of society has since shifted its views (cf. the moral monsters narrative).

I still think there's something in this idea that could be relevant to contemporary EA, though I'd need to think for longer to figure out what it is. Maybe something around option value? A lot of longtermist thought is anchored around preserving option value for future generations, but perhaps there's some argument that we should maintain the choices of past generations (which is why, for example, codifying things in laws and institutions can be so impactful).

Beware Invisible Mistakes

Thanks for synthesizing a core point that several recent posts have been getting at! I especially want to highlight the importance of creating a community that is capable of institutionally recognizing + rewarding + supporting failure.

What can the EA community do to reward people who fail? And - equally important - how can the community support people who fail? Failing is hard, in no small part because it's possible that failure entails real net negative consequences, and that's emotionally challenging to handle.

With a number of recent posts around failure transparency (one, two, three), it seems like the climate is ripe for someone to come up with a starting point.

Nathan Young's Shortform

I actually prefer "scale, tractability, neglectedness" but nobody uses that lol

Miranda_Zhang's Shortform

I wonder if anyone here has read any of these books? https://www.theatlantic.com/books/archive/2022/04/social-change-books-lynn-hunt/629587/

In particular, 'Inventing Human Rights: A History' seems relevant to Moral Circle Expansion.

edit: I should've read the list fully! I've actually read The Honor Code. I didn't find it that impressive, but I guess the general idea makes sense. If we can make effective altruism something to be proud of - something to aspire to for people outside the movement, including people who currently denigrate it as being too elitist/out-of-touch/etc. - then we stand a chance at moral revolution.
