All of Miranda_Zhang's Comments + Replies

Thanks a lot for your work on this neglected topic!

You mention,

Those counter-considerations seem potentially as strong as the motivations I list in favor of a focus on malevolent actors. 

Could you give more detail on which of the counter-considerations (and motivations) you consider strongest?

2
Jim Buhler
9mo
Thanks Miranda! :) I personally think the strongest argument for reducing malevolence is its relevance for s-risks (see the section "Robustness: Highly beneficial even if we fail at alignment"), since I believe s-risks are much more neglected than they should be. The strongest counter-considerations for me would be:
  • Uncertainty regarding the value of the future. I'm generally much more excited about making the future go better rather than "bigger" (reducing X-risk does the latter), so the more reducing malevolence does the latter rather than the former, the less certain I am that it should be a priority. (Again, this applies to any kind of work that reduces X-risks, though.)
  • Info/attention hazards. Perhaps the best way to avoid these malevolence scenarios is to ignore them and avoid making them more salient.
Interesting question, thanks! I added a link to this comment in a footnote.

People being less scared to post! (FWIW I think this has increasingly become the case)

Thanks for this - I think this captures a quality (or set of qualities?) that previously hasn't had such an accurate handle! I think, in many ways, sincerity is the quality that leads people to really 'take seriously' (i.e., follow through in a coherent way) the project of doing good.

I see!  Yes, I agree that more public "buying time" interventions (e.g. outreach) could be net negative. However, for the average person entering AI safety, I think there are less risky "buying time" interventions that are more useful than technical alignment.

5
Max Clarke
1y
I think this post should probably be edited so that "focus on low-risk interventions first" is in bold in the first sentence and placed right next to the pictures, because the most careless people (possibly like me...) are the ones who will read that and not the current caveats.

To clarify, you think that "buying time" might have a negative impact [on timelines/safety]?

Even if you think that, I think I'm pretty uncertain of the impact of technical alignment, if we're talking about all work that is deemed 'technical alignment.' e.g., I'm not sure that on the margin I would prefer an additional alignment researcher (without knowing what they were researching or anything else about them), though I think it's very unlikely that they would have net-negative impact.

So, I think I disagree that (a) "buying time" (excluding weird pivotal a... (read more)

8
Max Clarke
1y
Rob puts it well in his comment as "social coordination". If someone tries "buying time" interventions and fails, I think that, largely because of social effects, poorly done "buying time" interventions have the potential both to fail at buying time and to preclude further coordination with mainstream ML, so the net effect is negative. Technical alignment, on the other hand, does not have this risk. I agree that technical alignment has the risk of accelerating timelines, though. But if someone tries technical alignment and fails to produce results, that has no impact compared to a counterfactual where they just did web dev or something. My reference point here is the anecdotal disdain (from Twitter and YouTube; I can DM if you want) that some in the ML community have for anyone they perceive to be slowing them down.

Haha aw, thanks! I would love to keep doing these some day.

To clarify, I agree that 80k is the main actor who could + should change people's perceptions of the job board!

I find myself slightly confused - does 80k ever promote jobs they consider harmful (but ultimately worth it if the person goes on to leverage that career capital)?

My impression was that all career-capital-building jobs were ~neutral or mildly positive. My stance on the 80k job board—that the setup is largely fine, though the perception of it needs shifting—would change significantly if 80k were listing jobs they thought were net negative in themselves, justified only by the expectation that the person would later take an even higher-impact role because of them.

5
calebp
2y
I think it is kind of odd to say that the setup of the jobs board is fine but the perspective needs shifting, as 80k are by far the best positioned to change the perspective people have of the jobs board. I am not confident that 80k are making a bad trade-off here; the current setup may well be close to optimal given the trade-offs (including the time trade-off of bothering to optimise these things). But I am a bit averse to attitudes of 'it's not this one org's problem, everyone else needs to change' when it seems more efficient to address issues at the source.
9
Linch
2y
One obvious example is working in AI companies, particularly companies directly aimed at building AGI. The jobs are harmful by default, but it might be good for EAs to be the ones to work there (especially if they are careful, have good judgement, and have substantial moral courage). But the sign of the direct impact case is at best unclear. The career capital case is comparatively stronger, however.

I always appreciate reading your thoughts on the EA community; you are genuinely one of my favorite writers on meta-EA!

6
Luke Freeman
2y
Thanks Miranda, that is very kind of you to say!

Woah! I haven't tried it yet but this is really exciting! The technical changes to the Forum have seemed impressive to me so far. I also just noticed that the hover drop-down on the username is more expanded, which is visually unappealing but probably more useful.

I love these changes, especially dis/agree voting! Thank you!

I am looking forward to someone creating a wacky dashboard where we can learn who are the most-upvoted but also most-disagreed-with posters on the Forum.  If we think EA is getting too insular / conformist, maybe next time instead of a Criticism & Red-Teaming contest, we could give out an EA Forum Contrarianism Prize!   :P

I didn't know this and I'm grateful that you flagged this!

Thanks for contributing to one of the most important meta-EA discussions going on right now (c.f. this similar post)! I agree that there should be splinter movements that revolve around different purposes (e.g., x-risk reduction, effective giving) but I disagree that EA is no longer accurately described by 'effective altruism,' and so I disagree that EA should be renamed or that it should focus on "people who want to help the less fortunate (humans or animals) to the best of their abilities, without anyone trying to convince them that protecting t... (read more)

To clarify, are you asking for a theory of victory around advocating for longtermism (i.e., what is the path to impact for shifting minds around longtermism) or for causes that are currently considered good from a longtermist perspective?

1
howdoyousay?
2y
Three things:
1. I'm mostly asking for any theories of victory pertaining to causes which support a long-termist vision/end-goal, such as eliminating AI risk.
2. But I'm also interested in a theory of victory/impact for long-termism itself, in which multiple causes interact. For example, if the long-termist goal = reduce all x-risk and develop technology to end suffering, enable flourishing, and colonise the stars, then the components of a theory of victory/impact could be:
  • reduce x-risk pertaining to AI, bio, and others
  • research/understanding around enabling flourishing/reducing suffering
  • stimulate innovation
  • think through governance systems to ensure the technologies/research above are used for good, not evil
3. Definitely not 'advocating for longtermism' as an end in itself, but I can imagine that advocacy could be part of a wider theory of victory. For example, one could postulate that reducing x-risk would require mobilising considerable private/public sector resources, which would require winning hearts and minds around both how scarily probable x-risk is and the bigger goal of giving our descendants beautiful futures / leaving a legacy.

Why do you think you'd need to "force yourself?" More specifically, have you tested your fit for any sort of AI alignment research?

If not, I would start there! e.g., I have no CS background, am not STEM-y (was a Public Policy major), and told myself I wasn't the right kind of person to work on technical research ... But I felt like AI safety was important enough that I should give it a proper shot, so I spent some time  coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly... (read more)

edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!

Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!

However, I'm not really following your argument.

Several of your arguments are arguments for the view that "intrinsically positive lives do not exist,"  [...] It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be des

... (read more)
7
Mau
2y
Thanks for the thoughtful reply; I've replied to many of these points here. In short, I think you're right that Magnus doesn't explicitly assume consequentialism or hedonism. I understood him to be implicitly assuming these things because of the post's focus on creating happiness and suffering, as well as the apparent prevalence of these assumptions in the suffering-focused ethics community (e.g. the fact that it's called "suffering-focused ethics" rather than "frustration-focused ethics"). But I should have more explicitly recognized those assumptions and how my arguments are limited to them.

+1, though would it be possible to hide comments that are just tags? If so, I think I'd be weakly positive about this feature.

I really liked reading this, as I think it captures my most recent concerns/thoughts around the EA community.

  • I strongly agree that the costs of intertwining the professional and personal spheres require more careful thought—particularly re: EA hubs and student groups. The epistemic costs seem most important to me here: how can we minimize social costs for 'going against' a belief(s) held by the majority of one's social circle?
  • I think more delineation would be helpful, particularly between the effective giving and cause incubation approaches of EA. I would
... (read more)

Thanks, this makes sense! Yeah, this is why many arguments I see start at a more abstract level, e.g.

  • We are building machines that will become vastly more intelligent than us (c.f. superior strategic planning), and it seems reasonable to expect that we then won't be able to predict/control them
  • Any rational agent will strategically develop instrumental goals that could make it hard for us to ensure alignment (e.g., self-preservation -> can't turn them off)
1
Brian Lui
2y
I might have entered at a different vector (all online) so I experienced a different introduction to the idea! If my experience is atypical, and most people get the "gentle" introduction you described, that is great news.

This makes a lot of sense, thanks so much! 

I think I agree with this point, but in my experience I don't see many AI safety people using these inferentially-distant/extreme arguments in outreach. That's just my very limited anecdata though.

I'm always keen to think about how to more effectively message EA ideas, but I'm not totally sure what the alternative, effective approach is. To clarify, do you think Nintil's argument is basically the right approach? If so, could you pick out some specific quotes and explain why/how they are less inferentially distant?

Hi, I'm the author of Nintil.com (We met at Future Forum :)

Essentially, a key rule in argumentation is that the premises have to be more plausible than the conclusion. For many people, foom scenarios, nanotech, etc. make them switch off.

 

I have this quote:

Here I want to add that the lack of criticism is likely because really engaging with these arguments requires an amount of work that makes it irrational for someone who disagrees to engage. I make a similar analogy here with homeopathy: Have you read all the relevant homeopathic literatur

... (read more)
1
Brian Lui
2y
Great! Yes. The key part, I think, is this:

My view is that normal people are unreceptive to arguments that focus on the first three (advanced nanotechnology, recursive self-improvement, superhuman manipulation skills). Leave aside whether these are probable or not. Just talking about it is not going to work, because the "ask" is too big. It would be like going to rural Louisiana and talking at them about intersectionality. Normal people are receptive to arguments based on the last three (speed, memory, superior strategic planning).

Nintil then goes on to make an argument based only on these ideas. This is persuasive. The reason is that it's easy for people to accept all three premises:
  • Computers are very fast. This accords with people's experience.
  • Computers can store a lot of data. People can understand this, too.
  • Superior strategic planning might be slightly trickier, but it's still easy to grasp, because people know that computers can beat the strongest humans at chess and Go.

Oh, I love(!) this. Really resonates, particularly the idea that feeling like your worth depends on your impact perversely reduces your capacity to take risks (even when the EV suggests that's what you should do).

I feel like this idea of unconditional care has been the primary driver of my evolving relationship with EA. FWIW, I think a crucial complement to this is cultivating the same sense of care for yourself.

Thank you for this, particularly in a way that feels (from someone who isn't quite disillusioned) considerate to people who are experiencing EA disillusionment. I definitely resonate with the suggestions - these are all things I think I should be doing, particularly cultivating non-EA relationships, since I moved to the Bay Area specifically to be in an EA hub.

Also really appreciate your reflection on 'EA is a question' as more of an aspiration than a lived reality. I, along with other community-builders I know, would point to that as a 'definition' of EA but would (rightly) come across people who felt that it simply wasn't very representative of the community's culture.

Thanks for this! I'm hoping to start a future-proof personal website + blog and was looking into using Hugo w/ Github pages. What do you think of using static site generators as opposed to, say, Blot?

2
peterhartree
2y
I played around with a few static site generators a couple of years ago and was not very impressed. The main reservations I recall (based on my experience a couple of years ago) are:
1. Steep learning curve; coding ability required.
2. The static site ecosystem is not very mature—it's easy to burn hours updating dependencies, resolving version conflicts, or adding fairly basic features that other platforms support out of the box. Boring, mature software is a better choice for most people than the cool new thing.
3. Build times can sometimes take minutes, not seconds.[1]
4. The performance argument in favour of static sites is not as good as people think. Anyone who is comfortable editing a Cloudflare configuration can enable Cloudflare APO to make a WordPress blog just as fast as the fastest static site setup. For other blogging platforms, one can dial up the Cloudflare cache settings to get the same effect.

[1] Blot, by contrast, takes 1-5 seconds (with no button clicks required) to publish a new post or publish edits to an existing post.
2
Gavin
2y
They kick ass (<3 Jekyll) but the learning curve has completely defeated several of my smart nontechnical friends.

So excited you are launching this! Great to see more field-building efforts.

3
Jessica Wen
2y
Thanks Miranda, appreciate all your help when I had a career crisis and for having faith in my community building skills :)

Liked this a lot - reframing the goal of CB as optimizing for high alignment and high competence is useful.

I'm not sure I totally agree, though. I want there to be some EA community-building that is optimizing for alignment but not competence: I imagine this would be focused on spreading awareness of the principles—as there (probably) remains a significant number of people who may be sympathetic but haven't heard of EA—as well as encouraging personal reflection, application, and general community vibes. I haven't totally let go of the Singer & GWWC vision of spr... (read more)

Thank you for doing this - never thought I wanted this, but I definitely do! I also took notes but very messily, and it's so useful to have a summary (especially for people who haven't read it yet).

Strongly upvoted for fleshing out and articulating specific emotional phenomena that (a) I think drew me to EA and (b) have made it hard for me to actually understand + embody EA principles. I've perused a lot of the self-care tag and I don't think anyone has articulated it as precisely as you have here.

The below quote, in particular, captures a learning that has been useful for me (if still leaning into using impact as a justification/rationale).

Ironically, having your impact define your self-worth can actually reduce your impact in multiple ways

4
Ada-Maaria Hyvärinen
2y
I'm really glad this post was useful to you :) Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact but not make it the definition of your self-worth; and that it is good to have positive impact as your goal and normal to be sad about not reaching your goals as you'd like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.

I really appreciate the sentiment behind this - I get the sense that working on AI safety can feel very doom-y at times, and appreciate any efforts to alleviate that mental stress.

But I also worry that leaning into these specific reasons may lead to intellectual blindspots. E.g., believing that aligned AI will make every other cause redundant leads me to emotionally discount considerations such as temporal discount rates or tractability. If you can justify your work as a silver bullet, then how much longer would you be willing to work on that, even when it... (read more)

I found the concrete implications distinguishing this more cause-oriented model of EA really useful, thanks!

I also agree, at least based on my own perception of the current cultural shift (away from GHD and farmed animal welfare, and towards longtermist approaches), that the most marginally impactful meta-EA opportunities might increasingly be in field-building.

Optionality cost is a useful reminder that option value consists not only of minimising opportunity cost but also of increasing your options (which might require committing to an opportunity).

This line in particular feels very EA: 

As Ami Vora writes, “It’s not prioritization until it hurts.” 

I really respect your drive in leading this project and was excited to read all the updates!

Would love to view the EA office space design database too : )

I love that you wrote this because I grappled with a slightly bigger version of this, which was 'move to the Bay,' and I wasn't able to get a detailed theory of change from the people who were recommending this to me.

I think point 4 is especially interesting and something that motivated my decision to move (essentially, 'experience Berkeley EA culture'). Ironically, most people focused on the first three points (network effects). I do think I'm unsure whether point 4 (specifically, the shift towards maximization, which feels related to totalising EA) is a n... (read more)

Realizing that what drove me to EA was largely wanting to "feel like I could help people" and not "help the most beings." This leads me to, for example, really be into individually helping as many people as I can flourish (at the expense of selecting for people who might be able to make the most impact)*.

This feels like a useful specification of my "A" side and how/why the "E" side is something I should work on!

*A more useful reframing of this is to put it into impact terms. Do I think the best way to make impact is to

(1) find the right contexts/prob... (read more)

This was interesting, thanks! I haven't heard of Mastermind Groups before but in general, I'm excited about trialling more peer-support interventions. This is the approach I took with UChicago EA's career planning program,* which was in turn inspired by microsolidarity practices. I think these interventions provide a useful alternative to the more individual-focused approaches such as 1:1s, 80k career advising, and one-off events.

*It's worth noting that this one iteration did update me towards "selection is important," which seems similar to what Steve ... (read more)

Thanks, this is a good tip! Unfortunately, the current options I'm considering seem more hands-off than this (i.e., the expectation is that I would start with little oversight from a manager), but this might be a hidden upside because I'm forced to just try things. : )

Thank you for this - I found it at least as useful as Luisa's (fantastic) post. : ) 

I teared up reading this, mostly because I felt really validated in how I've slowly been tackling my imposter syndrome (getting feedback, reminding myself not to focus on comparisons, focusing on better mapping the world and not making useless value judgments). I also happen to think that you are a wonderful member of the EA community, who is doing good work with the Forum, so this nudges me towards thinking that if really cool people feel this way, maybe I can be a really cool person too!

Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"

edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role. 

But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an indivi... (read more)

A big concern that's cropped up during my current work trial is whether I'm actually just not agentic/strategic/have-good-judgment-enough to take on strategy roles at EA orgs.

I think part of this is driven by low self-confidence, but part of this is the very plausible intuition that not everyone can be in the heavy tail and maybe I am not in the heavy tail for strategy roles. And this feels bad, I guess, because part of me thinks "strategy roles" are the highest-status roles within the meta-EA space, and status is nice.

But not nice enough to sacrifice impa... (read more)

6
Kirsten
2y
One approach I found really helpful in transitioning from asking a manager to making my own strategic decisions was going to my manager with a recommendation and asking for feedback on it (or, failing that, a clear description of the problem and any potential next steps I can think of, like ways to gain more information). This gave me the confidence to learn how my organisation worked and know I had my manager's support for my solution, but pushed me to develop my own judgment.
3
Miranda_Zhang
2y
Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"

edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role.

But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an individual, can take responsibility for) and world impact (how much good this creates for the world) converge. In this case, there's no such thing as "strategic enough" - if my comparative advantage doesn't lie in strategy, that doesn't mean I'm not "strategic enough," because I was never 'meant to' be in strategy anyway!

So the question isn't, "Am I strategic enough?" but rather, "Am I more suited for strategy-heavy roles or strategy-light roles?"

I feel like this post relies on an assumption that this world is (or likely could be) a simulation, which made it difficult for me to grapple with. I suppose maybe I should just read Bostrom's Simulation Argument first.

But maybe I'm getting something wrong here about the post's assumptions?

1
prattle
2y
I think the excerpt is getting at: "Maybe all possible universes exist (no claim about likelihood made, but an assumption for the post). If so, it is likely that there are some possible universes -- with way more resources than ours -- running a simulation of our universe. The behaviour of that simulated universe is the same as ours (it's a good simulation!), and in particular, the behaviour of the simulations of us is the same as our behaviours. If that's true, our behaviours could, through the simulation, influence a much bigger and better-resourced world. If we value outcomes in that universe the same as in ours, maybe a lot of the value of our actions comes from their effect on the big world." I don't know whether that counts as "the world likely could be a simulation" according to how you meant that? In particular, I don't think Wei Dai is assuming we are more likely in a simulation than not (or, as some say, just "more in a simulation than not").

Really fantastic. Feels like this could be the new 'utopia speech!'

Thanks for this! This is exactly the kind of programming I was thinking of when I reflected on the personal finance workshop I ran for my group.

Question - what leads you to think the below?

The happiness course increased people’s compassion and self-trust, but it may have reduced the extent to which they view things analytically (i.e. they may engage more with their emotions to the detriment of their reason).

1
Fergus
2y
This might not be well-founded at all, and it might well (and could even likely) be the case that higher levels of happiness lead to clearer thinking. I suppose I was thinking about a bit of a dichotomy between analytical, focussed attention and expansive awareness (while I appreciate that this is an oversimplification, something like the distinction between 'left-brain thinking' and 'right-brain thinking'). My understanding is that 'left-brain thinking' can contribute to anxiety and cause one to be overly critical of oneself, but can also facilitate critical thinking regarding whether e.g. a cause area which seems noble is relatively more important. 'Right-brain thinking' might facilitate greater creativity and imagination (for which this course would, I imagine, be helpful), but may lead one to be less analytically rigorous.

I think there's room for divergence here (i.e., I can imagine longtermists who only focus on the human race) but generally, I expect that longtermism aligns with "the flourishing of moral agents in general, rather than just future generations of people." My belief largely draws from one of Michael Aird's posts.

This is because many longtermists are worried about existential risk (x-risk), which specifically refers to the curtailing of humanity's potential. This includes both our values⁠—which could lead to wanting to protect alien life, if we consider them ... (read more)

Thanks for writing this up - I definitely feel like the uni pipeline needs to flesh out everything between the Intro Fellowship and graduating (including options for people who don't want to be group organizers). 

Re: career MVP stuff, I'm running an adaptation of GCP's career program that has been going decently! I think career planning and accountability is definitely something uni groups could do more of.

Hmm. I am sometimes surprised by how often LW posts take something I've seen in other circumstances (e.g., CBT) and repackage it. This is one of those instances - which, to be fair, Scott Alexander completely acknowledges!

I like the reminder that "showing people you are more than just their opponent" can be a simple way to orient conversations towards a productive discussion. This is really simple advice but useful in polarized/heated contexts. I feel like the post could have been shortened to just the last half, though.

Upvoted because I thought this was a novel contribution (in the context of longtermism) and because I feel some intuitive sympathy with the idea of maintaining-a-coherent-identity.

But also agree with other commenters that this argument seems to break down when you consider the many issues that much of society has since shifted its views on (c.f. the moral monsters narrative).

I still think there's something in this idea that could be relevant to contemporary EA, though I'd need to think for longer to figure out what it is. Maybe something around option valu... (read more)

Thanks for synthesizing a core point that several recent posts have been getting at! I especially want to highlight the importance of creating a community that is capable of institutionally recognizing + rewarding + supporting failure.

What can the EA community do to reward people who fail? And - equally important - how can the community support people who fail? Failing is hard, in no small part because it's possible that failure entails real net negative consequences, and that's emotionally challenging to handle.

With a number of recent posts around failure... (read more)

I actually prefer "scale, tractability, neglectedness" but nobody uses that lol
