Thanks for this - I think this captures a quality (or set of qualities?) that hasn't previously had such an accurate handle! I think, in many ways, sincerity is the quality that leads people to really 'take seriously' (i.e., follow through in a coherent way) the project of doing good.
I see! Yes, I agree that more public "buying time" interventions (e.g. outreach) could be net negative. However, for the average person entering AI safety, I think there are less risky "buying time" interventions that are more useful than technical alignment.
To clarify, you think that "buying time" might have a negative impact [on timelines/safety]?
Even if you think that, I'm pretty uncertain about the impact of technical alignment, if we're talking about all work that is deemed 'technical alignment.' e.g., I'm not sure that on the margin I would prefer an additional alignment researcher (without knowing what they were researching or anything else about them), though I think it's very unlikely that they would have net-negative impact.
So, I think I disagree that (a) "buying time" (excluding weird pivotal a...
To clarify, I agree that 80k is the main actor who could + should change people's perceptions of the job board!
I find myself slightly confused - does 80k ever promote jobs they consider harmful (but ultimately worth it if the person goes on to leverage that career capital)?
My impression was that all career-capital-building jobs were ~neutral or mildly positive. My stance on the 80k job board (that the setup is largely fine, though the perception of it needs shifting) would change significantly if 80k were listing jobs they considered net negative in themselves and worthwhile only because they expected the person to later take an even higher-impact role as a result.
I always appreciate reading your thoughts on the EA community; you are genuinely one of my favorite writers on meta-EA!
woah! I haven't tried it yet but this is really exciting! the technical changes to the Forum have seemed impressive to me so far. I also just noticed that the hover drop-down on the username is more expanded, which is visually unappealing but probably more useful.
I am looking forward to someone creating a wacky dashboard where we can learn who are the most-upvoted but also most-disagreed-with posters on the Forum. If we think EA is getting too insular / conformist, maybe next time instead of a Criticism & Red-Teaming contest, we could give out an EA Forum Contrarianism Prize! :P
Thanks for contributing to one of the most important meta-EA discussions going on right now (cf. this similar post)! I agree that there should be splinter movements that revolve around different purposes (e.g., x-risk reduction, effective giving) but I disagree that EA is no longer accurately described by 'effective altruism,' and so I disagree that EA should be renamed or that it should focus on "people who want to help the less fortunate (humans or animals) to the best of their abilities, without anyone trying to convince them that protecting t...
To clarify, are you asking for a theory of victory around advocating for longtermism (i.e., what is the path to impact for shifting minds around longtermism) or for causes that are currently considered good from a longtermist perspective?
Why do you think you'd need to "force yourself?" More specifically, have you tested your fit for any sort of AI alignment research?
If not, I would start there! e.g., I have no CS background, am not STEM-y (was a Public Policy major), and told myself I wasn't the right kind of person to work on technical research ... But I felt like AI safety was important enough that I should give it a proper shot, so I spent some time coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly...
edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!
Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!
However, I'm not really following your argument.
...Several of your arguments are arguments for the view that "intrinsically positive lives do not exist," [...] It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be des
+1, though would it be possible to hide comments that are just tags? If so, I think I'd be weakly positive about this feature.
I really liked reading this, as I think it captures my most recent concerns/thoughts around the EA community.
Thanks, this makes sense! Yeah, this is why many arguments I see start at a more abstract level, e.g.
This makes a lot of sense, thanks so much!
I think I agree with this point, but in my experience I don't see many AI safety people using these inferentially-distant/extreme arguments in outreach. That's just my very limited anecdata though.
I'm always keen to think about how to more effectively message EA ideas, but I'm not totally sure what the alternative, effective approach is. To clarify, do you think Nintil's argument is basically the right approach? If so, could you pick out some specific quotes and explain why/how they are less inferentially distant?
Hi, I'm the author of Nintil.com (We met at Future Forum :)
Essentially, a core rule in argumentation is that the premises have to be more plausible than the conclusion. For many people, foom scenarios, nanotech, etc. make them switch off.
I have this quote:
...Here I want to add that the lack of criticism is likely because really engaging with these arguments requires an amount of work that makes it irrational for someone who disagrees to engage. I make a similar analogy here with homeopathy: Have you read all the relevant homeopathic literatur
Oh, I love(!) this. Really resonates, particularly the idea that feeling like your worth depends on your impact perversely reduces your capacity to take risks (even when the EV suggests that's what you should do).
I feel like this idea of unconditional care has been the primary driver of my evolving relationship with EA. FWIW, I think a crucial complement to this is cultivating the same sense of care for yourself.
Thank you for this, particularly for writing it in a way that feels (to someone who isn't quite disillusioned) considerate to people who are experiencing EA disillusionment. I definitely resonate with the suggestions - these are all things I think I should be doing, particularly cultivating non-EA relationships, since I moved to the Bay Area specifically to be in an EA hub.
Also really appreciate your reflection on 'EA is a question' as more of an aspiration than a lived reality. I, along with other community-builders I know, would point to that as a 'definition' of EA but would (rightly) come across people who felt that it simply wasn't very representative of the community's culture.
Thanks for this! I'm hoping to start a future-proof personal website + blog and was looking into using Hugo w/ Github pages. What do you think of using static site generators as opposed to, say, Blot?
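(For context on why the static-site route appeals to me: the core of a static site generator is just 'content files in, HTML files out,' which is a big part of what makes it feel future-proof. Here's a toy Python sketch of that idea, purely illustrative - the directory names, template, and 'first line is the title' convention are all made up, and real generators like Hugo do far more.)

```python
from pathlib import Path

# Toy sketch of what a static site generator does: read plain-text content
# files, wrap each one in an HTML template, and write the results to an
# output folder that any static host (e.g. GitHub Pages) can serve.
# Illustrative only -- real generators like Hugo add Markdown parsing,
# themes, RSS, taxonomies, etc.

TEMPLATE = """<!DOCTYPE html>
<html>
  <head><title>{title}</title></head>
  <body>
    <h1>{title}</h1>
    {body}
  </body>
</html>
"""


def render_post(text: str) -> tuple[str, str]:
    """Treat the first non-empty line as the title, the rest as paragraphs."""
    lines = [line for line in text.strip().splitlines() if line.strip()]
    title = lines[0].lstrip("# ").strip() if lines else "Untitled"
    paragraphs = [f"<p>{line}</p>" for line in lines[1:]]
    return title, "\n    ".join(paragraphs)


def build(content_dir: str = "posts", output_dir: str = "public") -> None:
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for source in sorted(Path(content_dir).glob("*.md")):
        title, body = render_post(source.read_text(encoding="utf-8"))
        page = TEMPLATE.format(title=title, body=body)
        (out / f"{source.stem}.html").write_text(page, encoding="utf-8")


if __name__ == "__main__":
    build()
```

The appeal for me is that the output is plain HTML in a folder, so any static host (GitHub Pages included) can serve it and the content stays readable even if the tooling changes.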
Liked this a lot - reframing the goal of CB as optimizing for high alignment and high competence is useful.
I'm not sure I totally agree, though. I want there to be some EA community-building that is optimizing for alignment but not competence: I imagine this would be focused on spreading awareness of the principles—as there (probably) remains a significant number of people who may be sympathetic but haven't heard of EA—as well as encouraging personal reflection, application, and general community vibes. I haven't totally let go of the Singer & GWWC vision of spr...
Thank you for doing this - never thought I wanted this, but I definitely do! I also took notes but very messily, and it's so useful to have a summary (especially for people who haven't read it yet).
Strongly upvoted for fleshing out and articulating specific emotional phenomena that (a) I think drew me to EA and (b) have made it hard for me to actually understand + embody EA principles. I've perused a lot of the self-care tag and I don't think anyone has articulated it as precisely as you have here.
The quote below, in particular, captures a lesson that has been useful for me (even if it still leans on impact as a justification/rationale).
Ironically, having your impact define your self-worth can actually reduce your impact in multiple ways
I really appreciate the sentiment behind this - I get the sense that working on AI safety can feel very doom-y at times, and appreciate any efforts to alleviate that mental stress.
But I also worry that leaning into these specific reasons may lead to intellectual blindspots. E.g., believing that aligned AI will make every other cause redundant leads me to emotionally discount considerations such as temporal discount rate or tractability. If you can justify your work as a silver bullet, then how much longer would you be willing to work on that, even when it...
I found the concrete implications that distinguish this more cause-oriented model of EA really useful, thanks!
I also agree, at least based on my own perception of the current cultural shift (away from GHD and farmed animal welfare, and towards longtermist approaches), that the most marginally impactful meta-EA opportunities might increasingly be in field-building.
Optionality cost is a useful reminder that option value consists not only of minimising opportunity cost but also of increasing your options (which might require committing to an opportunity).
This line in particular feels very EA:
As Ami Vora writes, “It’s not prioritization until it hurts.”
I really respect your drive in leading this project and was excited to read all the updates!
Would love to view the EA office space design database too : )
I love that you wrote this, because I grappled with a slightly bigger version of this question ('move to the Bay') and wasn't able to get a detailed theory of change from the people who were recommending it to me.
I think point 4 is especially interesting and something that motivated my decision to move (essentially, 'experience Berkeley EA culture'). Ironically, most people focused on the first three points (network effects). I do think I'm unsure whether point 4 (specifically, the shift towards maximization, which feels related to totalising EA) is a n...
Realizing that what drove me to EA was largely wanting to "feel like I could help people" and not "help the most beings." This leads me to, for example, really be into individually helping as many people as I can to flourish (at the expense of selecting for people who might be able to make the most impact)*.
This feels like a useful specification of my "A" side and how/why the "E" side is something I should work on!
*A more useful reframing of this is to put it into impact terms. Do I think the best way to make impact is to
(1) find the right contexts/prob...
This was interesting, thanks! I haven't heard of Mastermind Groups before but in general, I'm excited about trialling more peer-support interventions. This is the approach I took with UChicago EA's career planning program,* which was in turn inspired by microsolidarity practices. I think these interventions provide a useful alternative to the more individual-focused approaches such as 1:1s, 80k career advising, and one-off events.
*It's worth noting that this one iteration did update me towards "selection is important," which seems similar to what Steve ...
Thanks, this is a good tip! Unfortunately, the current options I'm considering seem more hands-off than this (i.e., the expectation is that I would start with little oversight from a manager), but this might be a hidden upside because I'm forced to just try things. : )
Thank you for this - I found it at least as useful as Luisa's (fantastic) post. : )
I teared up reading this, mostly because I felt really validated in how I've slowly been tackling my imposter syndrome (getting feedback, reminding myself not to focus on comparisons, focusing on better mapping the world and not making useless value judgments). I also happen to think that you are a wonderful member of the EA community, who is doing good work with the Forum, so this nudges me towards thinking that if really cool people feel this way, maybe I can be a really cool person too!
Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"
edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role.
But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an indivi...
A big concern that's cropped up during my current work trial is whether I'm actually just not agentic/strategic/have-good-judgment-enough to take on strategy roles at EA orgs.
I think part of this is driven by low self-confidence, but part of this is the very plausible intuition that not everyone can be in the heavy tail and maybe I am not in the heavy tail for strategy roles. And this feels bad, I guess, because part of me thinks "strategy roles" are the highest-status roles within the meta-EA space, and status is nice.
But not nice enough to sacrifice impa...
I feel like this post relies on an assumption that this world is (or likely could be) a simulation, which made it difficult for me to grapple with. I suppose maybe I should just read Bostrom's Simulation Argument first.
But maybe I'm getting something wrong here about the post's assumptions?
Thanks for this! This is exactly the kind of programming I was thinking of when I reflected on the personal finance workshop I ran for my group.
Question - what leads you to think the below?
The happiness course increased people’s compassion and self-trust, but it may have reduced the extent to which they view things analytically (i.e. they may engage more with their emotions to the detriment of their reason).
I think there's room for divergence here (i.e., I can imagine longtermists who only focus on the human race) but generally, I expect that longtermism aligns with "the flourishing of moral agents in general, rather than just future generations of people." My belief largely draws from one of Michael Aird's posts.
This is because many longtermists are worried about existential risk (x-risk), which specifically refers to the curtailing of humanity's potential. This includes both our values—which could lead to wanting to protect alien life, if we consider them ...
Thanks for writing this up - I definitely feel like the uni pipeline needs to flesh out everything between the Intro Fellowship and graduating (including options for people who don't want to be group organizers).
Re: career MVP stuff, I'm running an adaptation of GCP's career program that has been going decently! I think career planning and accountability is definitely something uni groups could do more of.
Hmm. I am sometimes surprised by how often LW posts take something I've seen in other circumstances (e.g., CBT) and repackage it. This is one of those instances - which, to be fair, Scott Alexander completely acknowledges!
I like the reminder that "showing people you are more than just their opponent" can be a simple way to orient conversations towards a productive discussion. This is really simple advice but useful in polarized/heated contexts. I feel like the post could have been shortened to just the last half, though.
Upvoted because I thought this was a novel contribution (in the context of longtermism) and because I feel some intuitive sympathy with the idea of maintaining-a-coherent-identity.
But I also agree with other commenters that this argument seems to break down when you consider the many issues that much of society has since shifted its views on (cf. the moral monsters narrative).
I still think there's something in this idea that could be relevant to contemporary EA, though I'd need to think for longer to figure out what it is. Maybe something around option valu...
Thanks for synthesizing a core point that several recent posts have been getting at! I especially want to highlight the importance of creating a community that is capable of institutionally recognizing + rewarding + supporting failure.
What can the EA community do to reward people who fail? And - equally important - how can the community support people who fail? Failing is hard, in no small part because it's possible that failure entails real net negative consequences, and that's emotionally challenging to handle.
With a number of recent posts around failure...
Thanks a lot for your work on this neglected topic!
You mention,
Could you give more detail on which of the counter-considerations (and motivations) you consider strongest?