233 karma · Joined Sep 2022 · Pursuing a doctoral degree (e.g. PhD)


  • Organizer of Tucson Effective Altruism
  • Attended an EAGx conference
  • Received career coaching from 80,000 Hours
  • Attended an EA Global conference
  • Completed the ML Safety Scholars Virtual Program


Minor nitpick: "NEOs (objects smaller than asteroids)"

The definition of NEOs here seems wrong. Wouldn't it be more accurate to call them "Tiny NEOs?" The current definition makes it sound as if asteroids aren't NEOs, but most NEOs are asteroids.

'Hold fire on making projections' is the correct read, and I agree with everything else you mention in point 2. 

About point 1 — I think sharing negative thoughts is absolutely a-ok and important. I take issue with airing bold projections when basic facts of the matter aren't even clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged. 

I am unsure how I feel about takes like this. On one hand, I want EAs and the EA community to be a supportive bunch. So, expressing how you are feeling and receiving productive/helpful/etc. comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable why anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can. 

On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And when you kind of start believing in such forecasts, you might start saying similar things or expressing similar sentiments. In the worst case, the negative sentiment chain grows rapidly.

It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one will even care about this episode. But in the present, it becomes a mark against the movement/community. (I think a particular individual may have picked up one such comment from the Forum and posted it online to play to their audience and elevate negative sentiment around EA?)

Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board should resign, and they seem to be equally disappointed over recent events; this is more evidence that the safety vs. product disagreement likely didn't lead to Altman's ousting. There is also somewhat of a shift back to the "center," at least on Twitter, as there are quite a few reasonable, level-headed takes on what happened and also on EA. I don't know about the mood in the Bay though, since I don't live there.

I am unsure if I am expressing my point well, but this is my off-the-cuff take on your off-the-cuff take.

I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing, and I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so I would almost always avoid the LW emoji palette entirely. Maybe a few other important reactions could be added instead of all of them? Or maybe there could be a setting that lets people choose between a "condensed" and an "extended" emoji palette? Either way, just my two cents.

Couldn't the comment section under the episode announcement posts (like this one) serve the same purpose? Or are you imagining a different kind of discussion thread here?

The closest would be CEA's communication team, but as you point out: "it’s not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is."

I think it'd be challenging (and not in good taste) for CEA to craft responses on behalf of the entire EA community; it is better if individual EAs critique articles which they think misrepresent ideas within the movement.

I see the same recycled and often wrong impressions of EA far too often, so I appreciate you taking the time and doing this!

Thank you for sharing your impressions! Some comments and questions:

  1. Does longtermist institutional reform count as systemic change?
    1. Meta-question: What is systemic change? How do you define it?

I think this is a term that has become memetically dominant on the Left and has lost its meaning because it is used far too often and too casually. So, now whenever people mention the term, I am not quite sure I know what they mean by it.
  2. I think one speculative reason why longtermist circles don't discuss concerns like the ones you raise is because of a somewhat prevalent belief that the post-scarcity utopia will happen soon after AGI. In a nutshell: AGI will happen very soon, the creation of AGI will lead to ASI (or AGI+) fairly quickly, and if this whatchamacallit is sufficiently aligned, it will solve all our problems.

Even if an individual only somewhat subscribes to this notion, they may not think much about most present concerns, as these would all seem trivial. After all, they will soon be "solved" in the post-AGI world.[1]
  1. ^

    I don't think professional longtermist organizations operate on this belief or even entertain it.

I wholeheartedly agree with points 2 and 3, but I don't understand point 1. 

I don't know much about Benjamin Lay, but casually glancing through his Wikipedia page, it seems that his actions were morally commendable and supererogatory. Is the charge that he could have chosen his battles, or his approach to advocacy, more tactfully?

...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else. 

I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not to opine excessively or to convince participants to believe x over y.
