Recent Discussion

Huge thanks to Alex Zhu, Anders Sandberg, Andrés Gómez Emilsson, Andrew Roberts, Anne-Lorraine Selke (who I've subbed in entire sentences from), Crichton Atkinson, Ellie Hain, George Walker, Jesper Östman, Joe Edelman, Liza Simonova, Kathryn Devaney, Milan Griffes, Morgan Sutherland, Nathan Young, Rafael Ruiz, Tasshin Fogelman, Valerie Zhang, and Xiq for reviewing or helping me develop my ideas here. Further thanks to Allison Duettmann, Anders Sandberg, Howie Lempel, Julia Wise, and Tildy Stokes, for inspiring me through their lived examples.

I did not believe that a Cause which stood for a beautiful ideal […] should demand the denial of life & joy. 

– Emma Goldman, Living My Life

This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:

Totalized by an ought, I...

MalcolmOcean
Appreciating this. It's helping me see that part of how I didn't fall deeper into EA than I did is that I already had a worldview that viewed obligations as confused in much the sort of way you describe... and so I saw EA as sort of "to the extent that you're going to care about the world, do so effectively" and also "have you noticed these particular low-hanging fruit?" and also just "here's a bunch of people who care about doing good things and are interesting". These obligations are indeed a kind of confusion—I love how you put it here:

I did get infected with a bit of panic about x-risk stuff though, and this caused me to flail around a bunch and try to force reality to move faster than it could. I think Val describes the structure of my panic quite aptly in Here's the exit [https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/here-s-the-exit]. It wasn't a sense of obligation, but it was a sense of "there is a danger; feeling safe is a lie" and this was keeping me from feeling safe in each moment even in the ways in which I WAS safe in those moments (even if a nuke were to drop moments later, or I were to have an aneurysm or whatever). It was an IS not an OUGHT but nonetheless generated an urgent sense of "the world's on fire and it's up to me to fix that". But no degree of shortness to AI timelines benefits from adrenaline—even if you needed to pull an all-nighter to stop something happening tomorrow, calm steady focus will beat neurotic energy.

It seems to me that the obligation structure and the panic structure form two pieces of this totalizing memeplex that causes people to have trouble creatively finding good win-wins between all the things that they want. Both of them have an avoidant quality, and awayness motivation is WAY worse at steering than towardsness motivation [https://malcolmocean.com/2022/02/towardsness-awayness-motivation-arent-symmetrical/]. Are there other elements? That seems worth mapping out!

Ah, I realized I actually wanted to quote this paragraph (though the one I quoted above is also spot on):

 It made me angry. I felt like I’d drunk the kool-aid of some pervasive cult, one that had twisted a beautiful human desire into an internal coercion, like one for a child you’re trying to get to do chores while you’re away from home.

I felt similarly angry when I realized that my well-meaning friends had installed a shard of panic in my body that made "I'm safe" feel like it would always be false until we had a positive singularity. I had to reclaim ...

Key Takeaways

  • Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.
  • We take the following ideas to be the strongest reasons in favor of a neuron count proxy:
    • neuron counts are correlated with intelligence and intelligence is correlated with moral weight,
    • additional neurons result in “more consciousness” or “more valenced consciousness,” and
    • increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.
  • However:
    • with regard to intelligence, we can question both the extent to which more neurons correlate with intelligence and whether greater intelligence in fact predicts greater moral weight; 
    • many ways of arguing that more neurons result in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to
...

Note: When we discuss an organism's "moral weight," what we ultimately mean is whether its existence should be prioritized over others' if a choice between them had to be made. "Moral weight," "moral value," "value," and "worthy of existential priority" are all essentially synonymous here.

Which is more likely to contribute a useful idea toward preventing potential apocalyptic scenarios, such as an AI takeover or Yellowstone erupting: a human or a chicken?

Obviously, a human. 

In addition to the plausibly increased ...

Pseudonym10
Neuron counts should not be used as a proxy for moral weights. Neurons are biological cells in the nervous system that are responsible for transmitting information, but they are not directly related to moral values or decisions. Moral weights are determined by an individual's values, beliefs, and ideas about what is right and wrong. Therefore, neuron counts cannot be used to accurately measure moral weights.
Adam Shriver
Here's the report on conscious subsystems: https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we

Introduction

To help grow the pipeline of AI safety researchers, I conducted a project to determine how demographic information (e.g., level of experience, exposure to AI arguments) affects AI researchers' responses to AI safety. I also examined other AI safety surveys to uncover current issues preventing people from becoming AI safety researchers. Specifically, I analyzed the publicly available data from the AI Impacts survey and asked AI Safety Support and AGI Safety Fundamentals for their survey data (huge thank you to all three organizations). Below are my results, which I hope will be informative to future field-building efforts.
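To give a concrete sense of the kind of breakdown involved, here is a minimal sketch of a cross-tabulation of attitude responses by a demographic variable. The file name and column names below are hypothetical placeholders, not the actual survey schema:

```python
# Illustrative sketch only: cross-tabulate safety attitudes by an experience bracket.
# The file name and column names are hypothetical, not the real survey schema.
import pandas as pd

responses = pd.read_csv("ai_impacts_survey.csv")  # hypothetical CSV export

# Proportion of each attitude category within each experience bracket
table = pd.crosstab(
    responses["experience_bracket"],
    responses["safety_concern_level"],
    normalize="index",
)
print(table.round(2))
```

The same pattern works for any other demographic column by swapping in a different grouping variable.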

This work was done as part of the AI Safety Field-Building Hub; thanks to Vael Gates for comments and support. Comments and feedback are very welcome, and all mistakes are my own. 

TLDR

  • AI
...

Wonderful to see an analysis of all this even more wonderful data coming out of the programs! We're also happy to share our survey results (privately and anonymized) with people interested:

  • Demographics, feedback, and AIS attitudes before and after an Alignment Jam (for the first and second versions)
  • A 70-respondent in-depth survey on attitudes towards AI risk arguments (basically, "what converted you")
  • An AI safety pain points survey that was also done in-person with several top AIS researchers
  • An AI safety progress survey about P(doom)s etc.

We also have a few other surveys but the ones above are probably most interesting in the context of this post.


Alternative Title: The Parable of the Crimp

If you watch really proficient rock climbers, you’ll see they can hold themselves up, dozens of feet above the ground, with just the tips of their fingers on the tiniest ledge of rock, about the width of a pencil, called a crimp. If I had not seen it, I would have said it was impossible. When I tried to do it myself, I became convinced that it’s impossible. The feeling! The aching in your fingers and the awkwardness of the angle tearing at your finger-bones are unbearable. (When I go climbing, I have to hang on to massive, handle-shaped holds called jugs, which are literally the easiest of the options.)

I think the crimp holds a few valuable lessons. The first is just how tough people can be with...

I want to run the listening exercise I'd like to see.

  1. Get popular suggestions
  2. Run a polis poll
  3. Make a google doc where we research consensus suggestions/ near consensus/consensus for specific groups
  4. Poll again

Stage 1

Give concrete suggestions for community changes. 1–2 sentences only.

Upvote a suggestion if you think it is worth putting in the polis poll, and agreevote if you think it is true.

Agreevote if you think they are well-framed.

Aim for them to be upvoted. Please add suggestions you'd like to see.

I'll take the top 20–30.

I will delete/move to comments top-level answers that are longer than 2 sentences.

Answer by Elika, Dec 06, 2022

There should be alternatives to EAGs/EAGxs: ones that are cause-area specific and/or for people interested in EA ideas who don't necessarily want to call themselves EAs.

Chris Leong
I found this description unclear
Linch
Note that the Twitter community is public for reading, just not for posting.

In the EA movement, people will sometimes talk about charitable funds as if they are a new idea. For example, the recent Giving What We Can post "Why we recommend using expert-led charitable funds" (forum discussion) opens with:

Funds are a relatively new way for donors to coordinate their giving to maximise their impact.

There's a bit of a cute response that would be fun to write, except that it isn't quite true:

Donating through a fund that is able to put more time and effort into evaluating charitable options is not a new idea, and it is very natural if you're thinking along EA lines. In fact, the idea goes back at least to the last time people tried to invent effective altruism, 150 years ago. While that effort has been through a few names over time, at this point
...

The analogy here would be if GiveWell, in addition to making grants to other charities, also ran its own on-the-ground bed net distribution program.

Kirsten
Only GiveWell's Unrestricted Fund supports GiveWell's operations: https://www.givewell.org/our-giving-funds
Jasper Meyer
Oops - I realized I understood this incorrectly and have edited my comment. Thank you for the clarification.

This is a free idea that I am not currently working on making happen. If you are interested in trying to make it happen, I highly encourage you to undertake further investigations.

Bottom-Line Up Front

As the number of universities with EA student groups grows, EA should develop a wider variety of standardized activities that such groups can run. Reading groups and fellowships are great, but probably less exciting than ideal. Competitive, student-friendly academic activities (e.g., debate, quiz bowls, STEM olympiads, hackathons, moot courts) are often a fun way to engage people interested in a particular area.

Thus the idea for Grantmaking Bowl: A collaborative competition wherein EA student groups are asked to analyze a common set of EA granting case studies across several cause areas (which could be based...

Infinitely easier said than done, of course, but some Shortform feedback/requests:

  1. The link to get here from the main page is awfully small and inconspicuous (1 of 145 individual links on the page according to a Chrome extension)
    1. I can imagine it being placed near, or styled like:
      1. "All Posts" (top of sidebar)
      2. "Recommendations" in the center
      3. "Frontpage Posts", but to the main section's side or maybe as a replacement for it you can easily toggle back and forth from
  2. Would be cool to be able to sort and aggregate like with the main posts (nothing to filter by afaik
...

People react differently when encountering effective altruism for the first time. Some immediately find the ideas appealing and want to learn more, whereas others are much less enthusiastic about them. We suspect that people who immediately find the ideas appealing are more likely to become highly engaged EAs. Let’s call these people proto-EAs. What makes someone a proto-EA?

In a series of surveys, we studied the moral psychological factors that predict immediate interest in effective altruism. We found that the E (“effectiveness-focus”) and the A (“expansive altruism”) are psychologically distinct factors. Both are required to make someone a proto-EA. But only a few people score highly on both. We hope that a deeper understanding of the psychology of proto-EAs can prove practically useful for the community.

Previous research

In previous...

Thank you for researching this; this is incredibly valuable.
I noticed that the OUS-Impartial Beneficence subscale correlates well with expansive altruism and effectiveness focus. Maybe I skipped over it, but did you include in your results whether this OUS subscale had higher predictive power than your two new factors?
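One way such a comparison could be checked is sketched below; it fits one regression on the OUS-Impartial Beneficence subscale and another on the two new factors, then compares adjusted R². The data file and column names are hypothetical placeholders rather than anything taken from the study:

```python
# Illustrative sketch only: compare predictive power via adjusted R^2.
# The data file and column names are hypothetical, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("proto_ea_survey.csv")  # hypothetical data

m_ous = smf.ols("ea_interest ~ ous_impartial_beneficence", data=df).fit()
m_two = smf.ols("ea_interest ~ expansive_altruism + effectiveness_focus", data=df).fit()

print(f"OUS-IB only:     adj. R^2 = {m_ous.rsquared_adj:.3f}")
print(f"Two new factors: adj. R^2 = {m_two.rsquared_adj:.3f}")
```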