Thanks for the post — I only sort of skimmed the post and comments, and crucially I don't think this is what your post is really about, but it seems like you have the view that we're kinda clueless about whether factory farmed animals have good or bad lives. In reference to this, you mention in a comment: "It's hard to be confident of any view on this, when we understand so little about consciousness, animal cognition, or morality."
As an aside, the term "factory farmed animals" is a kind of weird category that includes both cows and chickens (among other animals). You could plausibly make the case that cows have net-positive lives, but it seems pretty difficult to say the same for chickens.
Sure, we don't understand everything about morality, but given the evidence we do have regarding animal suffering, plus a few other basic axioms and intuitions, it seems hard to put this at 50:50 or similar. There are a bunch of arguments in favor of factory farmed chickens having bad lives, and I'm not aware of many arguments that they have positive lives. I think the Holocaust case is interesting but a bit confusing, because those people (probably) had happy/positive lives before the Holocaust, and could have had happy/positive lives if they had been released. If someone were to intentionally breed humans into existence in order to place them in concentration camps (and later kill them), I think most plausible ethical theories would consider this uncontroversially bad.
I think this is broadly fair, and perhaps a reframing of “think more actively about your interests” would be better than just “think more actively about your career” for many readers.
That said, I think for a lot of people, what they’re immediately excited about doesn’t line up well with what might be good for their career, especially if they’re trying to do good. I worry that “keep noticing what excites you and find ways to do more of that” would lead some people down career paths with little impact, whilst also making it hard to transition to high impact roles in the future. I also suspect that many people’s passions are more flexible than they might expect, and that without careful planning, they may narrow down their options unnecessarily.
Yeah — this seems pretty reasonable to me. I'd not thought about this explicitly before, but the rough numbers/boundaries you provide seem quite plausible!
When (if ever) will marijuana be legal for recreational use, or effectively so, across all 50 US states?
Thanks Max! I'm also not certain that this is the correct approach, and I think there's a good case for longer-form conversations for the reasons you give. The rough case I'd make for the "maximizing" approach is:
1. It's easy to scale: you can easily gather 5-10 members of your group, give them 10-15 minutes of guidance, and put them on the stall. I slightly worry about group members who are newer to EA having long-form onboarding conversations with new and interested people (in EA Oxford, we've previously taken some time to verify that people are knowledgeable enough to have formal 1-1 conversations with newcomers).
2. Activities fairs are often noisy, and as such aren't the best environment for long-form conversations.
3. Even if you do have long-form conversations at the stall, they likely won't last longer than 5-10 minutes, which I think is generally not enough time for someone to properly understand what EA is. In longer conversations at activity fairs, I've often observed that people come across as somewhat skeptical of EA, but in such a way that, upon further reflection, I could imagine them being reasonably excited about it. As such, it may be better to optimize for driving attendance at longer-form events, such as a 1-1 coffee chat or a 1-hour introductory talk.
I agree that this approach could come across as unfriendly, and that it's important to make sure stall-runners are aware of this. Overall, I see this as a downside, but one that is probably worth it in the long run.
I remember some suggestions a while back to store EA Funds' cash (not crypto) in an investment vehicle rather than in a low-interest bank account. One benefit would be that donors could feel comfortable donating whenever they wish, rather than waiting until the last possible minute before funds are allocated (especially if the fund manager doesn't have a particular schedule). Just wondering whether there's been any thinking on this front?
I'm wondering how you see 1FTW's position changing due to the presence of OpenPhil and a shift towards a more money-rich, talent-poor community (across certain cause areas)?
In my eyes, the comparative advantage for student groups is more about driving engagement and plan changes and less about raising funds. Of course, money still goes a long way, but I'm skeptical that group leaders should be spending their time focusing on (relatively) small donations over building communities of talented, engaged individuals.
Is your view that 1FTW will be a better outreach vehicle (than standard community building techniques) for certain demographics? It seems that 1FTW attracts similar types of people to those the GWWC pledge would, but in higher quantities due to the lower barrier to entry. However, I'm skeptical that this lower barrier is necessarily a positive thing, since on average these individuals seem less likely to further engage with the EA community at large.
Is this something you're concerned about, or do you think these concerns are relatively minor?
Ah okay - I think I understand you, but this is entering areas where I become more confused and have little knowledge.
I'm also a bit lost as to what I meant by my latter point, so I'll think about it some more if possible.
By agentive I sort of meant "how effectively an agent is able to execute actions in accordance with their goals and values" - which seems to be independent of their values/how aligned they are with doing the most good.
I think this is a different scenario to the agent causing harm due to negative corrigibility (though I agree with your point about how this could be taken into account with your model).
It seems possible, however, that you could incorporate their values/alignment into corrigibility, depending on one's meta-ethical stance.
I really liked this post and the model you've introduced!
With regards to your pseudomaths, a minor suggestion: you could let your product notation equal how agentive our actor is. This would allow us to take into account impact that is negative (i.e., harmful processes) by multiplying the product by another factor that captures the sign of the action. The change in impact could then be proportional to the product of these two terms.
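To make the suggestion concrete (using my own notation here, since I'm not matching your post's symbols exactly): if each factor $p_i$ in the product captures how well one step of execution goes, the product can be read as agentiveness, with a separate sign term for whether the action itself is beneficial or harmful:

```latex
A = \prod_{i} p_i \quad \text{(agentiveness: how effectively the actor executes)}
s \in \{-1, +1\} \quad \text{(sign of the action: beneficial or harmful)}
\Delta \text{Impact} \propto s \cdot A
```

That way, a highly agentive actor pursuing a harmful goal produces a large *negative* change in impact rather than a large positive one, which is the case the original model seemed unable to represent.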