Xaq is a high school senior at Lehman Alternative Community School in Ithaca, NY, USA. Xaq doesn't yet have any income and hasn't started regularly donating to charity, but they plan to donate a significant portion of their future income to effective charities. Xaq is trying to build excitement about effective altruism ideas at their school.
I'm looking for advice on how to spread effective altruism messaging in my high school. I'm also looking for advice on how to prepare for an effective altruist lifestyle in the future, including advice on careers, college, and giving.
I'm happy to help others, but I have no expertise outside of reading books on effective altruism.
Firstly, I think this policy could benefit from greater specificity. To my knowledge, there's no explicit list of "main EA causes." The closest available is probably 80,000 Hours' list of the world's most pressing problems, but I don't see any suggestion that this is the official, definitive list of "main EA causes." I'd prefer that this article specify what constitutes a "main EA cause" for the purposes of restriction to Personal Blog posts. Further, how "tenuous" would a policy's connection to a main EA cause have to be for a post to be restricted to Personal Blog? The example "What John Smith's position on gun rights means for EA voters" is obviously tenuously connected, but some policies may be considered by some to be highly effective, or even vital, for helping humans, non-human animals, or future generations, even if they aren't directly related to an EA cause. For instance, there are those (I don't mean to imply I am or am not one of them) who believe that animal exploitation is inseparable from the capitalist system. A non-mainstream economic policy might be advocated on these or other EA-cause-related grounds, despite being only indirectly related.
Secondly, I fear this policy severely limits EA's ability to discuss a huge range of issues merely because their content is political (in the sense described), even when those issues may meet the criteria of being highly effective for additional people to work on or fund. Here's an example of an argument advocating additional attention to US elections (specifically the 2024 election), but I'm aware of other examples. Given the relative lack of attention to and sidelining of this issue (as far as I know, EA didn't organize to work on it in any meaningful fashion), I believe EA missed a big chance to contribute significantly to its cause areas. Restricting discussion of relatively inconsequential policies (relative to EA cause areas, which is a very high bar by design) makes sense to me. But limiting discussion of action related to EA cause areas because of that action's political nature (e.g., advocating a vote for candidate C, who in turn advocates for EA-related policy P) risks missed opportunities for doing good effectively.
It's also worth noting the possibility that the EA community is generally overlooking a specific (potentially policy-related) area whose potential for good is comparable to that of existing EA cause areas. This policy could be helping to sideline such a hypothetical area.
I understand that political discourse is often polarizing, but I've found the EA community to be an intellectual, respectful, and (when appropriate, such as on this forum) emotionally restrained one. While I strongly agree that "partisan political discussion tends to have a strong polarizing effect on public forums," I don't think this applies very strongly to this forum specifically, given the focused and responsible (in my experience) nature of this community. If you have evidence suggesting otherwise, specifically in the context of this particular forum, I'd be interested in seeing it and would potentially revise my opinion, but I've yet to see substantial evidence of this kind.
I propose that the moderation team allow a trial period during which political posts, such as those described, can appear on the Frontpage (if this hasn't already been tried). I'd expect some emotionally charged or disproportionately attention-grabbing discussion, but I'd expect the amount to be manageable for moderation. Further, I think a slight increase in unhelpful discussion is worth the reduced likelihood that this policy is preventing effective action. Moderation can use the data gathered during the trial to determine whether a permanent revision to this policy is warranted.
TL;DR: I don't think there's sufficient evidence to make such a claim.
It could go either way, but because the statement is phrased positively, I disagree. I think AGI is more likely to improve the conditions of non-human animals than not, because it may accelerate cultivated meat (and dairy, egg, etc.) technology to the point where it becomes cheaper than farmed meat, in which case the conditions of animals would considerably improve. However, if this doesn't occur, AGI could further scale up animal farming and its efficiency, considerably worsening the conditions of non-human animals. While technology has historically heralded expansions of the moral sphere, I think the case of non-human animals may be psychologically much more difficult by comparison. If AGI is independent of humans but still aligned with their interests (which is itself improbable), we could see it trying to improve the conditions of non-human animals for the same reasons we do, but this seems unlikely. Of course, the proposition takes no position on the likelihood that things will go well for animals, human or otherwise. It's unlikely AGI will significantly improve the conditions of wild animals: humans have little incentive to do so, and if the AGI is aligned with human interests, it probably won't either.