Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:
"The term 'longtermism' is misleading because in practice 'longtermism' means 'concern over short AI timelines', and in fact many 'longtermists' are concerned with events on a much shorter time scale than the rest of EA."
I thought that was a surprising and interesting argument, though I don't recall who initially made it. Does anyone remember?
The most important thing in life is to be free to do things. There are only two ways to ensure that freedom — you can be rich or you can reduce your needs to zero. I will never be rich, so I have chosen to crank down my desires. The bureaucracy cannot take anything from me, because there is nothing to take.
Colonel John Boyd
I think this comment, while quite rude, does get at something valuable. There's an argument that goes "hmm, the outside view says this is absurd, we should be really sure of our inside view before proceeding" and I think that's sometimes a bit of a neglected perspective in rationalist/EA spaces.
I happen to know that the inside view on HPMoR bringing people into the community is very strong, and that the inside view on Eli Tyre doing good and important work is also very strong. I'm less familiar with the details behind the other grants that anoneaagain highlighted, but I do think that being aware and recognizing the... unorthodoxy of these proposals is important, even if the inside view does end up overriding that.
I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.
Will there be an update to this post with respect to which projects are actually funded following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.
Just posting to acknowledge that I've seen this - my full reply will be long enough that I'm probably going to make it a separate post.
Neither is poverty alleviation or veganism or anything else in practice.
Again, strong disagree - many things are not politicized and can be answered more directly. One of the main strengths of EA, in my view, is that it isn't just another culture war position (yet?) - consider Robin Hanson's points on "pulling the rope sideways".
You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you're saying that the actual conclusions matter.
Sorry, I perhaps wasn't specific enough in my original reply. The "less authoritative" thing was meant to apply to the entire document, not just this one section - that's why I also said I wasn't sure documents like this are good for EA as a movement.
I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.
Strong disagree. Political policy in practice isn't "just another question to answer" - maybe it should be, but that's not the world we live in - and acting as if it is strikes me as risky.
Like I said, that's not really the point - it also doesn't meaningfully resolve that particular issue, because of course the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don't.
I dunno, maybe I'm overly cautious, but I'm not fond of someone publishing a well-made and official-looking "based on EA principles, here's who to vote for" document, since "EA principles" vary quite a bit - I think if EA becomes seen as politically aligned (with either major US party), that constitutes a huge constraint on our movement's potential.
I don't think there's much practical difference between "intrinsic moral interests" and "intrinsic moral rights", but that's not really the point - it's more that I think given such differences in perspective between EAs, I'm not sure that documents like this are great for EA as a movement. I would at least prefer to see them presented less... authoritatively?
I like that you've put the effort into creating this, but I'm not fond of the background assumptions here - there seem to be some elements that not all EAs might necessarily share. For instance, one section begins "Intrinsic moral rights do not exist" - that's certainly not what I believe and it seems inconsistent with other sections that talk about the "intrinsic moral weight" of animal populations, etc.
While the fact that you've "shown your work" with the Excel spreadsheet helps people evaluate the same issues with different weights, if someone is interested in areas that you've chosen to exclude it's less apparent how to proceed.
I do appreciate the work you've put into this, though!