All of Gage's Comments + Replies

So glad you're doing this!!

I agree many people believe in the asymmetry, and that it is likely one reason people care about animal welfare but not longtermism. However, I think you're conflating a person-affecting view with the asymmetry, which are separate views. I hate to argue semantics here, but the person-affecting view is concerned only with the welfare of existing beings, not with the creation of negative lives, no matter how bad they are. Again, neither of these is my view, but they are likely held by some people.

5
MichaelStJules
4mo
There are multiple views considered "person-affecting views", and I think the asymmetry (or specific asymmetric views) is often considered one of them. What you're describing is a specific narrow/strict person-affecting restriction, also called presentism. I think it has been called the person-affecting view or the person-affecting restriction, which is of course confusing if there are multiple views people consider person-affecting. The use of "person-affecting" may have expanded over time.
6
Dustin Crummett
4mo
They are separate views, but related: people with person-affecting views usually endorse the asymmetry, people without person-affecting views usually don't endorse the asymmetry, and person-affecting views are often taken to (somehow or other) provide a kind of justification for the asymmetry. The upshot here is that it wouldn't be enough for people at OP to endorse person-affecting views: they'd have to endorse a version of a person-affecting view that is rejected even by most people with person-affecting views, and that independently seems gonzo--one according to which, say, I have no reason at all not to push a button that creates a trillion people who are gratuitously tortured in hell forever.

Very roughly, how this works: person-affecting views say that a situation can't be better or worse than another unless it benefits or harms someone. (Note that the usual assumption here is that, to be harmed or benefited, the individual doesn't have to exist now, but they have to exist at some point.) This is completely compatible with thinking it's worse to create the trillion people who suffer forever: it might be that their existing is worse for them than not existing, or harms them in some non-comparative way. So it can be worse to create them, since it's worse for them. And that should also be enough to get the view that, e.g., you shouldn't create animals with awful lives on factory farms.

Of course, usually people with person-affecting views want it to be neutral to create happy people, and then there is a problem about how to maintain that while accepting the above view about not creating people in hell. So somehow or other they'll need to justify the asymmetry. One way to try this might be via the kind of asymmetrical complaint-based model I mentioned above: if you create the people in hell, there are actual individuals you harm (the people in hell), but if you don't create people in heaven, there is no actual individual you fail to benefit (since the potential

Very good points! One objection I think you didn’t mention that might be on OP’s mind in neartermist allocations has to do with population ethics. One reason many people are neartermist is that they subscribe to a person-affecting view whereby the welfare of “merely potential” beings does not matter. Since basically all animal welfare interventions either (1) cause fewer animals to exist or (2) change welfare conditions for entire populations of animals, it seems extremely unlikely the animals who would otherwise have lived the higher suffering liv... (read more)

Generally, people with person-affecting views still want it to be the case that we shouldn't create individuals with awful lives, and probably also that we should prefer creating someone whose life is less net-negative over someone whose life is more net-negative. (This relates to the supposed procreation asymmetry, where, allegedly, the fact that a kid would be really happy is not a reason to have them, but the fact that a kid would be in constant agony is a reason not to have them.) One way to justify this would be the thought that, if you don't c... (read more)

3
Ariel Simnegar
4mo
Thanks Gage! That's a good point I hadn't considered! I don't think that's OP's crux, but it is a coherent explanation of their neartermist cause prioritization.

Re conflict of interest concerns: I'd go ahead and apply both for membership as a donor and for funding for your project. Others may do the same. We are still considering how to weigh COI concerns against the potential value of allowing those applications in, and plan to arrive at a more conclusive policy before deciding whether or not to admit members/applications.

Re referring other applications: similar to the above, though I think there's less conflict here, and it would perhaps unfairly limit many good applications if we excluded them merely because you are well ... (read more)

2
Vincent van der Holst
8mo
All clear Gage, thanks! Our application is in, and I'll send the opportunity to some who I think should apply. I'll also apply as a member. I believe I'll have poor judgement in some areas (and will be clear about that), but I might add something meaningful in others, like business models, economic viability, and marketing.

We are connected to Nonlinear Network, but there is currently no interaction with them at least for this first round. We may interact with them in the future after our first round and depending on how well-aligned our goals end up being.

Ok, thank you for the clarification. I think that makes sense.

I'm curious why you and many EAs who focus on longtermism don't suggest donating to longtermist cause areas (examples often focus on GiveWell or ACE charities). It seems like if orgs I respect, like Open Phil and the Long-Term Future Fund, are giving to longtermist areas, then they think that's among the most important things to fund, which confuses me when I then hear longtermists acting like funding is useless on the margin or that we might as well give to GiveWell charities. It gives me a sense that perhaps there's either some contradiction going on, or... (read more)

6
Benjamin_Todd
2y
I don't mean to imply that, and I agree it probably doesn't make sense to think longtermist causes are top and then not donate to them. I was just using 10x GiveDirectly as an example of where the bar is within near termism. For longtermists, the equivalent is donating to the EA Long-term or Infrastructure Funds. Personally I'd donate to those over GiveWell-recommended charities. I've edited the post to clarify.

While I agree there is a good signaling benefit, I think you need to be a bit more rigorous in figuring out just how impactful it is, and what the ultimate goal of signaling is. Taking your $100k/year example with GiveWell's ~$5k/life saved, that'd mean the signaling value of one's donations below this amount is better than saving about 20 lives. That doesn't seem right to me... And if you think that signaling is valuable for community building, it's probably way more effective to just donate to community building (e.g. the EA Infrastructure Fund) than anythin... (read more)

I am now starting a book giveaway based on these data/arguments at my EA Austin fellowship, and I managed to buy a bunch of used copies of EA books for an average of $6-7 per book, including shipping, so with a bit of thrift you can get some great deals! Amazon and eBay, at least in the US, make buying used books quite cheap.

I have an idea to increase EA donation matches on Facebook's Giving Tuesday, and I want your feedback! PLEASE FILL OUT THIS GOOGLE FORM here https://forms.gle/Kb9ieaN8ZvxBkrZz6 and leave a comment after reading this.

I am considering creating and distributing an automated tool that can schedule and execute donations immediately when matching begins, even if the donor is asleep or AFK. Last year, about 50% of the $1.2m+ donated by EAs was matched. With this tool, this percentage could easily approach 100% (increasing match funds to EA causes by $... (read more)

2
Aaron Gertler
4y
There's been some discussion of this post on Facebook. I'm adding the link here so that there don't end up being two separate conversations where people aren't aware of the other one.