TruePath


Comments

The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit - or "Do we discount the future only because we won't live in it?"

I'm not sure I completely followed #1, but maybe this will answer what you are getting at.

I agree that the following argument is valid:

Either the time discounting rate is 0, or it is morally preferable to use your money/resources to produce utility now rather than to freeze yourself and produce utility later.
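
(To make the dichotomy concrete, here is the arithmetic as I read it; the notation is mine, not the post's:)

\[ V_{\text{now}} = u, \qquad V_{\text{later}} = e^{-\rho t}\, u , \]

so for any pure time-preference rate \(\rho > 0\) and delay \(t > 0\) we get \(V_{\text{later}} < V_{\text{now}}\); the two options come out equally good only when \(\rho = 0\).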

However, I still don't think you can argue that treating time discounting as irrelevant to what I selfishly prefer commits me to any particular view about whether discounting should be applied when evaluating what is morally preferable.  And I think this substantially reduces just how compelling the point is.  I do lots of things I'm aware are morally non-optimal; I probably should donate more of my earnings to EA causes, etc., but sometimes I choose to be selfish, and when I consider cryonics it's entirely as a selfish choice (I agree that even without discounting it's a waste in utilitarian terms).

(Note that I'd distinguish between saying some alternative is morally preferable and saying it is bad or blameworthy to take the less-preferable option, but that's getting a bit into the weeds.)

--

Regarding the theoretical problems, I agree that they aren't enough of a reason to accept a pure discounting rate.  Indeed, I'd go further and say that it's a mistake to infer things about what's morally good from the fact that we'd like our notion of morality to have certain nice properties.  We don't get to assume that morality is going to behave the way we would like it to; we've just got to do our best with the means of inference we have.

The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit - or "Do we discount the future only because we won't live in it?"

I ultimately agree with you (pure time discounting is wrong, even if our increasing wealth makes it a useful practical assumption), but I don't think your argument is quite as strong as you think (nor is Cowan's argument very good).

In particular, I'd distinguish my selfish emotional desires regarding my future mental states from my ultimate judgements about the goodness or badness of particular world states, and I think we can show these have to be distinct notions[1].  Someone defending pure time discounting could just say: while, as far as my selfish preferences go, I don't care whether I have another 10 happy years now or in 500 years, it's nevertheless true that, morally speaking, the world in which that utility is realized now is much better than the one in which it is realized later.

This is also where Cowan's argument falls apart.  The Pareto principle is violated only if a world in which one person is made better off and everyone else's position is unchanged isn't preferable to the default.  But he then makes the unjustified assumption that Sarah isn't made worse off by having her utility moved into the future.  That just begs the question, since, if we believe in pure time discounting, Sarah's future happiness really is worth only a fraction of what it would be worth now.  In other words, we are being asked to assume that only Sarah's subjective experience, and not the time at which it happens, affects her contribution to overall utility/world value.

Having said all this, I think every reason one has for adopting something like utilitarianism (or, hell, any form of consequentialism) screams out against accepting pure time preferences, even if rejecting them isn't formally required.  The only reason people even entertain pure discounting is that they are worried about the paradoxes you get into if you end up with infinite total utility (yes, difficulties remain even if you just try to directly define a preference relation on possible worlds).
--

^1: I mean, your argument basically assumes that, other things being equal, a world where my selfish desires are satisfied is better than one in which they are not.  While that is a coherent position to hold (it's basically what preference-satisfaction accounts of morality hold), it's not (absent some a priori derivation of morality) required.

For instance, I'm a pure utilitarian, so what I'd say is that while I selfishly wish to continue existing, I realize that if I suddenly disappeared in a puff of smoke (suppose I'm a hermit with no affected friends or relatives) and was replaced by an equally happy individual, that would be just as good a possible world as the one in which I continued to exist.

 

This Can't Go On

Could you provide some evidence that this rate of growth is unusual in history?  It wouldn't shock me if we looked back at the last 5,000 years and saw that most societies' real production grew at similar rates during times of peace and tranquility, but that this only ever amounted to small absolute gains that were regularly wiped out by invasion, plague, or other calamity.  In that case the question becomes whether you believe our technological accomplishments make us more resistant to such calamities (another discussion entirely).

Moreover, even if we didn't see similar levels of growth in the past, there are plenty of simple models which explain this apparent difference as the result of a single underlying phenomenon.  For instance, consider the theory that real production over and above the subsistence agricultural level grows at a constant rate per year.  As this surplus was almost 0 for most of the past 5,000 years, that growth wouldn't have been very noticeable until recently.  And this isn't just some arbitrary mathematical fit; it has a good justification: productivity improvements require free time, invention, etc., so they only happen in the fraction of people's time not devoted to avoiding starvation.
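
(A toy calculation of what I mean, with made-up numbers chosen only to show the shape of the curve, not to match the historical record:)

import math

# Toy model: total output = subsistence + surplus, and only the surplus
# grows, at a constant yearly rate. All values are illustrative.
SUBSISTENCE = 1.0   # output needed to stay alive, in arbitrary units
SURPLUS_0 = 1e-6    # surplus 5,000 years ago, assumed to be nearly zero
R = 0.0035          # constant yearly growth rate of the surplus (~0.35%)

def total_output(t):
    """Total output t years after the start of the 5,000-year window."""
    return SUBSISTENCE + SURPLUS_0 * math.exp(R * t)

for years_ago in (5000, 3000, 1000, 300, 0):
    t = 5000 - years_ago
    y = total_output(t)
    # Apparent growth rate of *total* output: d(ln y)/dt = R * surplus / total.
    apparent = R * (y - SUBSISTENCE) / y
    print(f"{years_ago:>5} years ago: output {y:8.3f}, apparent growth {apparent:.4%}/yr")

The underlying rate never changes, but the measured growth of total output looks like roughly zero for most of the window and only approaches the full rate near the end.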


Also, it's kinda weird to describe the constant-rate-of-growth assumption as business as usual but then pick a graph showing an economic singularity (a flat rate of growth gives an exponential curve, which doesn't escape to infinity at any finite time).  Having said all that, sure, it seems wrong to just assume things will continue this way forever, but it seems equally unjustified to reach any other conclusion.
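
(To spell out the distinction I'm pointing at; this is standard math, not anything from the post:)

\[ \frac{dy}{dt} = r\,y \;\Rightarrow\; y(t) = y_0 e^{r t}, \quad \text{finite at every finite } t, \]

\[ \frac{dy}{dt} = c\,y^{1+\varepsilon} \;\Rightarrow\; y(t) = \frac{y_0}{\left(1 - \varepsilon c\, y_0^{\varepsilon}\, t\right)^{1/\varepsilon}}, \quad \text{which blows up at } t^{*} = \frac{1}{\varepsilon c\, y_0^{\varepsilon}}. \]

So a singularity at a finite date requires super-exponential (e.g. hyperbolic) growth, not a constant rate.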

The Importance of Truth-Oriented Discussions in EA

I thought the archetypal example was one where everyone has a mild preference to be with other members of their race (even if just because of somewhat more shared culture) and doesn't personally care much if they end up in a non-mixed group.  But I take your point to be that, at least in the gender case, we do have a preference not to be entirely divided by gender.

So yes, I agree that if the effect leads to too much sorting then it could be bad, but it seems like a tough empirical question whether we are at a point where the utility gains from more sorting outweigh the losses.

Utilitarianism Symbol Design Competition

Could you say a bit more about what you want this flag to symbolize/communicate?  Flags for nations need to symbolize what holds the members of that country together and unifies them, but when it comes to an idea, the flag is more a matter of what you want to communicate to others about the idea's virtues.  I'm having trouble imagining that a utilitarian flag could do $1000 worth of good unless it does some important PR work for utilitarianism.

If it were me, I'd try to pick a flag that communicates the idea that utilitarianism is (or at least is a natural consequence of, or close to) universal love/empathy/concern.  My sense is that opposition to utilitarianism is frequently rooted in the idea that it's cold, uncaring calculation.  But since you are the one putting up the money, maybe you can lay out a bit more what you want to communicate and what use you see this flag being put to.

Integrity for consequentialists

Re your first point: yup, they won't try to recruit others to that belief, but so what? That's already a bullet any utilitarian has to bite, thanks to examples like the aliens who will torture the world if anyone believes utilitarianism is true or tries to act as if it is. There is absolutely nothing self-defeating here.

Indeed, if we define utilitarianism simply as the belief that one's preference relation on possible worlds is dictated by the total utility in them, then it follows by definition that the best acts an agent can take are just the ones which maximize utility. So maybe the better way to phrase this is: why care what an agent who pledges to utilitarianism in some way and wants to recruit others might need to do or how they'd act? That's a distraction from the simple question of what in fact maximizes utility. If that means convincing everyone not to be utilitarians, then so be it.

--

And yes, re the rest of your points, I guess I just don't see why it matters what would be good to do if other agents responded in some way you argue would be reasonable. Indeed, what makes consequentialism consequentialism is that you aren't acting based on what would happen if you were interacting with idealized agents, as a Kantian-style theory might have you consider, but on what actually happens when you actually act.

I agree the caps were aggressive, and I apologize for that. I also agree that I'm not offering evidence that how people respond to supposed signals of integrity in fact tends to track what they see as evidence that you follow the standard norms; that's something people need to consult their own experience about, asking themselves whether, in their experience, it tends to be true. Ultimately, I think it's just not true that a priori analysis of what should make people see you as trustworthy (or have any other social reaction) is a good guide to what they will actually do.

But I guess that just returns to point 1 and our different conceptions of what utilitarianism requires.

A mental health resource for EA community

Yes, and reading this again now, I think I was way too harsh. I should have been more positive about what was obviously an earnest concern and desire to help, even if I don't think it's going to work out. A better response would have been to suggest other ideas to help, but I don't have many beyond reforming how medical practice works so that mental suffering isn't treated as less important than physical debilitation (doctors will agree to risky procedures to avoid physical loss of function but won't for mental illness, likely because the family doesn't see the suffering from the inside but does see the loss in a death, and so is liable to sue or complain if things go badly).

The Importance of Truth-Oriented Discussions in EA

I apparently wasn't clear enough: I absolutely agree with and support things like icebreakers. But we shouldn't expect them to increase female representation, or judge their effectiveness by how much they do. Absolutely do it, and do it for everyone who will benefit, but just don't be surprised if, even when we do it everywhere, it doesn't do much to change the gender balance in EA.

I think if we just do it because it makes people more comfortable, without the gender overlay, not only will it be more effective and more widely adopted, but we'll also avoid the very real risk of creep ("we are doing this to draw in more women, but we haven't seen a change, so we need to adopt more extreme approaches"). Let's leave gender out of it when we can, and in this case we absolutely can, because being welcoming helps lots of people regardless of gender.

The Importance of Truth-Oriented Discussions in EA

No, I didn't mean to suggest that. But I did mean to suggest that it's not at all obvious that this kind of Schelling-style amplification of preferences is something it would be good to counteract. The archetypal example of Schelling-style clustering is a net utility win, even if a small one.

Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal

I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia benefit massively from warming, and once they see that warming and take advantage of the newly opened land, they will treat any attempt to artificially lower temperatures as an attack to be met with force. And they have enough fossil fuels to maintain the warmer temperatures even if everyone else stops carbon emissions (which they can easily scuttle anyway).

IMO this concern is more persuasive than the risk of trying geoengineering.

But I disagree that geoengineering isn't going to happen soon. All the same reasons we aren't doing anything about global warming now are reasons we'll flip on a dime once we start seeing real harms.
