Ramiro

Brazilian legal philosopher and financial supervisor

Comments

List of EA-related organisations

Thanks a million for that!

It would be so cool if someone put this on a map...

Can my self-worth compare to my instrumental value?

First, of course, thanks, C Tilli, for the post, and thanks willbradshaw for these comments.
This passage struck me:

As you say, I'm not sure EA will ever be as comforting as religion – it's optimising for very different things. But over time I hope we will generate community structures and wisdom literature to help manage this tension, care for each other, and create the emotional (as well as intellectual) conditions we need to survive and flourish.

I think my background is the opposite of C Tilli's: I have been an atheist for many years (and still am - well, maybe more of an agnostic, since we might be in a simulation...). But since I found out about EA, I have become a little more understanding not only of the need for comfort, but also of the idea, sought by religious people, of valuing something that goes far beyond one's own personal value and social circle. (On the other hand, I also became a little suspicious of some cult-like traits we might be tempted to mimic.)

I am sort of surprised we have written so much, so far, without talking about death and mortality. I know I have intrinsic value, but it's fragile and perishable (cryonics aside); and yet the set of things I can value extends way beyond my perishable self. Actually, my own self-worth depends a little on that (as Scheffler argues, it would be hard not to be nihilistic if we knew humanity was going to end right after us), and there is no necessary upper bound on what I can value. I reckon that, as much as I fear humanity falling over the precipice, I feel joy in thinking it may continue for eons, and that I may play a role, contribute, and add my own personal experience to this narrative.

I guess that's the 'trick' played by religion that might be missing here: religion 'grants' me some sort of intrinsic value through a metaphysical cosmic privilege (or the love of God) - and this provides some comfort. Without it, all that is left, though enjoyable and worthy, is perishable: transient love, fading joy, endured pain, limited virtue, pleasure... Like Dworkin (who considered this a religious conviction, though a non-theistic one), we can say that a life well lived is an achievement in itself, and stands for itself even after we die, like a work of art - but art itself will be meaningless when humanity is gone. Maybe altruism is just another way to trick (the fear of) death: when one realizes that "All those moments will be lost in time, like tears in rain. Time to die", one might see it not as realizing some external value, but as an important part of one's own self-worth. (If Blade Runner is too melodramatic, the bureaucrat in Ikiru works as an example of the same reasoning.)

Can my self-worth compare to my instrumental value?

For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.

I think this is still an instrumental reason for someone to place "substantial intrinsic value on themselves." Though I have no problem with that, I thought what C Tilli complained about was precisely that, for EAs, all self-concern is for the sake of the greater good, even when it is rephrased as a psychological need for a small amount of self-indulgence.
Second, I'd say that people who are "more successful and have a larger social impact in the long term" tend to be "people who place substantial intrinsic value on themselves," but that may be just selection dynamics: if you have a large impact, then you (likely) place substantial intrinsic value on yourself. Even if this implies that you're more likely to succeed if you place substantial intrinsic value on yourself (say, because only people who do can succeed), it says nothing about failure: confident people fail all the time, and the worst kind of failure seems to be reserved for those who place substantial value on themselves and end up succeeding with the wrong values.

But I wonder if our sample of "successful people" is too biased towards those who get the spotlight. Petrov didn't seem to place a lot of value on himself, and Arkhipov is often described as exceptionally humble; no one strives to be an unsung hero.

Evidence on correlation between making less than parents and welfare/happiness?

Though I agree that the marginal utility of income drops a lot after some threshold, and I am not sure how long people take to adjust their lifestyles to a drop in income, I would like to see a study that takes into account the effects of wealth, savings and uncertainty. So yes, maybe you'll be equally happy whether you earn 75k or 100k, but with the latter you'll be better hedged against risks and able to get additional utility by investing in someone else's welfare (your relatives, or donations).
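The diminishing-returns point can be made concrete with a toy sketch. Assuming a logarithmic utility function (a common modelling assumption, not something the studies above necessarily use), the satisfaction gain from 75k to 100k is much smaller than from 25k to 50k, even though the extra 25k is the same in absolute terms - which is what leaves room for hedging and donations:

```python
import math

def log_utility(income):
    """Toy utility of income under a logarithmic model (modelling assumption)."""
    return math.log(income)

# Same absolute raise (+25k), very different utility gains:
gain_25_to_50 = log_utility(50_000) - log_utility(25_000)
gain_75_to_100 = log_utility(100_000) - log_utility(75_000)

print(round(gain_25_to_50, 3))   # 0.693 (doubling income: log 2)
print(round(gain_75_to_100, 3))  # 0.288 (only a 4/3 increase: log 4/3)
```

Under this model the second 25k buys less than half the subjective gain of the first, so spending it on savings or on others plausibly dominates.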

Timeline Utilitarianism

Thanks for the post. Coincidentally, I was thinking about how I have a strong moral preference for a longer timeline when I saw it.
I feel attracted to total utilitarianism, but suppose we have N individuals, each living 80 years, with the same constant utility U. These individuals can live either more concentrated in time (say, within 100 years) or more scattered (say, across 10,000 years); I strongly prefer the latter (I'd pay some utility for it), even though this runs against any notion of (pure) temporal discounting. My intuition (though I don't trust it) is that, from the "point of view of nowhere", at some point length may trump population; but maybe it's just some ad hoc influence of a strong bias against extinction.
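The thought experiment can be put in numbers (a toy sketch with my own parameter choices, spacing births evenly over each timeline; the comment itself does not specify a discount model): under pure total utilitarianism the two timelines are exactly tied, and any positive discount rate breaks the tie in favour of the concentrated one, so a preference for the scattered timeline has to come from somewhere else.

```python
def total_utility(n_individuals, utility_each, timeline_years, discount_rate=0.0):
    """Sum (optionally discounted) lifetime utilities of n individuals
    whose births are spread evenly over timeline_years."""
    total = 0.0
    for i in range(n_individuals):
        birth_year = i * timeline_years / n_individuals
        total += utility_each * (1 - discount_rate) ** birth_year
    return total

N, U = 1000, 1.0
concentrated = total_utility(N, U, 100)      # all lives within ~100 years
scattered = total_utility(N, U, 10_000)      # same lives spread over 10,000 years

print(concentrated == scattered)  # True: undiscounted totals are identical
# Any pure temporal discount (here 1%/year) favours the concentrated timeline:
print(total_utility(N, U, 100, 0.01) > total_utility(N, U, 10_000, 0.01))  # True
```

So a strict preference for the longer timeline requires a term the total sum does not capture, e.g. assigning value to the timeline's duration itself.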
Please, let me know about any source discussing this (I admit I didn't search enough for it).

Lumpyproletariat's Shortform

There's some theoretical work on Dominant Assurance Contracts. The person I know in EA who has thought most about this, and who is quite accessible, is Dony Christie.

Ramiro's Shortform

Thanks for this clarifying comment. I see your point - and I particularly agree with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might not be well suited for interpersonal comparisons, and vice versa - at least not both at the same time.
Really, I'm more puzzled than anything else - and also surprised that I haven't seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn't change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (pop.: 36 million, HDI: .922, IHDI: .841) and India (pop.: 1.3 billion, HDI: .647, IHDI: .538).

Finally, really, please, don't take this as a criticism (I'm a major fan of CE), but: 

We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)

First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it would differ much from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn't know how to compare the hypothetical welfare of humanity, the world's apex predator before civilization, with that of current chimps or current people. Even if I knew, I would take life expectancy as an important factor (a general proxy for how much someone is affected by health issues).

5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities

Thanks for this. I'm really glad about this milestone, and super proud to be part of it - tbh, it changed my life.
I'd like to see something about trends by year. I remember reading that some people were concerned that the number of new members was decreasing. Together with other information (e.g., from the EA Survey), this could give us an idea of how EA as a whole tends to evolve.

Ramiro's Shortform

True, thanks.
I inserted a link to CE's webpage on the Weighted Factor Model.

Factors other than ITN?

I've seen people suggest Urgency as an additional dimension. I wonder if anyone has tried to integrate it into an ITN evaluation.
