Ramiro

Brazilian legal philosopher and financial supervisor

Comments

Low-Hanging (Monetary) Fruit for Wealthy EAs

Thanks for the post. I agree EAs should have more slowly diminishing marginal utility for money, since they can never be fully satisfied by it - you can always help someone else.
On the other hand, I'm not sure you invoke the best examples. First, LTCM collapsed in 1998 (despite being managed by genius economists) and destabilized financial markets; this shows that trying to earn a lot of money entails risks and externalities.
Second, I'm not sure what your source is for this premise:


Ordinary wealthy people don't care as much about getting more money because they already have a lot of it

A possible source is Kahneman & Deaton, but if that's the case, this paper: a) has been criticized by more recent studies, and b) is not focused on the very wealthy, who are a very special class of individuals. Actually, I'd say that people who become really wealthy (by themselves) already tend to have lower diminishing marginal utility for money - otherwise they wouldn't work so hard to get it.
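To make the curvature point concrete, a minimal sketch (my own illustration, not from the post): take the standard logarithmic model of the utility of wealth $w$,

$$
u(w) = \log w \quad\Longrightarrow\quad u'(w) = \frac{1}{w},
$$

so the marginal value of a dollar falls in proportion to how much you already have. An EA who can always redirect money to effective beneficiaries faces a much flatter curve - in the limit, $u(w) \approx c\,w$ with $u'(w) \approx c$ roughly constant - so extra money never stops mattering.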

On famines, food technologies and global shocks

That's true. It also occurred to me after I posted it here. The Irish population declined steadily after the 1840s (from about 6.5 million), well into the 1960s (down to about 2.8 million).

Major UN report discusses existential risk and future generations (summary)

Thanks for the post.
I still think longtermist cause areas are often a bit more neglected than "presentist" causes, but I guess this points to a need to revise ITN assessments accordingly, doesn't it?
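For reference, a sketch of the ITN decomposition I have in mind (the usual 80,000 Hours-style factorization, stated from memory):

$$
\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{cost-effectiveness}}
=
\underbrace{\frac{\text{good done}}{\%\,\text{of problem solved}}}_{\text{Importance}}
\times
\underbrace{\frac{\%\,\text{of problem solved}}{\%\,\text{increase in resources}}}_{\text{Tractability}}
\times
\underbrace{\frac{\%\,\text{increase in resources}}{\text{extra resources}}}_{\text{Neglectedness}}
$$

The point being: once a cause attracts mainstream attention (say, a UN report), the neglectedness factor shrinks, and the overall assessment has to be updated.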

Noticing the skulls, longtermism edition

I'd like to see how this "skull critique" develops now that the UN has adopted a kind of longtermist stance.

What are some moral catastrophes events in history?

Thanks for sharing this question with us. It's a very interesting idea, and it's good that someone is pursuing it.

A few suggestions of my own:

  1. The Better Angels of Our Nature, by Steven Pinker - particularly Ch. 4, on the "Humanitarian Revolution". This is the Pinker book I enjoyed most; I thought it would be too long when I bought it, but by the end I was complaining it was too short.
  2. Turchin's Seshat database – the "Global History Databank". Btw, Turchin's mathematical approach to history may interest you, if you're not acquainted with it yet. Besides, I notice there's a correlation between some atrocities in White's book and societal collapses, so perhaps you'd profit from checking Luke Kemp's research. Also, if that's what you're looking for, studying societal collapses may provide insights for S-risk scholars on what makes unrecoverable dystopias unlikely – in the long run, they're hard to perpetuate, depend on unstable acceptance, and face stark competition.
  3. I second djbinder's tip on White's book on atrocities. First, because it's a good read; second, because it helps draw some distinctions (as Lizka did infra) between, e.g., (i) long-standing moral practices (like the slave trade - which I think is the point of the post you cite), (ii) "one-shot black swan" massacres, which are (usually) quickly perceived as exceptional moral catastrophes (though White shows they happen more often than you'd realize), and (iii) the ominous death toll from the side effects of conflicts (such as disease and hunger - the Horsemen often ride together), which are usually preventable and neglected. For instance, almost everyone has heard about the Rwandan genocide (there's a Hollywood movie about it), a case of (ii), but few people have heard about the millions of deaths in the Congo wars that followed it - a case of (iii).

How would you run the Petrov Day game?

Thanks. So your point is that the "hard part" is selecting who's going to receive the codes: it's not an exercise in building trust, but in selecting who is reliable.

How would you run the Petrov Day game?

For me, the most important lesson of Petrov's (and Arkhipov's) legacy is that, in real-life MAD, there should be no button at all.

Seeing Neel & Habryka's apparent disagreement (the latter seems to think this is pretty hard, while the former thinks that the absence of incentives to press the button makes it too easy), I realize it'd be interesting to have a long discussion, before the next Petrov Day, on what the goal of the ritual is and what we want to achieve with it.

My point: it's cool to practice "not pressing buttons" and to build trust around this, and I agree with Neel that we could make it more challenging... but the real catch here is that, though we can bet the stability of some web pages on some sort of Assurance Game, it's a tremendous tragedy that human welfare has to depend on the willingness of people like Petrov not to press buttons. I think this game should be a reminder of that.

How would you run the Petrov Day game?
  1. We could have a vote on who should receive the codes.
  2. There could be some sort of noise - e.g., the LW and EA Forum websites could have random moments of instability, so you couldn't be sure that no one had actually pressed the button.

I came to appreciate the idea of a "ritual" where we just practice the "art of not pressing buttons". And this year's edition got my attention because it can be conceived of as an Assurance Game. Even so, right now, there's no reason for anyone to strike - except to show that, even in low-stakes scenarios, this art is harder than we usually think. So there's no trust or virtue actually being tested or expressed here - which makes the ritual less relevant than it could be.
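For readers who haven't met the term, here's a minimal sketch of an Assurance Game (Stag Hunt) payoff matrix; the numbers are mine and purely illustrative, not the actual stakes of the Petrov Day setup:

$$
\begin{array}{c|cc}
 & \text{don't press} & \text{press} \\
\hline
\text{don't press} & (4,4) & (0,3) \\
\text{press} & (3,0) & (3,3)
\end{array}
$$

Both mutual restraint and mutual pressing are Nash equilibria, but restraint is strictly better for everyone; pressing only pays as insurance against the other side defecting. If there's no incentive to press at all, restraint stops testing anyone's trust - which is exactly the worry above.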

Honoring Petrov Day on the EA Forum: 2021

oh crap! I accidentally pressed the button :O I'm super sorry

[Link post] Sam Scheffler: Conservatism, Temporal Bias, and Future Generations

3) "Practical issues" with utilitarianism vs. "ontological" concerns with value


I can make sense of the notion of something like "a community of rational agents" or "sentient beings", and I can see why I value principles derived from this notion; but I'm not sure what a POVU - a "point of view of the universe" - can mean. This is not an issue with abstraction per se. (I'm sorry, this is going to be even more confusing than the previous comments, but I believe this discussion is entangled in too many things, not just my thoughts.)

First, there are some issues concerning decision theory: I don't know what sort of agents, preferences, and judgments figure in the POVU; also, if the universe is infinite, the POVU may result in nihilistic infinite ethics. There are many proposals to avoid these obstacles, though.
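To illustrate the infinite-ethics worry with a standard textbook-style example (my own numbers): compare two infinite worlds,

$$
W_1 = (1, 1, 1, \ldots), \qquad W_2 = (2, 2, 2, \ldots), \qquad \sum_{i=1}^{\infty} u_i(W_1) = \sum_{i=1}^{\infty} u_i(W_2) = \infty .
$$

A totalist POVU cannot rank $W_2$ above $W_1$ - both totals diverge - even though $W_2$ is better for every single being; hence the threat of nihilism (or at least paralysis) about ranking outcomes.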

I think the overall issue is that, even if you can make sense of the POVU, it's underspecified – and then you have to choose a more "normal" POV to make sense of it (the "abstract communities" I quoted above).

To see how this is different from "practical concerns", take the example of Singer and his mother: I can totally understand that he spends more resources on his mother than on starving kids. On the other hand, I could also understand if he acted as a hardcore utilitarian: I'd find it a bit alien, but still rational and certainly not plain wrong; the same goes if you told me that someone else, in a different society far away from here, 500 years into the past or the future, had let their elders die to save strangers.

Now let's do some sci-fi: I'd react very differently if you told me that a society had built a Super AI, the God Emoji, to turn their cosmic endowment into something like the "minimal hedonic unit" - see this SMBC strip. Or, to draw from another SMBC strip, if a society had decided to vanish from the Earth into a hedonic simulation. I think this would be a tragedy and a waste. (And that Aaron should declare SMBC comics hors concours for the EA Forum creative prize.) However, I'm not sure the world of My Little Pony: Friendship is Optimal, or the hedonist aliens in Three Worlds Collide, would be equally a waste - even though I don't want any of that for our descendants.

But I don't think even these examples picture something like "the POV of the universe"; I think they try to capture a conception of what the POV of sentient life, or the POV of all rational beings, could be… But these notions are more “parochial” than philosophers usually admit - they still focus on a community of beings doing the evaluation. If that’s the case, though, you could think about some hard constraints on your population axiology – concerning the “minimal status” of the members of the community I (or any other agent in our decision problem) want to belong to. In some sense, the sci-fi examples above are "wrong" to me: I can be in no "community" with the "pleasure structures" of the God Emoji; and I don't think the "community" I'd form with the hedonist aliens would be optimal.

Maybe I'm being biased… but it's hard for me to avoid something like that when I think about what policies and values I'd want for the long-term future (I guess that's why we would need some sort of Long Reflection). I want our descendants to be very different from me, even in ways I'd find strange, just as Aristotle would likely find my values strange… and yet I think of myself (and them) as sharing a path with him, and I believe he could see it this way, too. So I believe Scheffler has a point here: it's still me doing a good deal of the valuing. I think it's far less conservative than he believes, though.
