Brazilian legal philosopher and financial supervisor
You're welcome. Please write a post (even a shortform) about it someday. Something that attracts me in this literature (particularly in Scheffler) is how it picks out intuitions that often collide with the premises or conclusions of reasoning based on something like the rational agent model (i.e., vNM decision theory). I think that, even for a philosophical theorist, it could be useful to know how prevalent these intuitions are, and what social or psychological explanations could be offered for them. (I admit that, just as one philosopher's modus ponens is another's modus tollens, someone's intuition might be someone else's cognitive bias.) For instance, Scheffler mentions that we (at least he and I) have a "primitive" preference for humanity's existence (I think by "humanity" he usually means rational agents similar to us – being driven extinct by Trisolarans would be bad, but not as bad as the end of all conscious rational agents); we usually prefer that humanity exist for a long time rather than a short period, even if both timelines contain the same amount of utility – which seems to imply some sort of negative discount rate over the future, violating the usual "pure time preference" reasoning. Besides, we prefer world histories where there's a causal connection between generations / individuals over possible worlds with the same amount of utility (and the same length in time) where communities spring up and go extinct without any relation between them – I admit this sounds weird, but I think it might explain my malaise towards discussions of infinite ethics.
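The negative-discount-rate point can be made concrete with a toy calculation (the numbers and the `discounted_utility` helper are my own illustration, not Scheffler's): two timelines with equal total utility are tied under no discounting, any positive pure time preference favors the shorter one, so a preference for the longer timeline forces a discount factor above 1 – a negative discount rate.

```python
def discounted_utility(stream, delta):
    """Total utility of a per-period utility stream, with discount factor delta."""
    return sum(u * delta**t for t, u in enumerate(stream))

short = [10, 10]         # humanity lasts 2 generations; total utility 20
long_ = [4, 4, 4, 4, 4]  # humanity lasts 5 generations; total utility also 20

# With no discounting (delta = 1) the two timelines are tied at 20.
# With positive pure time preference (delta < 1) the short timeline wins:
#   discounted_utility(short, 0.9) = 19.0  vs  discounted_utility(long_, 0.9) ~= 16.4
# Preferring the long timeline therefore requires delta > 1:
#   discounted_utility(long_, 1.1) > discounted_utility(short, 1.1)
```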
I was reading about Meghan Sullivan's "principle of non-arbitrariness," and it reminded me of Parfit's argument against subjectivist reasoning in On What Matters… but why are philosophers (well, and people in general) against arbitrariness? I mean, I do agree it's a tempting intuition, but I've never seen (a) a formal statement of what counts as arbitrary (is "arbitrary" arbitrary?), or (b) an a priori argument against it. Of course, if someone's preference ordering varies totally randomly, we can't represent them with a utility function, and perhaps we could accuse them of being inconsistent. But that's not what philosophers' examples usually chastise: if one has a predictable preference for eating shrimp only on Friday, or disregards pain only on Thursday, there's no instability here – you can represent it with a utility function (with time as a dimension).
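A minimal sketch of that last point (the dishes and numeric utilities are arbitrary illustrations of mine): once the weekday enters the utility function's domain, the Friday-only shrimp preference is perfectly stable and representable.

```python
# A day-dependent but perfectly predictable preference: utility is a
# function of (dish, weekday), so there is no inconsistency in the
# decision-theoretic sense -- just an extra dimension in the domain.
def utility(dish: str, weekday: str) -> float:
    if dish == "shrimp":
        return 10.0 if weekday == "Friday" else 0.0
    return 5.0  # baseline utility of any non-shrimp dish

# Choices are stable and predictable:
# shrimp is chosen over pasta on Friday, and avoided on any other day.
```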
There isn't even any a priori feature allowing us to say that such a preference is evolutionarily unstable, since that could only be assessed once we look at whom our agent will interact with. Which makes me think that arbitrariness is not a priori at all, of course – it depends on social practices such as "giving reasons" for actions and decisions (I don't think Parfit would deny that; I don't know about Sullivan). There might be a thriving community of people who only love shrimp on Friday, for no reason at all; but, if you don't share this abnormal preference, it might be hard to model their behavior and to cooperate with them – at least, in this example, when it comes to gastronomic enterprises. On the other hand, if you can just offer a story (even a barely believable one: "it's a psychosomatic allergy") to explain this preference, it's OK: you're just another peculiar human. I can understand you now; your explanation works as a salience that lets me better predict your behavior.
I suspect many philosophical (a priori-like) intuitions depend more on things like Schelling points (i.e., the problem of finding salient solutions that people can converge to in social practices) than most philosophers would admit. Of course, scholars of the later Wittgenstein are OK with that, since for them everything is about forms of life, language games, etc. But I think relativist / conventionalist philosophers unduly trivialize this feature, and so neglect an important point: what counts as arbitrary is not, well, arbitrary – and we can often show that what we call "arbitrary" is suboptimal, inconsistent with other preferences or intuitions, or hard to communicate (and so a poor candidate for a social norm / convention / intuition).
IMO, the best thing I've seen lately, for technical & non-tech people, would be The Alignment Problem, by Brian Christian (a.k.a. the "most human human")
IMF climate change challenge
"How might we integrate climate change into economic analysis to promote green policies?
To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change."
Congrats! I'll gladly listen to your interview.
I guess you already have a bunch of questions prepared... I have a peculiar curiosity / interest in hearing Sachs talk about how a warmer climate might impact economic development. I think he could summarize his own view, then conflicting opinions, and draw conclusions about future impacts of climate change.
I guess Samuel Scheffler's last book has a little bit of them all (I haven't read it yet). And Korsgaard makes a persuasive Kantian case about the disvalue of human extinction.
Thanks for the post. I'm convinced about the case for extrapolating from UN SDGs to GCRs, and I think stating it explicitly is relevant because attention is a scarce resource: companies and governments often use SDGs as a focal point when they want to signal virtue - public companies might even be required to explicitly state what SDGs they are aiming at in their sustainability reports.
I wonder what other areas have failed to make it into the SDGs – e.g., there's absolutely no concern for animal welfare, as the goals and targets are explicitly worded in conservationist terms. Most material I've read on this is limited to arguing that animal welfare and the SDGs are compatible – even this call for papers from MDPI (due on 30 June), which might interest someone doing research in the area.
Could we have catastrophic risk insurance?
Mati Roy once suggested, in this shortform, that we could have "nuclear war insurance" – a mutual guarantee to cover losses due to nukes – to deter nations from a first strike; I dismissed the idea because, in that case, it wouldn't be an effective deterrent (if you have enough power and reasons to nuke someone, insurance costs won't be among your relevant concerns).
However, I wonder if this could be extrapolated to other C-risks, such as climate change – something insurance and financial markets are already trying to price. Particularly for C-risks that are not equally distributed (e.g., climate change will probably be worse for poor tropical countries) and that are subject to great uncertainty...
I mean, of course I don't expect countries would willingly cover losses in case of something akin to societal collapse; but, given the level of uncertainty, this could still foster more cooperation, as it'd internalize and dilute future costs across all participant countries... On the other hand, of course, any form of insurance implies moral hazard, etc. But even this has a bright side, as it'd provide a legitimate case for having some kind of governance / supervision / enforcement on the subject... I guess what I'm really asking is: why don't we have a "climate Bretton Woods"?
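A toy simulation (the numbers are entirely my own: ten countries, each facing a 5% annual chance of a loss of 100 units, with a mutual pool splitting realized losses equally) illustrates the "dilution" point: pooling leaves each country's expected cost unchanged while sharply cutting the variance any single country bears.

```python
import random

random.seed(0)

N_COUNTRIES, P_LOSS, LOSS, YEARS = 10, 0.05, 100.0, 20_000

own_costs, pooled_costs = [], []  # costs borne by country 0 each year
for _ in range(YEARS):
    losses = [LOSS if random.random() < P_LOSS else 0.0
              for _ in range(N_COUNTRIES)]
    own_costs.append(losses[0])                     # no insurance
    pooled_costs.append(sum(losses) / N_COUNTRIES)  # mutual pool: equal shares

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Expected cost is ~5 per year either way (P_LOSS * LOSS), but pooling
# divides the variance by roughly the number of participants.
```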
(I guess you could apply the argument for FHI's Windfall Clause here – it's just that they're concerned with benefits and companies, while I'm worried about risks and countries.)
Even if that's not workable for climate change, would it work with other risks? E.g., epidemics?
(I think I should have done better research on this... I guess either I'm underestimating moral hazard and the problem of getting countries to cooperate, or there's a huge flaw in my reasoning here.)
Is there anything like a public repository / document listing articles and discussions on social discount rates (similar to what we have for IIDM)?
(I mean, I have downloaded a lot of papers on this – Stern, Nordhaus, Greaves, Weitzman, Posner, etc. – and there are many lit reviews, but I wonder if someone is already approaching it in a more organized way.)
I was wondering... We have (private) pension funds for children. Could / should we make them more widespread (maybe even mandatory)? Could we have government-sponsored funds? Parents (with the government's help) would save resources in a fund that could only be used by their offspring when they come of age; plus, unlike the current pension funds I know of, the beneficiaries could use it as collateral, pay tuition, open a business, or maybe even transfer it to another pension fund. For a longtermist, the pros are: it would increase overall savings (would it? or would people just divert resources from other funds?), transfer wealth to new generations (inequality of wealth between generations concerns me almost as much as possible inequalities of political power), and improve intergenerational cooperation... Of course, this can be said of sovereign funds too, but I see some advantage in having individual accounts (so sidestepping things like the tragedy of the commons). I'm not very confident, though.