All of WolfBullmann's Comments + Replies

You could spotlight people who do good EA work but are virtually invisible to other EAs and do nothing of their own volition to change that, i.e. people who are neither flamboyant personalities nor social butterflies.

Some things might need a lot less agreed-upon celebration in EA, such as DEI jobs and applicants and DEI-style community management.

Off the top of my head: the ability to host conferences without angry protesters out front, the chance to be mentioned favorably by a major mainstream news outlet, and the willingness of high-profile people to associate with EA. Look up what EA intellectuals thought in the recent past about why it would be unwise for EA to make too much noise outside the Overton window. This is still valid, except that the Overton window has now begun to shift at an increasing pace.

Note that this is not meant to be an endorsement of EA aligning with or paying lip service to political trends. I personally believe an increase in enforced epistemic biases to be an existential threat to the core values of EA.

I think when people invoke the term dignity they sort of circumvent describing the issue in actual detail. Most "indignities" can be described in concrete terms, which can then be addressed, such as the inconvenience of not having toilets available or the aversiveness of having to deal with an unfriendly or incompetent government official. Some interventions require disregarding a number of preferences of those they are ultimately meant to help. Making "dignity" a requirement would make that difficult or impossible.

tomwein
Thanks Wolf. The reason I prefer to speak about dignity as a general phenomenon rather than a series of concrete indignities is that there are so many possible different indignities, which are very context dependent - but there is a sufficient similarity between how those different indignities are experienced to make them worth capturing under one category. That way we can offer measures that are appropriate to many situations without having to come up with different specific survey questions for every possible indignity. To your second point, I would argue for including measures of respect not because we should always and everywhere maximize respect at the expense of other goals, but rather because by measuring it we can make informed judgments about those tradeoffs. My prior is that we would find ways of being more respectful that did not sabotage other goals, but we won't know until we measure.

How are the first and second objections you mention distinct?

"Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal?"

I fail to get the meaning. Could anybody reword this for me?

"The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be... (read more)

trammell
About the two objections: What I'm saying is that, as far as I can tell, the first common longtermist objection to working on x-risk reduction is that it's actually bad, because future human civilization is of negative expected value. The second is that, even if it is good to reduce x-risk, the resources spent doing that could better be used to effect a trajectory change. Perhaps the resources needed to reduce x-risk by (say) 0.001% could instead improve the future by (say) 0.002% conditional on survival.

About the decision theory thing: You might think (a) that the act of saving the world will in expectation cause more harm than good, in some context, but also (b) that, upon observing yourself engaged in the x-risk-reduction act, you would learn something about the world which correlates positively with your subjective expectation of the value of the future conditional on survival. In such cases, EDT would recommend the act, but CDT would not. If you're familiar with this decision theory stuff, this is just a generic application of it; there's nothing too profound going on here.

About the main thing: It sounds like you're pointing out that stocking bunkers full of canned beans, say, would "save the world" only after most of it has already been bombed to pieces, and in that event the subsequent future couldn't be expected to go so well anyway. This is definitely an example of the point I'm trying to make--it's an extreme case of "the expected value of the future not equaling the expected value of the future conditional on the fact that we marginally averted a given x-risk"--but I don't think it's the most general illustration. What I'm saying is that an attempt to save the world even by preventing it from being bombed to pieces doesn't do as much good as you might think, because your prevention effort only saves the world if it turns out that there would have been the nuclear disaster but for your efforts. If it turns out (even assuming that we will never find out) th...
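[A purely illustrative toy model of that conditional-expectation point, not drawn from trammell's comment: all of the world types, probabilities, and values below are invented for the sake of the numbers. The idea it sketches is that learning your x-risk work was pivotal is evidence about which kind of world you are in, which can drag down the expected value of the future you saved.]

```python
# Toy sketch: E[value of future | survival] can differ from
# E[value of future | survival, our x-risk effort was pivotal].
#
# Hypothetical setup: the future's value depends on an unknown "world type".
# In "fragile" worlds, disaster was likely and post-survival prospects are poor;
# in "robust" worlds, disaster was unlikely and prospects are good.

p_fragile = 0.5                                          # prior P(world is fragile)
p_pivotal_by_type = {"fragile": 0.30, "robust": 0.01}    # P(our effort averted disaster | type)
value_given_survival = {"fragile": 10.0, "robust": 100.0}  # value of the future, given survival

# Unconditional expectation of the future's value, given survival:
e_value = (p_fragile * value_given_survival["fragile"]
           + (1 - p_fragile) * value_given_survival["robust"])

# Condition on our effort having been pivotal (the disaster would otherwise
# have happened). By Bayes' rule, being pivotal is evidence of a fragile world.
p_pivotal = (p_fragile * p_pivotal_by_type["fragile"]
             + (1 - p_fragile) * p_pivotal_by_type["robust"])
p_fragile_given_pivotal = p_fragile * p_pivotal_by_type["fragile"] / p_pivotal
e_value_given_pivotal = (p_fragile_given_pivotal * value_given_survival["fragile"]
                         + (1 - p_fragile_given_pivotal) * value_given_survival["robust"])

print(f"E[value | survival]                  = {e_value:.1f}")          # 55.0
print(f"E[value | survival, we were pivotal] = {e_value_given_pivotal:.1f}")  # ~12.9
# The conditional figure is lower: the worlds in which our effort actually
# mattered are disproportionately the worlds with worse prospects anyway.
```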