What makes you say rejecting person-affecting views has uncomfortable implications for progressive and environmental ethics, out of curiosity? I would have thought the opposite: person-affecting views struggle not to treat environmental collapse as morally neutral if it leads to a different set of people existing than would have otherwise.
I've strong upvoted Ben's points, and would add a couple of concerns:

* I don't know how in any particular situation one would usefully separate the object level from the general principle. What heuristic would I follow to judge how far to defer to experts on banana growers in Honduras on the subject of banana-related politics?
* The less pure a science gets (using https://xkcd.com/435/ as a guide), the less we should be inclined to trust its authorities, but also the less we should be inclined to trust our own judgement - the number of relevant factors grows at a huge rate
So sticking to the object level and the example of the minimum wage, I would not update that much on a single study, but I strongly agree with Ben that 98% is far too confident, since when you say 'the only theoretical reason', you presumably mean 'as determined by other social science theory'.
(In this particular case, it seems like you're conflating the (simple and intuitive to me as well, fwiw) individual effect - that having to pay a higher wage reduces the desirability of hiring someone - with the much more complex and much less intuitive claim that higher wages in general would reduce the number of jobs in general - which is the sort of distinction an expert in the field seems more likely to be able to draw.)
So my instinct is that Bayesians should only strongly disagree with experts in particular cases where they can link their disagreement to particular claims the experts have made that seem demonstrably wrong by Bayesian lights.
There are some fundamental problems facing moral uncertainty that I haven't seen its proponents even refer to, let alone refute:
One issue I feel the EA community has badly neglected is the probability, given various (including modest) civilisational backslide scenarios, of our still being able to develop (and *actually* developing) the economies of scale needed to become an interstellar species.
To give a single example, a runaway Kessler effect could make putting anything in orbit basically impossible unless governments overcome the global tragedy of the commons and mount an extremely expensive mission to remove enough debris to regain effective orbital access - in a world where we've lost satellite technology and everything that depends on it.
EAs so far seem to have treated 'humanity doesn't go extinct' in scenarios like this as equivalent to 'humanity reaches its interstellar potential', which seems very dangerous to me - intuitively, it feels like there's at least a 1% chance that we wouldn't ever solve such a problem in practice, even if civilisation lasted for millennia afterwards. If so, then we should be treating it as (at least) 1/100th of an existential catastrophe - and a couple of orders of magnitude doesn't seem like that big a deal, especially if there are many more such scenarios than there are extinction-causing ones.

Do you have any thoughts on how to model this question in a generalisable way, so that it could give a heuristic for non-literal-extinction GCRs? Or do you think one would need to research specific GCRs to answer it for each of them?
What do you make of Ben Garfinkel's work on scepticism about AI's capacity being separable from its goals, and his broader scepticism of brain-in-a-box scenarios?
Can you spell both of these points out for me? Maybe I'm looking in the wrong place, but I don't see anything in that tag description that recommends criteria for cause candidates.
As for Scott's post, I don't see anything more than a superficial analogy. His argument is something like 'the weight by which we improve our estimation of someone for their having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea'. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?
Write a post on which aspect? You mean basically fleshing out the whole comment?
One other cause-enabler I'd love to see more research on is donating to (presumably early-stage) for-profits. For all that they have better incentives, it's still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high value.
It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won't invest in: perhaps because of a surprising lack of traction, perhaps because of predatory pricing by companies with worse products/ethics, or perhaps because of some other unmerited headwind.
Then I would suggest being clearer about what it's comprehensive of, i.e. by having clear criteria for inclusion.
I would like to see more about 'minor' GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1. We can see this attitude in this very post,
I don't think this is remotely justified. The arguments I've seen are generally of the form 'we'll still be able to salvage enough resources to theoretically recreate any given technology', which doesn't mean we can get anywhere near the economies of scale needed to recreate global industry on today's scale, let alone that we actually will, given realistic political development. And that industry would need to reach the point where we're a reliably spacefaring civilisation, well beyond today's technology, in order to avoid meeting the usual definition of an existential catastrophe (a drastic curtailment of life's potential).
If the chance of recovery from any given backslide is 99%, then that's only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn't that big a deal (Toby Ord's The Precipice puts an order of magnitude or two between the probability of many of the existential risks we're typically concerned with).
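The arithmetic here can be made explicit with a toy expected-value calculation (a minimal sketch; all the probabilities below are illustrative placeholders, not estimates from the comment or from The Precipice):

```python
# Toy comparison of expected badness between outright extinction and a
# 'recoverable' backslide, treating permanent loss of humanity's long-term
# potential as badness 1.0. All numbers are illustrative placeholders.

def expected_badness(p_event: float, p_unrecoverable: float) -> float:
    """Chance the event occurs times the chance we never recover from it."""
    return p_event * p_unrecoverable

# Outright extinction: recovery is impossible by definition.
extinction = expected_badness(p_event=0.001, p_unrecoverable=1.0)

# A backslide (e.g. a runaway Kessler effect) with a 99% chance of
# eventual recovery still carries 1% of the badness per occurrence.
backslide = expected_badness(p_event=0.001, p_unrecoverable=0.01)

# An equally likely backslide sits only two orders of magnitude below
# extinction in expected badness.
print(extinction / backslide)

# And if such backslides are ~100x more likely than extinction-level
# events, their combined expected badness matches extinction risk.
many_backslides = expected_badness(p_event=0.1, p_unrecoverable=0.01)
print(many_backslides / extinction)
```

The point of the sketch is just that a 99% recovery rate closes only two orders of magnitude of the gap, which the relative frequency of 'minor' GCRs could plausibly reopen.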
Things I would like to see more discussion of in this area: