emmajcurran

Postdoctoral Associate @ Rutgers University - New Brunswick
31 karma · Joined · Working (0-5 years) · emmajcurran.co.uk

Bio

Philosopher working in non-consequentialist ethics, metaphysics, and sometimes longtermism. 

Comments (5)

Super interesting, Elliott (though, of course, you must be wrong!) 

Your guess is precisely right. Ex-post evaluations really developed as an alternative to ex-ante approaches to decision-making under risk. Waiting until the outcome is realised does not help us make decisions; thinking about how we could justify ourselves under each of the outcomes we know could be realised does help us.

The name can definitely be misleading; I can see how it pulls people into debates about retrospective claims and objective/subjective permissibility.

 

Sorry, I edited this as I had another thought.

"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death." I think, perhaps, this line is infelicitous. 

The point is that all 100 million people have an ex-post complaint, as there is a possible outcome in which all 100 million people die (if we don't intervene). However, these complaints need to be discounted by the improbability of their occurrence. 

To see why we discount, imagine we could save someone from a horrid migraine, but doing so creates a 1-in-100-billion chance that some random bystander dies. If we don't discount, then ex-post we are comparing a migraine to a death - and we'd be counterintuitively advised not to alleviate the migraine.

Once you discount, you end up with 100 million complaints of death, each discounted by 99.99995% (that is, each weighted by its 0.00005% probability of occurring).
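Just to make the arithmetic explicit (purely illustrative, using the figures from your example):

$$
\underbrace{100{,}000{,}000}_{\text{people}} \times \underbrace{0.00005\%}_{=\,5 \times 10^{-7}} = 50
$$

So each individual complaint of death gets weight $5 \times 10^{-7}$ (equivalently, it is discounted by $99.99995\%$), and multiplying out recovers the "50 people" figure in the line you quote.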

I hope this clears up the confusion, and maybe helps with your concerns about instability? 

Great to see this out there - a very useful piece of work! 

I actually have another manuscript on an ex-ante/ex-post fairness argument against longtermist interventions. Could I send it to you sometime? Would love to hear your thoughts.