
10 Habits I recommend (2020)

For Zoom, when you schedule a meeting and mark it as "recurring", the link should stay valid indefinitely.

How effective and efficient is Open Philanthropy's funding policy for projects on AI risk?

Also, parts of their logical induction paper were published/presented at TARK-2017, which is a reasonable fit for the paper and a respectable, though not top-tier, conference.

Rob Wiblin's top EconTalk episode recommendations

Not sure about iTunes/iOS; I'd probably need to submit the podcast to Apple for approval, which I don't have sufficient permissions to do :) Maybe there are non-Apple-restricted apps? Or switch to Android.

Under certain circumstances, having moral uncertainty over theories that are purely ordinal may lead to the recommendation to split. Example: Suppose there are three charities A, B, C, and four options: donating 100% to one of A, B, C, or splitting the money equally between them (which we will call S). Let's ignore other ways of splitting. Suppose you have equal credence of 1/3 in each of three theories:

1: A > S > B > C

2: B > S > C > A

3: C > S > A > B

Given each theory's ranking over the charities, it is rational in something like a von Neumann-Morgenstern sense to rank S second. But with these theories and these credences, one can see that S is the Condorcet winner and also the unique Borda winner, so S would be uniquely recommended by essentially all standard voting rules, including Borda, the system favoured by Will MacAskill. In this example, contrary to the example in the OP, option S is not Pareto-dominated by another option, so the unanimity principle does not bite.
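The Condorcet and Borda claims can be checked mechanically. A minimal sketch (the option names and the equal-credence, one-theory-one-vote setup are taken from the example above; treating each theory as a single voter is valid here because the credences are equal):

```python
# The three ordinal theories, each held with equal credence,
# as rankings from best to worst over A, B, C, and S (split equally).
rankings = [
    ["A", "S", "B", "C"],  # theory 1
    ["B", "S", "C", "A"],  # theory 2
    ["C", "S", "A", "B"],  # theory 3
]
options = ["A", "B", "C", "S"]

# Borda count: with 4 options, award 3 points for 1st place down to 0 for last.
borda = {o: 0 for o in options}
for r in rankings:
    for pos, o in enumerate(r):
        borda[o] += len(options) - 1 - pos
print(borda)  # A, B, C each score 4; S scores 6 -> S is the unique Borda winner

# Condorcet winner: an option that beats every other option in a
# pairwise majority comparison across the theories.
def beats(x, y):
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

condorcet = [o for o in options if all(beats(o, p) for p in options if p != o)]
print(condorcet)  # ['S'] -> S beats each of A, B, C two theories to one
```

Each of A, B, and C is ranked first by one theory but last or second-to-last by the others, so none of them can win a pairwise majority against S, which every theory ranks second.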

This example crucially depends on only having ordinal information available: with cardinal information (and expected value maximisation), splitting would never be uniquely recommended, as Tom notes. So I don't think the argument for splitting from moral uncertainty is particularly strong or robust.