In this post, I spell out how I think we ought to approach decision making under uncertainty and argue that the most plausible conclusion is that we ought to act as if theism is true. This seems relevant to the EA community, since if it is the case it might affect our cause prioritisation decisions.
Normative realism is the view that there are reasons for choosing to carry out at least some actions. If normative realism is false, then normative anti-realism is true: there are no reasons for or against taking any action. On this view all actions are equally choice-worthy, for each has no reasons for it and no reasons against it.
Suppose Tina is working out whether she has more reason to go on holiday or to donate the money to an effective charity.
Tina knows that if normative anti-realism is true then there is no fact of the matter about which she has more reason to do, for there are no reasons either way. It seems to make sense for Tina to ignore the part of her probability space taken up by worlds in which normative realism is false and instead focus only on the part taken up by worlds in which normative realism is true. After all, in worlds with normative anti-realism there isn't any reason to act either way, so it would be surprising if the possibility of being in one of these worlds were relevant to her decision.
It also seems appropriate for Tina to ignore the part of her probability space taken up by worlds in which she would not have epistemic access to any potential normative facts. Suppose that World 26 is a world in which normative realism holds but agents have no access to the reasons for action that exist. Considering World 26 is going to provide no guidance to Tina on whether to go on holiday or donate the money. As such, it seems right for Tina to discount such worlds from her decision procedure.
If the above is true then Tina should only consider worlds in which normative realism is true and in which there is a plausible mechanism by which she could know the normative truths.
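To make this concrete, here is a minimal sketch of the decision procedure this implies (my own illustration, not anything from the original argument): spread credence over possible worlds, discard those without normative realism or without epistemic access, and compare the options only over what remains. All probabilities and reason-scores below are invented placeholders.

```python
# Minimal sketch (my own, not from the post) of Tina's restriction to
# decision-relevant worlds. All probabilities and reason-scores are
# invented placeholders.

worlds = [
    # (probability, realism_holds, has_epistemic_access,
    #  strength of reasons favouring each option)
    (0.30, False, False, {"holiday": 0.0, "donate": 0.0}),  # anti-realist worlds
    (0.30, True,  False, {"holiday": 0.0, "donate": 0.0}),  # realism, no access (e.g. World 26)
    (0.40, True,  True,  {"holiday": 0.2, "donate": 0.8}),  # realism plus access
]

def decision_relevant(world):
    _, realism, access, _ = world
    return realism and access  # ignore the rest of the probability space

relevant = [w for w in worlds if decision_relevant(w)]
total_mass = sum(p for p, *_ in relevant)

def expected_reason(action):
    # Renormalise over the decision-relevant worlds only.
    return sum(p * scores[action] for p, _, _, scores in relevant) / total_mass

print(max(["holiday", "donate"], key=expected_reason))  # -> donate on these toy numbers
```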
It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems particularly true of one specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
To see this, imagine that moral reasons were all centred around maximising the number of paperclips in the universe. It's not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons, is more complicated; see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.
There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.
The first is that there is some sort of built-in teleology to the universe which results in certain ends being brought about. John Leslie's axiarchism is one example of this, on which what exists, exists because it is good. This might plausibly bring about humans with correct moral beliefs, as knowing correct moral beliefs might itself be intrinsically good. However, many, myself included, will find this sort of metaphysics quite unlikely. Separately, the possibility of this theory is unlikely to count against my argument, since it is also likely to be a metaphysics in which God exists: God's existence is itself typically considered a good and so would also be brought about.
The other apparent option is that evolution was guided by some sort of designer. The most likely form of this directing power stems from the existence of God or Gods. If an omniscient God exists, then God would know all moral facts and, had he so desired, could easily have engineered it so that humans had reliable moral beliefs.
Another design option is that we were brought about by a simulator; simulators would also have the power to engineer the moral beliefs of humans. However, it's not clear how these simulators would have reliable access to the relevant moral facts themselves in order to correctly program them into us. The question we are asking of how we could trust our moral views under unguided evolution could equally be asked of our simulators, and of their simulators in turn if the chain of simulation continues. As a result, it's not clear that considering worlds in which we are simulated is going to be decision-relevant by the second of our two criteria, unless our simulators had their moral beliefs reliably programmed by God.
Given this, the only worlds in which humans end up with reliable moral beliefs seem to be worlds in which God exists. As such, according to our criteria above, when deciding how we ought to act we need only consider possible worlds in which God exists. Therefore, when Tina is choosing between her two options she ought to ask herself which option she would have most reason to choose if she existed in a theistic world.
To complete her analysis of what action to take, she should consider, for each of the possible theisms: (i) how likely it is, (ii) how likely it is to co-exist with normative realism, (iii) how likely it is that the God(s) of this theism would give her reliable access to moral facts, and (iv) how choice-worthy the two actions are on that theistic possibility.
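As a rough illustration of how (i)-(iv) could be combined, here is a toy sketch in the same spirit as the one above. The list of theisms, the probabilities, and the choice-worthiness scores are all hypothetical placeholders rather than anything argued for in this post.

```python
# Hypothetical sketch of combining factors (i)-(iv). Every number below is
# an invented placeholder, not a claim about any actual theism.

theisms = [
    {"prior": 0.5,            # (i) how likely this theism is
     "p_realism": 0.9,        # (ii) chance it co-exists with normative realism
     "p_access": 0.8,         # (iii) chance its God(s) give reliable access to moral facts
     "value": {"holiday": 0.1, "donate": 0.9}},   # (iv) choice-worthiness of each action
    {"prior": 0.5,
     "p_realism": 0.7,
     "p_access": 0.5,
     "value": {"holiday": 0.4, "donate": 0.6}},
]

def weighted_choice_worthiness(action):
    return sum(t["prior"] * t["p_realism"] * t["p_access"] * t["value"][action]
               for t in theisms)

for action in ("holiday", "donate"):
    print(action, round(weighted_choice_worthiness(action), 3))
```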
Appendix on prudential reasons
Other than moral reasons, the other category of reasons commonly discussed are prudential reasons. These are reasons of self-interest. For example, one may have strong moral reasons to jump on a grenade to save the lives of one’s comrades but some think it’s likely that one has a prudential reason not to sacrifice one’s life in this way.
If prudential reasons exist then it seems more plausible that humans would know about them compared to moral reasons: prudential reasons pertain to what it is in my interests to do, and I have at least some access to myself. Still, it’s not guaranteed that we have access to prudential reasons. If prudential reasons exist, a baby presumably has a prudential reason to be inoculated even if it has no access to this fact at the time of the inoculation.
It seems unlikely to me that prudential reasons exist in many worlds in which normative realism holds. However, even if they do exist, we would need to consider how to weigh prudential and moral reasons, especially when moral reasons pull in one way and prudential reasons pull in the other.
It’s tempting to say that it will just depend on the comparative strengths of the moral and prudential reasons in any given case. However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do. As such, I prefer a view where moral reasons have a ‘lexical’ priority over prudential reasons, which is to say that when choosing between two actions, we should do whichever action has most moral reason for it and only consider the prudential reasons if both actions are equally morally choice-worthy.
Still, my previous analysis would need to be tempered by any uncertainty surrounding the possible existence of prudential reasons, for unguided evolution might plausibly give a human access to them. If there is also uncertainty about whether moral reasons always dominate prudential reasons, then the possibility of prudential reasons in non-God worlds will need to be factored into one's decision procedure.
It's been many years (about 6?) since I've read an argument like this, so, y'know, you win on nostalgia. I also notice that my 12-year-old self would've been really excited to be in a position to write a response to this, and given that I've never actually responded to this argument outside of my own head (and am otherwise never likely to in the future), I'm going to do some acausal trade with my 12-year-old self here: below are my thoughts on the post.
Also, sorry it's so long, I didn't have the time to make it short.
I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here's what seems to me to be a key crux of this arg (I've bolded the key sentences):
Object-level response: this is confused about how values come into existence.
The things I care about aren't written into the fabric of the universe. There is no clause in the laws of physics to distinguish what's good and bad. I am a human being with desires and goals, and those are things I *actually care about*.
For any 'moral' law handed to me on high, I can always ask why I should care about it. But when I actually care, there's no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking "Yeah, but why should I care about this?" These sorts of things I'm happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.
(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright's "The Moral Animal" was really great, and Joshua Greene's "Moral Tribes" is a slightly more abstract version that also contains some key insights about how morality actually works.)
My model of the person who believes the OP wants to say
To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there's something else I should care about instead, the world just makes sense now.
To point to an example of the process turning out the other way: there's been a variety of updates I've made where I no longer trust or endorse basic emotions and intuitions, since a variety of factors have all pointed in the same direction:
These have radically changed which of my impulses I trust and endorse and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups of very different scales and failing at their goal, so I learn to ignore those and teach myself to do normative reasoning (e.g. taking orders of magnitude into account intuitively), because it's what I reflectively care about.
I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn't in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong evidential backing described above, isn't how this works.
Meta-level response: I don't trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I'm actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.
I haven't personally read any of his books, but I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. The most recent wave of this philosophy-of-religion stuff, though, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public debater William Lane Craig (whom I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.
Here are some relevant quotes from Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):
My impression is that it's fair to characterise modern apologetics as searching for arguments to provide in defense of pre-existing beliefs, rather than as the cause of those beliefs or an accurate model of the world. Recall the principle of the bottom line:
My high-confidence understanding of the whole space of apologetics is that the process generating these arguments is, on a basic level, not systematically correlated with reality (and man, argument space is so big that just choosing which hypothesis to privilege is most of the work, so it's not even worth exploring the particular mistakes made once you've reached this conclusion).
This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument as severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and in understanding their views, because those models have predicted lots of other really important stuff. Philosophy of religion, by contrast, is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor in understanding some phenomenon of the world where it has actually made progress; it is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don't think it's worth spending time engaging with intellectually.
If you find yourself confused by a theologian's argument, I don't mean to say you should ignore that and pretend that you're not confused. That's a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting, or useful, it will just end up being a silly error. I also don't expect the field of theology / philosophy of religion / apologetics to accept your result, I think there will be further confusions and I think this is fine and correct and you should move on with other more important problems.
---
To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here; it isn't meant to signal that I'll engage with further comments in this thread any less than usual :)