I'd be happy to chat about it if helpful. I helped found EA for Christians and have spent a lot of time thinking about different word choices for our name, though you may already be in contact with members of our team.

Our Facebook group is called 'Christians and Effective Altruism'. I think this wording allows Christians who don't yet feel comfortable fully aligning with EA to join and participate, which has been useful for us in terms of outreach.

Then, for the name of our actual org, I see three options: (i) 'EA for Christians', (ii) 'Christians in EA', and (iii) some wording like our Facebook group name, using 'and'. Whilst (ii) feels the cleanest, as noted above it reads as an affinity group without the outreach edge which is a core part of our org. (iii) is also inoffensive, but it sounds like it lacks a mission, which I think can also be unattractive when doing outreach. I like the fact that in (i) Christianity is spotlighted, which fits with the way Christians are encouraged to treat their Christian identity as the most central one they have. The downside is that it risks sounding like EA is being used in support of Christians, which obviously isn't our goal; rather, the 'for' is meant to imply that EA provides an invaluable toolkit to aid Christians in their God-given mission to serve others.
I'm also a current UK Civil Servant and agree with Kirsten. I don't think doing a Masters in public policy will do much to help your application. Obviously there can be lots of good reasons to do one, but I wouldn't treat its helping you get into the UK Civil Service as a major factor.
Thanks for another excellent post. I continue to get a lot out of your writing, so please keep it coming!

I've always found 'bindingness' the most intuitive term for getting at what it's like to be under the purview of a normative ought. You can choose to ignore the ought, or fail to realise that it's there, but regardless it'll be there binding you until you comply; and whilst you don't comply, your hypothetical normative life score is shedding points and you're slipping down the league table. (Note that I'm coming from a normative realist perspective.)
Ultimately, I think my view is that what one ought to do is just what there is all-things-considered most reason to do, and I've always had the intuition that what it means to have a reason to do something is primitive and not amenable to deeper analysis. I'd be interested in whether you think having a normative reason is a primitive concept, and in any useful reading you might know on the topic.
This was one of my favourite EA Forum posts in a long time, thanks for sharing it!

Externalist realism is my preferred theory, though I think we'd probably need something like God to exist in order for humans to have epistemic access to any normative facts. I've spent a bit of time reflecting on the "they understand everything there is to understand; they have seen through to the core of reality, and reality, it turns out, is really into helium. But they, unfortunately, aren’t." style of case. Of course it'd be really painful, but I think the appropriate response would be to understand the issue as one of human motivational brokenness: something has gone wrong in my wiring such that my motivations are not functioning properly, being out of kilter with what there is all-things-considered most reason to do, namely promote helium. That doesn't mean I'm to blame for the mismatch. But I'd hope that I'd then push this acknowledgement of my motivational brokenness into a course of action, to see if I can get my motivations to fall in line with the normative truth.
On the hell case (which feels personally relevant as an active Christian), I think I'd take a lot of solace during my internment from the fact that this is just what there is all-things-considered most reason to happen. If my dispositions/motivations fail to fall in line, then as above they are failing to function properly, and I think/hope that acknowledging this would take some of the edge off the dissonance of not being able to understand why this is a just punishment.
As a data point, I found this super useful and would love to see these happen for each episode. Two particular ways I'd benefit: (i) typically there are a few especially interesting bits in each episode that I find novel/helpful, and later reading over a post which restates them helps them sink in more; (ii) sometimes I skip an episode based on the title, but I would read over something like this to glean any quick useful points, and then maybe listen to the whole thing if it looked particularly useful.

I haven't ever (and doubt I will) read over a full transcript, so posting those wouldn't do the same thing. Also, putting the particularly interesting insights in comments allows upvoting to triage the insights that are most useful for the community.
Sometimes I get a bit overwhelmed by just how vast the terrain of doing good is, how many niche questions there are to explore and interventions to test, and how little time/bandwidth I have to figure things out. Then I remember that I'm part of this incredible community of thousands of thoughtful and motivated people who are each beavering away on a small patch of the terrain, turning over the stones, and incrementally building a better view of the territory and therefore of what our best bets are. It fills me with real hope and joy that, in some important sense, the graft other people are putting in psychologically frees me up to double down on my own small patch with even more vigour, knowing that others will find gold veins in parts of the terrain that I miss.
Thanks for organising; I always enjoy filling it in each year! Did the questions on religious belief/practice get dropped this year? Or perhaps I just autopiloted through them without noticing. I'm aware there are lots of pressures to keep the question count low, but to flag: at EA for Christians we always found those questions helpful for understanding that side of the EA community.
Human utility functions seem clearly inconsistent with infinite utility.
If you're not 100% sure that they are inconsistent, then presumably my argument still goes through, because you'll have a non-zero credence that actions can elicit infinite utilities, and so their expected utility is infinite?
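To spell out the arithmetic I have in mind (my own formalisation, not anything from the original exchange): if $p > 0$ is your credence that an action elicits infinite utility and $u$ is its finite utility otherwise, then

$$
\mathbb{E}[U] \;=\; p \cdot \infty \;+\; (1-p) \cdot u \;=\; \infty \quad \text{for any } p > 0.
$$

So long as the credence isn't exactly zero, the infinite branch dominates the expectation.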
I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate as infinite utility.
So maybe from the self-interest perspective you discount future experiences. However, from a moral perspective that doesn't seem relevant: these are experiences and they count the same, so if there are an infinite number of positive experiences then they would sum to an infinite utility. Also note that even if your argument applied in the moral realm too, then unless you're 100% sure it does, my reply to your other point will work here as well?
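To illustrate the contrast (a minimal sketch on my own assumptions of exponential discounting and a constant per-period utility $u > 0$): a prudential discounter with discount factor $0 < \delta < 1$ values an infinite stream of experiences at

$$
\sum_{t=0}^{\infty} \delta^{t} u \;=\; \frac{u}{1-\delta} \;<\; \infty,
$$

whereas impartial, undiscounted summation of the same stream gives $\sum_{t=0}^{\infty} u = \infty$. That's why discounting can block the infinity for self-interest but not, on my view, for morality.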
I've found the conversation productive, thanks for taking the time to discuss.
My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.
Impartial reasons would be reasons that would 'count' even if we were some sort of floating consciousness observing the universe without any specific personal interests.
I probably don't have any more intuitive explanations of impartial reasons than that, so sorry if it doesn't convey my meaning!
My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.
I do think they are correlated, because according to my intuitions both are true of moral reasons. However I wouldn't want to argue that (2) is true because (1) is true. I'm not sure why (2) is true of moral reasons. I just have a strong intuition that it is and haven't come across any defeaters for that intuition.
A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.
This seems false to me. It's typically thought that an omniscient being (by definition) could know these non-natural, irreducibly normative facts. All we'd need is some mechanism that connects humans with them. One mechanism, as I discuss in my post, is that God puts them in the brains of humans. We might wonder how God could know the non-natural facts; one explanation is that God is the truthmaker for them, and if he is, it seems plausible he would know them.
On your three options, (a) seems closest to what I believe. Note that my preferred definitions would be:
'What I have most prudential reason to do is what benefits me most (benefits in an objective rather than subjective sense).'
'What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe).'
To be clear, it's very plausible to me that what 'benefits you most' is not necessarily what you desire most, as shown by Parfit's discussion of future Tuesday indifference mentioned above. That's why I include the 'objective' caveat.