All of slicedonions's Comments + Replies

Thanks for writing this up! I'm also a civil servant with a similar length of tenure. To add my two cents for other readers considering a career in the civil service, I've not found particular issues with the democracy and comprehensivity values you raise so far.

On the democracy value: I see a simplified three-stage process for policy work, and in my experience, if you're working on an impactful policy area, it often leaves lots of space for satisfying EA action:

  1. Provide the most thoughtful and careful advice you can given any institutional constraints you f
... (read more)

I’m an EA working on improving animal welfare who interacts quite a lot (in non-adversarial contexts) with various big meat companies as part of my job. I think it's a real oversight that we aren't encouraging more people (especially those with more consequentialist ethical outlooks) to go and work within industry and help it improve from the inside (this also applies to joining buying teams in supermarkets). Whilst there are real constraints, I think bright people could get promoted quickly and then push for incremental improvements which could benefit large numbers of farm animals.

Happy to chat more to anyone interested!

This is excellent and is amongst the most thoughtful reflections on impact via policy that I've read, so thanks for writing it up.

I lead a policy team in the UK Civil Service working on a classic EA cause area and lots of the content here about routes to impact chimes with my experience, though obviously the institutional set-up in the UK is different.

From my experience, the points under the heading 'Big impact wins require taking the time to look for non-obvious opportunities' are incredibly important. There are often super exciting opportunities for imp... (read more)

Hmmmmmm, around 10 days is my best current guess, but I wouldn't be surprised if it should be more like 100 days. I assume crushing involves extreme pain, which crating doesn't, so it wouldn't be outrageous to me if it ended up being more like that.

I'd be happy to chat about it if helpful. I helped found EA for Christians and have spent a bunch of time thinking about different word choices for our name, though you may already be in contact with members of our team.

Our Facebook group is called 'Christians and Effective Altruism'. I think this wording allows Christians who don't yet feel comfortable fully aligning with EA to join and participate, which has been useful for us in terms of outreach.

Then in terms of the name for our actual org, I see three options: (i) 'EA for Christians', (ii) 'Ch... (read more)

2
BenSchifman
3y
Thank you Alex and Jeremy! This input is very helpful, and has informed some discussions I've recently had with potential volunteers about what we should call the group. Alex, I've talked with Caleb from your organization, but I think it would be great to bounce some ideas off of you as well re: this branding issue, which it seems you've given a lot of thought to. For now I'm still going through the list of folks interested in volunteering, but once we get a bit further along I'll send you a message!

I'm also a current UK Civil Servant and agree with Kirsten. I don't think doing a Master's in public policy is going to do much to help your application. Obviously, there can be lots of good reasons to do one, but I wouldn't treat its helping you get into the UK Civil Service as a major factor.

Thanks for another excellent post. I continue to get a lot out of your writing, so please keep it coming!

I've always found 'bindingness' the most intuitive term I can reach for to get at what it's like to be under the purview of a normative ought. You can choose to ignore the ought, or fail to realise that it's there, but regardless it'll be there binding you until you comply, and whilst you don't comply your hypothetical normative life score is jettisoning points and you're slipping down the league table. (Note I'm coming from a normati... (read more)

This was one of my favourite EA Forum posts in a long time, thanks for sharing it!

Externalist realism is my preferred theory, though I think we'd probably need something like God to exist in order for humans to have epistemic access to any normative facts. I've spent a bit of time reflecting on the "they understand everything there is to understand; they have seen through to the core of reality, and reality, it turns out, is really into helium. But they, unfortunately, aren’t." style case. Of course it'd be really painful, but I think the appropriate... (read more)

2
Joe_Carlsmith
3y
Glad to hear it :) Re: "my motivational system is broken, I'll try to fix it" as the thing to say as an externalist realist: I think this makes sense as a response. The main thing that seems weird to me is the idea that you're fundamentally "cut off" from seeing what's good about helium, even though there's nothing you don't understand about reality. But it's a weird case to imagine, and the relevant notions of "cut off" and "understanding" are tricky.

As a data point, I found this super useful and would love to see these happen for each episode. Two particular ways I'd benefit: (i) typically there are a few especially interesting bits in each episode which I found novel/helpful, and reading over a post later that restates those will help them sink in more; (ii) sometimes I skip an episode based on the title, but would read over something like this to glean any quick useful points and then maybe listen to the whole thing if it looked particularly useful.

I haven't ever (and doubt I will) re... (read more)

Sometimes I get a bit overwhelmed by just how vast the terrain of doing good is, how many niche questions there are to explore and interventions to test, and how little time/bandwidth I have to figure things out. Then I remember that I'm part of this incredible community of thousands of thoughtful and motivated people who are each beavering away on a small patch of the terrain, turning over the stones, and incrementally building a better view of the territory and therefore what our best bets are. It fills me with real hope and joy that in some important se... (read more)

Thanks for organising; I always enjoy filling it in each year! Did questions on religious belief/practice get dropped this year? Or perhaps I just autopiloted through them without noticing. I'm aware that there are lots of pressures to keep the question count low, but to flag that, as part of EA for Christians, we always found those questions helpful for understanding that side of the EA community.

7
David_Moss
3y
Thanks Alex! Yeh, due to the space constraints you mention, we're planning to run some questions (which mostly stay very similar across multiple years) only every other year. The same thing happened to politics and diet. This is, of course, not ideal, since it means that we can't include these variables in our other models or examine, for example, differences in satisfaction with EA among people with different religious stances or politics, every other survey. Thanks for explicitly mentioning that you found these variables useful. That should help inform discussion in future years about what questions to include.

Human utility functions seem clearly inconsistent with infinite utility.

If you're not 100% sure that they are inconsistent, then presumably my argument is still going to go through, because you'll have a non-zero credence that actions can elicit infinite utilities and so are infinite in expectation?
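To spell that step out (a minimal sketch in my own notation, setting negative infinities aside as before): write $p > 0$ for your credence that a given action $A$ elicits infinite utility. Then

$$
\mathbb{E}[U(A)] \;=\; p\cdot(+\infty) \;+\; (1-p)\cdot\mathbb{E}\!\left[U(A)\mid \text{finite payoff}\right] \;=\; +\infty ,
$$

however small $p$ is, so the infinite branch swamps everything else in the expectation.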

I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate a
... (read more)
6
PeterMcCluskey
5y
I think it's more appropriate to use Bostrom's Moral Parliament to deal with conflicting moral theories. Your approach might be right if the theories you're comparing used the same concept of utility, and merely disagreed about what people would experience. But I expect that the concept of utility which best matches human interests will say that "infinite utility" doesn't make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same. Similarly, I use a dealist approach to morality. If you show me an argument that there's an objective morality which requires me to increase the probability of infinite utility, I'll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom's parliament than like your approach.

I've found the conversation productive, thanks for taking the time to discuss.

My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

Impartial reasons would be reasons that would 'count' even if we were some sort of floating consciousness observing the universe without any specific personal interests.

I probably don't have any more intuitive explanations of impartial reasons than that, so sorry if it doesn't convey my meaning!

4
Rohin Shah
5y
My math-intuition says "that's still not well-defined, such reasons may not exist".

To which you might say "Well, there's some probability they exist, and if they do exist, they trump everything else, so we should act as though they exist."

My intuition says "But the rule of letting things that could exist be the dominant consideration seems really bad! I could invent all sorts of categories of things that could exist, that would trump everything I've considered so far. They'd all have some small probability of existing, and I could direct my actions any which way in this manner!" (This is what I was getting at with the "meta-oughtness" rule I was talking about earlier.)

To which you might say "But moral reasons aren't some hypothesis I pulled out of the sky, they are commonly discussed and have been around in human discourse for millennia. I agree that we shouldn't just invent new categories and put stock into them, but moral reasons hardly seem like a new category."

And my response would be "I think moral reasons of the type you are talking about mostly came from the human tendency to anthropomorphize, combined with the fact that we needed some way to get humans to coordinate. Humans weren't likely to just listen to rules that some other human made up, so the rules had to come from some external source. And in order to get good coordination, the rules needed to be followed, and so they had to have the property that they trumped any prudential reasons. This led us to develop the concept of rules that come from some external source and trump everything else, giving us our concept of moral reasons today. Given that our concept of "moral reasons" probably arose from this sort of process, I don't think that "moral reasons" is a particularly likely thing to actually exist, and it seems wrong to base your actions primarily on moral reason. Also, as a corollary, even if there do exist reasons that trump all other reasons, I'm more likely to reject the intuition that i

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.

I do think they are correlated, because according to my intuitions both are true of moral reasons. However, I wouldn't want to argue that (2) is true because (1) is true. I'm not sure why (2) is true of moral reasons. I just have a strong intuition that it is and haven't come across any defeaters for that intuition.

A secondary claim is that if it does not satisfy property 3, then you can never i
... (read more)
1
Rohin Shah
5y
Okay, cool, I think I at least understand your position now. Not sure how to make progress though. I guess I'll just try to clarify how I respond to imagining that I held the position you do. From my perspective, the phrase "moral reason" has both the connotation that it is external to humans and that it trumps all other reasons, and that's why the intuition is so strong. But if it is decomposed into those two properties, it no longer seems (to me) that they must go together. So from my perspective, when I imagine how I would justify the position you take, it seems to be a consequence of how we use language. My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

There seems to be something that makes you think that moral reasons should trump prudential reasons.

The reason I have is in my original post: namely, I have a strong intuition that it would be very odd to say that someone who had done what there was most moral reason to do had failed to do what there was most 'all things considered' reason for them to do.

If my intuition here is right, then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition; sorry if I made it seem like I did!

On you... (read more)

2
Rohin Shah
5y
I did mean for you to replace X with a phrase, not a number. Your intuition involves the complex phrase "moral reason" for which I could imagine multiple different interpretations. I'm trying to figure out which interpretation is correct. Here are some different properties that "moral reason" could have:

1. It is independent of human desires and goals.
2. It trumps all other reasons for action.
3. It is an empirical fact about either the universe or math that can be derived by observation of the universe and pure reasoning.

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that. A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.

Here are some models of how you might be thinking about moral reasons:

a) Moral reasons are defined as the reasons that satisfy property 1. If I think about those reasons, it seems to me that they also satisfy property 2.
b) Moral reasons are defined as the reasons that satisfy property 2. If I think about those reasons, it seems to me that they also satisfy property 1.
c) Moral reasons are defined as the reasons that satisfy both property 1 and property 2.

My responses to a) and b) are of the form "That inference seems wrong to me and I want to delve further." My response to c) is "Define prudential reasons as the reasons that satisfy property 2 and not-property 1, then prudential reasons and moral reasons both trump all other reasons for action, which seems silly/strange."

Yep, that seems right, though you might want more than one believer in each, in case one of the assigned people messes it up somehow.

Thanks @trammell.

Will read up on stochastic dominance; it will presumably bring me back to my micro days thinking about lotteries...

Note that I think there may be a way of dealing with it whilst staying in the expected utility framework, where we ignore undefined expected utilities, as they are not action-guiding, and instead focus on the part of our probability space where they don't emerge. In this case I suggest we should only focus on worlds in which you can't have both negative and positive infinities. We'd assume in our analysis that only ... (read more)
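To sketch the restriction slightly more formally (a rough rendering in my own notation, taking as one concrete version the case where negative infinities are excluded): let $H$ be the set of worlds in which negative infinite payoffs cannot arise, so the troublesome $\infty - \infty$ combinations are ruled out. The idea is then to rank actions by the conditional expectation

$$
\mathbb{E}\!\left[U(A)\mid H\right] \;=\; \sum_{w\in H} P(w\mid H)\,U(A,w),
$$

which sidesteps the undefined $\infty - \infty$ form and so can still be action-guiding.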

1
trammell
5y
I was just saying that, thankfully, I don’t think our decision problem is wrecked by the negative infinity cases, or the cases in which there are infinite amounts of positive and negative value. If it were, though, then okay—I’m not sure what the right response would be, but your approach of excluding everything from analysis but the “positive infinity only” cases (and not letting multiple infinities count for more) seems as reasonable as any, I suppose. Within that framework, sure, having a few thousand believers in each religion would be better than having none. (It’s also better than having everyone believe in whichever religion seems most likely, of course.) I was just taking issue with “it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances”.

Agreed that unguided evolution might give us generally reliable cognitive faculties. However, there is no obvious story we can give for how our cognitive faculties would have access to moral facts (if they exist). Moral facts don't interact with the world in a way that gives humans a way to ascertain them. They're not like visual data, where reflected/emitted photons can be picked up by our eyes. So it's not clear how information about them would enter our cognitive system.

I'd be interested in your thoughts on a mechanism whereby information about moral facts could enter our cognitive systems.

Thanks for the really thoughtful engagement.

I don't know how to argue against this, you seem to be taking it as axiomatic.

I agree; my view stems from a bedrock of intuition: just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream'. It doesn't seem obvious to me that the desire fact is much more likely to create normative reasons than the table fact. If we don't think the t... (read more)

2
Rohin Shah
5y
There seems to be something that makes you think that moral reasons should trump prudential reasons. The overall thing I'm trying to do is narrow down on what that is. In most of my comments, I've thought I've identified it, and so I argued against it, but it seems I'm constantly wrong about that. So let me try and explicitly figure it out: How much would you agree with each of these statements:

* If there is a conflict between moral reasons and prudential reasons, you ought to do what the moral reasons say.
* If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions one ought to take, then you ought to do what that process prescribes, regardless of what you desire.
* If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions to take to maximize utility, then you ought to do what that process prescribes, regardless of what you desire.
* If there is an external-to-you entity satisfying property X that prescribes actions you should take, then you ought to do what it says, regardless of what you desire. (For what value of X would you agree with this statement?)

I also have a very low credence of that meta-normative rule. I meant to contrast it to the meta-normative rule "binding oughtness trumps regular oughtness", which I interpreted as "moral reasons trump prudential reasons", but it seems I misunderstood what you meant there, since you mean "binding oughtness" to apply both to moral and prudential reasons, so ignore that argument.

This makes me mildly worried that you aren't able to imagine the worldview where prudential reasons exist. Though I have to admit I'm confused why under this view there are any normative reasons for action -- surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion. (Btw, random note, I suspect tha

So the claim I'm trying to defend here is not that we should be willing to hand over our wallet in Pascal's mugging cases.

Instead it's a conditional claim: if you are the type of person who finds the mugger's argument compelling, then the logic which leads you to find it compelling actually gives you reason not to hand over your wallet, as there are more plausible ways of attempting to elicit the infinite utility than dealing with the mugger.

1
Rohin Shah
5y
I see, that makes sense, and I agree with it.

Thanks for the interesting post. One thought I have is developed below. Apologies that it only tangentially relates to your argument, but I figured that you might have something interesting to say.

Ignoring the possibility of infinite negative utilities, all possible actions seem to have infinite positive utility in expectation, for all actions have a non-zero chance of resulting in infinite positive utility. For it seems that for any action there's a very small chance that it results in me getting an infinite bliss pill, or to go Pascal's route t... (read more)

1
Rohin Shah
5y
I and most other people (I'm pretty sure) wouldn't chase the highest probability of infinite utility, since most of those scenarios are also highly implausible and feel very similar to Pascal's mugging.

I think your argument is that we should ignore worlds without a binding oughtness.

Agreed, I'm just using 'binding oughtness' here as a (hopefully) more intuitive way of fleshing out what I mean by 'normative reason for action'.

But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons

So I agree that if there are no normative reasons/'binding oughtness' then you would still have your mere desires. However these jus... (read more)

2
Rohin Shah
5y
I don't know how to argue against this, you seem to be taking it as axiomatic. The one thing I can say is that it seems clearly obvious to me that your desires and goals can make some actions better to choose than others. It only becomes non-obvious if you expect there to be some external-to-you force telling you how to choose actions, but I see no reason to assume that. It really is fine if your actions aren't guided by some overarching rule granted authority by virtue of being morality. But I suspect this isn't going to convince you. Can we simply assume that prudential reasons exist and figure out the implications?

Thanks, I think I've got it now. (Also it seems to be in your appendix, not sure how I missed that before.)

I know, and I think in the very next paragraph I try to capture your view, and I'm fairly confident I got it right based on your comment.

This seems tautological when you define morality as "binding oughtness" and compare against regular oughtness (which presumably applies to prudential reasons). But why stop there? Why not go to metamorality, or "binding meta-oughtness" that trumps "binding oughtness"? For example, "when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons". It is again tautologically true that a person who does what there is most metamoral reason to do could not have failed to do what there was most all things considered reason for them to do. It doesn't sound as compelling, but I claim that is because we don't have metamorality as an intuitive concept, whereas we do have morality as an intuitive concept.

Yep, I think what you suggest isn't far from the truth. Though note I'm open to the possibility of normative realism being false: it could be that we are all fooled and that there are no true moral facts.

I just think this question of 'what grounds this moral experience' is the right one to ask. On the way you've articulated it, your mere feelings about behaviours don't amount to normative reasons for action, unless you can explain how these normative properties enter the picture.

Note that normative reasons are weir... (read more)

1
Ben Pace
5y
<unfinished>

This is a really nice way of formulating the critique of the argument, thanks Max. It makes me update considerably away from the belief stated in the title of my post.

To capture my updated view, it'd be something like this: for those who have what I'd consider a 'rational' probability for theism (i.e. between 1% and 99%, given my last couple of years of doing philosophy of religion) and a 'rational' probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5%; less confident here), the result of my argument is that a substantial proportion of an agent's decision space should be governed by what reasons the agent would face if theism were true.

Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit.

I wholeheartedly agree with this. However, there is no structural reason to think that most possible sets of moral facts would have evolutionary benefit. You outline one option where there would be a connection; however, it would be surprisingly lucky on our part if that turned out to be the story behind morality.

We would also need to acknowledge the possibility that evolution has just tricked us into thinking that common sense morality is correct when really moral facts ... (read more)

My view is broadly that if reasons for action exist which create this sort of binding 'oughtness' in favour of you carrying out some particular thing, then there must be some story about why this binding oughtness applies to some things and not others.

It's not clear to me that mere human desires/goals are going to generate this special property that you now ought to do something. We don't think that the fact that 'my table has four legs' in itself generates reasons for anyone to do anything, so why should the fact that... (read more)

2
Rohin Shah
5y
With that terminology, I think your argument is that we should ignore worlds without a binding oughtness. But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons, but I don't really understand that term -- I thought it was synonymous with 'instrumental' reasons, but taking actions for your own desires and goals is certainly not 'instrumental'. So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.

You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that's true.

Like, I could also argue that actually, you should follow the binding meta-oughtness rule, which tells you how to derive ought statements from is statements, and that should always trump any particular oughtness rule, so you should ignore all of those and follow the most likely meta-oughtness rule. But this seems pretty fallacious. What's the difference?

This is my response to your meta-level response.

I don't trust the intellectual tradition of this argumentative style.

It's not obvious that anyone's asking you to trust anything? Surely those offering arguments are just asking you to assess an argument on its merits, rather than by the family of thinkers the argument emerges from?

But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.

I'm reasonably involved in the apologeti... (read more)

Thanks Ben! I'll try and comment on your object-level response in this comment and your meta-level response in another.

Alas I'm not sure I properly track the full extent of your argument, but I'll try and focus on the parts that are trackable to me. So apologies if I'm failing to understand the force of your argument because I am missing a crucial part.

I see the crux of our disagreement summed up here:

My model of the person who believes the OP wants to say
"Yes, but just because you can tell a story about how evolution would give yo
... (read more)
4
Ben Pace
5y
*nods* I think what I wrote there wasn't very clear. To restate my general point: I'm suggesting that your general frame contains a weird inversion. You're supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others' behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust. In the first one, you would be surprised to find out we've randomly been selected to have the right morality by evolution. In the second, it's almost definitional that evolution has produced us to have the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about. Does the former seem like an accurate description of the way you're proposing to think about morality?

So I think that's broadly right but it's a much narrower argument than Plantinga's.

Plantinga's argument contends that we can't trust our beliefs *in general* if unguided evolution is the case. The argument I defend here makes the narrower claim that it's unlikely we can trust our normative beliefs if unguided evolution is the case.

I've never heard a plausible account of someone solving the is-ought problem; I'd love to check it out if people here have one. To me it seems structurally not to be the sort of problem that can be overcome.

I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal; it can't be the sort of thing where, if someone successfully brainwashes enough people, they can get morality to change. Again, I'd be interested if people here defend a sophisticated view of subjectivism that doesn't have unpalatable results.

4
MaxDalton
5y
To link this to JP's other point, you might be right that subjectivism is implausible, but it's hard to tell how low a credence to give it. If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie). I'm pretty uncertain about my credence in each of those views though.

Good point: this sort of worry seems sensible; for example, if you have a zero credence in God then the argument just obviously won't go through.

I guess, from my assessment of the philosophy of religion literature, it doesn't seem plausible to have a credence in theism so low that background uncertainties about being confused on some basic question of morality would be likely to make the argument all-things-considered unsuccessful.

Regardless, I think that the argument should still result in theism having a larger influence on your decisions than the mere share of your probability space it takes up.