I’m an EA working on improving animal welfare who interacts quite a lot (in non-adversarial contexts) with various big meat companies as part of my job. I think it’s a real oversight that we aren’t encouraging more people (especially those with more consequentialist ethical outlooks) to go and work within industry and help it improve from the inside (this also applies to joining buying teams in supermarkets). Whilst there are real constraints, I think bright people could get promoted quickly and then push for incremental improvements that could improve the welfare of large numbers of farm animals.
Happy to chat more to anyone interested!
This is excellent and is amongst the most thoughtful reflections on impact via policy that I've read, so thanks for writing it up.
I lead a policy team in the UK Civil Service working on a classic EA cause area and lots of the content here about routes to impact chimes with my experience, though obviously the institutional set-up in the UK is different.
From my experience, the points under the heading: 'Big impact wins require taking the time to look for non-obvious opportunities' are incredibly important. There are often super exciting opportunities for imp...
Hmmmmmm, around 10 days is my best current guess, but I wouldn't be surprised if it should be more like 100 days. I assume crushing involves extreme pain which crating doesn't, so it wouldn't be outrageous to me if it ended up being more like that.
I'd be happy to chat about it if helpful; I helped found EA for Christians and have spent a bunch of time thinking about different word choices for our name, though you may already be in contact with members of our team.
Our Facebook group is called 'Christians and Effective Altruism'. I think this wording allows Christians who don't yet feel comfortable fully aligning with EA to join and participate, which has been useful for us in terms of outreach.
Then in terms of the name for our actual org, I see three options (i) 'EA for Christians', (ii) 'Ch...
I'm also a current UK Civil Servant and agree with Kirsten. I don't think doing a Master's in public policy is going to do much to help your application. Obviously, there can be lots of good reasons to do one, but I wouldn't treat it helping you get into the UK Civil Service as a major factor.
Thanks for another excellent post. I continue to get a lot out of your writing, so please keep it coming!
I've always found 'bindingness' the most intuitive term I can reach for to get at what it's like to be under the purview of a normative ought. You can choose to ignore the ought, or fail to realise that it's there, but regardless it'll be there binding you until you comply, and whilst you don't comply your hypothetical normative life score is shedding points and you're slipping down the league table. (Note I'm coming from a normati...
This was one of my favourite EA Forum posts in a long time, thanks for sharing it!
Externalist realism is my preferred theory. Though I think we'd probably need something like God to exist in order for humans to have epistemic access to any normative facts. I've spent a bit of time reflecting on the "they understand everything there is to understand; they have seen through to the core of reality, and reality, it turns out, is really into helium. But they, unfortunately, aren’t." style case. Of course it'd be really painful, but I think the appropriate...
As a data point, I found this super useful and would love to see these happen for each episode. Two particular ways I'd benefit: (i) typically there are a few bits in each episode that I find particularly novel/helpful, and reading over a post later which restates those will help them sink in more; (ii) sometimes I skip an episode based on the title, but I would read over something like this to glean any quick useful things and then maybe listen to the whole thing if it looked particularly useful.
I haven't ever (and doubt I will) re...
Sometimes I get a bit overwhelmed by just how vast the terrain of doing good is, how many niche questions there are to explore and interventions to test, and how little time/bandwidth I have to figure things out. Then I remember that I'm part of this incredible community of thousands of thoughtful and motivated people who are each beavering away on a small patch of the terrain, turning over the stones, and incrementally building a better view of the territory and therefore what our best bets are. It fills me with real hope and joy that in some important se...
Thanks for organising, always enjoy filling it in each year! Did questions on religious belief/practice get dropped this year? Or perhaps I just autopiloted through them without noticing. Aware that there are lots of pressures to keep the question count low, but to flag as part of EA for Christians we always found it helpful for understanding that side of the EA community.
Human utility functions seem clearly inconsistent with infinite utility.
If you're not 100% sure that they are inconsistent, then presumably my argument still goes through, because you'll have a non-zero credence that actions can elicit infinite utilities and so are infinite in expectation?
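The expected-value arithmetic being appealed to here can be sketched as follows (a minimal sketch, assuming the action's payoff is some finite u in the non-infinite case):

```latex
% Let p > 0 be your credence that action a elicits infinite utility,
% and u the finite utility you expect from a otherwise. Then
\[
\mathbb{E}[U(a)] \;=\; p \cdot \infty \;+\; (1 - p) \cdot u \;=\; \infty
\qquad \text{for any } p > 0,
\]
% so any non-zero credence in an infinite payoff makes the action
% infinite in expectation, however small p is.
```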
I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate a...
I've found the conversation productive, thanks for taking the time to discuss.
My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.
Impartial reasons would be reasons that would 'count' even if we were some sort of floating consciousness observing the universe without any specific personal interests.
I probably don't have any more intuitive explanations of impartial reasons than that, so sorry if it doesn't convey my meaning!
My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.
I do think they are correlated, because according to my intuitions both are true of moral reasons. However I wouldn't want to argue that (2) is true because (1) is true. I'm not sure why (2) is true of moral reasons. I just have a strong intuition that it is and haven't come across any defeaters for that intuition.
A secondary claim is that if it does not satisfy property 3, then you can never i...
There seems to be something that makes you think that moral reasons should trump prudential reasons.
The reason I have is in my original post. Namely I have a strong intuition that it would be very odd to say that someone who had done what there was most moral reason to do had failed to do what there was most 'all things considered' reason for them to do.
If my intuition here is right then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition, sorry if I made it seem like I did!
On you...
Yep that seems right, though you might want more than one believer in each in case one of the assigned people messes it up somehow.
Thanks @trammell.
Will read up on stochastic dominance; it will presumably bring me back to my micro days thinking about lotteries...
Note that I think there may be a way of dealing with it whilst staying in the expected utility framework, where we ignore undefined expected utilities, as they are not action-guiding, and instead focus on the part of our probability space where they don't emerge. In this case I suggest we should only focus on worlds in which you can't have both negative and positive infinities. We'd assume in our analysis that only ...
Agreed that unguided evolution might give us generally reliable cognitive faculties. However, there is no obvious story we can give for how our cognitive faculties would have access to moral facts (if they exist). Moral facts don't interact with the world in a way that gives humans a way to ascertain them. They're not like visual data, where reflected/emitted photons can be picked up by our eyes. So it's not clear how information about them would enter our cognitive system?
I'd be interested if you have thoughts on a mechanism whereby information about moral facts could enter our cognitive systems?
Thanks for the really thoughtful engagement.
I don't know how to argue against this, you seem to be taking it as axiomatic.
I agree, my view stems from a bedrock of intuition: just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream' create them. It doesn't seem obvious to me that the desire fact is much more likely to create normative reasons than the table fact. If we don't think the t...
So my claim I'm trying to defend here is not that we should be willing to hand over our wallet in Pascal's mugging cases.
Instead it's a conditional claim: if you are the type of person who finds the Mugger's argument compelling, then the logic which leads you to find it compelling actually gives you reason not to hand over your wallet, as there are more plausible ways of attempting to elicit the infinite utility than dealing with the mugger.
Thanks for the interesting post. One thought I have is developed below. Apologies that it only tangentially relates to your argument, but I figured that you might have something interesting to say.
Ignoring the possibility of infinite negative utilities, all possible actions seem to have infinite positive utility in expectation, for all actions have a non-zero chance of resulting in infinite positive utility. It seems that for any action there's a very small chance that it results in me getting an infinite bliss pill, or to go Pascal's route t...
I think your argument is that we should ignore worlds without a binding oughtness.
Agreed, I'm just using 'binding oughtness' here as a (hopefully) more intuitive way of fleshing out what I mean by 'normative reason for action'.
But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons
So I agree that if there are no normative reasons/'binding oughtness' then you would still have your mere desires. However these jus...
Yep, what you suggest I think isn't far from the mark. Though note I'm open to the possibility of normative realism being false; it could be that we are all fooled and that there are no true moral facts.
I just think this question of 'what grounds this moral experience' is the right one to ask. On the way you've articulated it I just think your mere feelings about behaviours don't amount to normative reasons for action, unless you can explain how these normative properties enter the picture.
Note that normative reasons are weir...
This is a really nice way of formulating the critique of the argument, thanks Max. It makes me update considerably away from the belief stated in the title of my post.
To capture my updated view, it'd be something like this: for those who have what I'd consider a 'rational' probability for theism (i.e. between 1% and 99% given my last couple of years of doing philosophy of religion) and a 'rational' probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5% - less confident here) then the result of my argument is that a substantial proportion of an agent's decision space should be governed by what reasons the agent would face if theism were true.
Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit.
I wholeheartedly agree with this. However, there is no structural reason to think that most possible sets of moral facts would have evolutionary benefit. You outline one option where there would be a connection; however, it would be surprisingly lucky on our part if that were the story behind morality.
We would also need to acknowledge the possibility that evolution has just tricked us into thinking that common sense morality is correct when really moral facts ...
My view is broadly that if reasons for action exist which create this sort of binding 'oughtness' in favour of you carrying out some particular thing, then there must be some story about why this binding oughtness applies to some things and not others.
It's not clear to me that mere human desires/goals are going to generate this special property that you now ought to do something. We don't think that the fact that 'my table has four legs' in itself generates reasons for anyone to do anything, so why should the fact that...
This is my response to your meta-level response.
I don't trust the intellectual tradition of this argumentative style.
It's not obvious that anyone's asking you to trust anything? Surely those offering arguments are just asking you to assess an argument on its merits, rather than by the family of thinkers the argument emerges from?
But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.
I'm reasonably involved in the apologeti...
Thanks Ben! I'll try and comment on your object level response in this comment and your meta level response in another.
Alas I'm not sure I properly track the full extent of your argument, but I'll try and focus on the parts that are trackable to me. So apologies if I'm failing to understand the force of your argument because I am missing a crucial part.
I see the crux of our disagreement summed up here:
My model of the person who believes the OP wants to say
"Yes, but just because you can tell a story about how evolution would give yo...
So I think that's broadly right but it's a much narrower argument than Plantinga's.
Plantinga's argument is that we can't trust our beliefs *in general* if unguided evolution is the case. The argument I defend here makes the narrower claim that it's unlikely we can trust our normative beliefs if unguided evolution is the case.
I've never heard a plausible account of someone solving the is-ought problem, I'd love to check it out if people here have one. To me it seems structurally to not be the sort of problem that can be overcome.
I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal, it can't be the sort of thing that if someone successfully brainwashes enough people then they can get morality to change. Again, I'd be interested if people here defend a sophisticated view of subjectivism that doesn't have unpalatable results.
Good point - this sort of worry seems sensible, for example if you have a zero credence in God then the argument just obviously won't go through.
I guess from my assessment of the philosophy of religion literature it doesn't seem plausible to have a credence so low for theism that background uncertainties about being confused on some basic question of morality would be likely to make the argument all things considered unsuccessful.
Regardless, I think that the argument should still result in the possibility of theism having a larger influence on your decisions than the mere part of your probability space it takes up.
Thanks for writing this up! I'm also a civil servant with a similar length of tenure. To add my two cents for other readers considering a career in the civil service, I've not found particular issues with the democracy and comprehensivity values you raise so far.
On the democracy value - I see a simplified three-stage process for policy work, and in my experience if you're working on an impactful policy area it often leaves lots of space for satisfying EA action:
- Provide the most thoughtful and careful advice you can given any institutional constraints you f
...