
Ben Stevenson

Animal welfare researcher/advocate
1071 karma · London, UK

Comments (87)

That's a really interesting blog, Vasco. Worth its own post.

Thanks Toby. The winning topic was "Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns".

I feel like this doesn't violate your rules, because it's discussing something "directly connected to EA cause areas". Democracy isn't a top-three cause area, but it's not not a cause area. In any case, in my opinion the rules are obviously well-intentioned and mostly helpful, but at the end of the day it would be fine to bend them here.

Hey Vasco, thanks. I have also thought about this but don’t have a clear answer. Each credible grant opportunity will get a shallow dive, but it’s hard to reliably estimate a total time budget before we have a good sense of how many people will apply. Happy to share post-hoc reflections afterwards.

Thanks for the reply. I agree with your specific point but I think it’s worth being more careful with your phrasing. How much we earn is an ethically-charged thing, and it’s not a good thing if EA’s relationship with AI companies gives us a permission structure to lose sight of this.

Edit: to be clear, I agree that “it’s probably not much more than he could earn elsewhere” but disagree that “Eliezer isn’t necessarily out of line in drawing $600k”.

$235K is not very much money. […] $600K is also not much money.


This is false.

Thanks Jim, very interesting. I also feel conflicted, but lean towards taking A.[1]

Here's how I feel about that:

  1. Bracketing feels strange when it asks us to be led by consequences which are small in the grand scheme (e.g., +/- $1; Emily's shoulder), and set aside consequences which are fairly proximate and which clearly dominate the stakes (e.g., +/- <=$1000; killing the terrorist/kid). It doesn't feel so strange when our decision procedure calls on us to set aside consequences which dominate the stakes but don't feel so proximate (e.g., longtermist concerns).
  2. When I look at very specific cases, I can find it hard to tell when I'm dealing with standard expected value under uncertainty, and when I've run into Knightian uncertainty, cluelessness, etc. I'm bracketing out +/- <=$1000 when I say I take A, but I do feel drawn to treating this as a normal distribution around $0.

Ways in which it's disanalogous to animals that might be important:

  1. Animal welfare isn't a one-shot problem. I think the best things we can do for animals involve calculated bets that integrate concern for their welfare into our decision-making more consistently, and teach us about improving their welfare more reliably.
  2. I'm not sure we should be risk-neutral maximisers for animal welfare.
[1] Conditional on being a risk-neutral maximiser who values money linearly. In the real world, I'd shy away from A due to ambiguity aversion and because, to me, -$1000 matters more than +$1000.

This is great! Really exciting.

Looking forward to reading this blog, Mark!

Great stuff! How much of the chicken money went to broilers, and how much to hens? It's mostly cage-free, right?

Thanks Daniel. This looks incredibly important. Huge kudos to AWL for setting up this website and coordinating a response.

When I click "email now", the mailto: function doesn't work for me. (I think this is an issue with my computer settings, not the website). Can you send me the suggested text?

Also, how helpful is it for people outside Africa to write in? I imagine it's still helpful, especially for animal welfare experts, but I want to sense-check that with you.
