Jakub Stencel

Director of Global Development @ Anima International
Working (6-15 years of experience)
491 · Kraków, Poland · Joined Feb 2019

Participation
2

  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group

Comments
19

(Disclaimer: I'm from an animal advocacy group and have been working in the field for over 10 years.)

Just a point on whether footage from farms is representative, based on your point about not trusting it.

I think you are correct to be skeptical of some of the claims made by documentaries; I feel some exaggerate and inflate the weight of their claims to make the documentary more appealing. Apart from my personal problem with bending the truth, I also quite confidently think it's a bad long-term strategy for the movement. But it highly depends on the filmmaker.

But I really want to note that it's very hard to convey to the public what conditions animals live in. You might expect that the more brutal the footage, the better, but that's not the case. We do investigations without the knowledge of farm owners (you can check our footage here - https://animainternational.org/resources/investigations - and use it if needed!) and very often we have to use footage of the less inhumane conditions, because our data shows that on average most people are not receptive to faithfully brutal material. It has to be milder content with enough context for people to sympathize with the animals. So you may expect "cherry-picking" in a different direction than the one you are worried about in terms of representativeness.

There is also the unsurprising problem of people not understanding species' welfare needs, and of animals not showing their suffering in a way compatible with human perception (especially if they are not mammals). So you may see a picture of an animal without any wounds, but it may be suffering greatly because of deprivation of behavioral needs (for example, repetitive behaviors). This is very hard to convey.

So for me, quantitative assessments of suffering between species in farming conditions are the best tool for understanding whether animals suffer and to what degree. But I'll add personally that there is an intuition you get by working with footage, being on farms, and working in the field that is sometimes hard to capture just by looking at the literature (in a similar direction to the point about ground visits when distributing bednets made here). I also wonder whether measurement itself sometimes skews the results.

Generally, my bet is that the more data we get, the more it will show that animals suffer more than we expected. My very strong view is that there is sufficient information, and that it's mostly due to biases that make us discount the welfare needs of other beings that some people remain undecided on this issue (i.e. we treat the interests of beings unlike us - animals, future people, digital minds, etc. - as less important, based on evolutionary heuristics instead of reasoning). That is, unless someone holds Yudkowsky's view of sentience, which I strongly disagree with (or, to be more correct, I disagree with my understanding of that view), though it seems logical and coherent to me.

Thank you again for this work and for posting it on the EA Forum. I love the presentation of the research summary.

I was considering writing something similar, but investing time in writing a post is hard, so thank you for writing this.

While I understand the emotions we all feel, I'm under the impression that the effective altruism community is now reacting in exactly the way it tries to educate the public NOT to - using anecdotal examples and emotional states to guide decision-making. While it's very human to react this way, I find myself growing more and more anxious about this direction as I observe the discourse.

Of course, what happened is devastating, and there is a lot of constructive feedback to give and good questions to ask, but I hope we can preserve our commitment to compassion and truth-seeking even in hard moments like this. What worries me can be seen in how demanding people are of EA leaders - to the point of almost blaming them for this without enough evidence or sympathy. And even proposals that are not that good earn a lot of support, because I feel we are all incentivized to be critical of EA (and it's good to feel better about ourselves by distancing ourselves from the misery that FTX caused).

I'm also worried about EA doing things now (both the community here and established orgs) to influence optics rather than for the sake of integrity. It would be worrying if true, because it may lead to recreating potential errors, like people silencing themselves instead of having good-faith arguments. I hope that accountability will prevail and that both the community and individuals will be open about where we screwed up and, if needed, face consequences, instead of protecting the brand of EA at any cost.

But I want to mention that I'm also incredibly impressed by some people here, and generally very happy to consider myself part of this community. I admire the courage, integrity, and sobriety of thinking of many people here. After recently spending way more time on the EA Forum than I should, I came to the conclusion that I would especially like to mention Habryka for his behavior and comments during this last period. It's really a privilege to have such people in the EA community (and I'm really sorry for not mentioning others behaving in a similar way whom I didn't notice).

I applaud you for writing how you feel despite the social incentives against it.

It seems to me that the main way for our community to avoid future devastating mistakes like SBF/FTX is to have more posts like this, and norms that encourage dissenting opinions and push against hype (anti-hype?).

Especially if it's true that people had heard rumors about problems, or had reasons to act on information regarding SBF's character, but silenced themselves. Socially punishing these kinds of posts seems like recreating the environment for that kind of moral and truth-seeking failure.

On a related note, it's a bit problematic that main posts don't have disagree voting, because maybe people are voting on whether they agree and don't necessarily want to punish you for expressing your feelings.

This is very helpful and transparent.

Thank you for sharing this with the community and emphasizing the role of integrity for effective altruists.

I think distinguishing domestication from exploitation is a good point in itself (and I upvoted it for this), but it doesn't necessarily address what the comment about exploitation is pointing at.

I believe the argument is that any efficient use of animals will lead to the industrialization of breeding, farming, etc., and it's then hard to align incentives so that the results are net positive for both humans and other species. At the very least, I believe we have an extremely poor track record here.

I really enjoyed your frankness.

From reading what you wrote, I have a suspicion that you may not be a bad person. I don't want to impose anything on you, and I don't know you, but from the post you seem mainly to be ambitious and to have a high level of metacognition. Although it's possible that you are narcissistic and I'm being swayed by your honesty.

When it comes to being "bad" - have you read Reducing long-term risks from malevolent actors? It discusses at length what it means to be a bad actor. You may want to see how many of these traits apply to you. Note that these traits lie on a spectrum and have to be somewhat prevalent in the population because they increase genetic fitness in certain contexts, so it's a matter of degree.

Regarding status: I would be surprised if a significant portion of EAs, or even the majority, were not status-driven. My understanding is that status is a fundamental human motive. This is not a claim about whether that's good or bad, but rather a point that there may be a lot of selfish motivations here. In fact, I think what effective altruism nailed is hacking status in a way that is optimal for the world - you gain status the more intellectually honest and the more altruistic you are, which seems like a self-correcting system to me.

Personally, I have seen a lot of examples of people who were highly altruistic / altruistic at first glance / passing a lot of purity tests, yet optimized for self-serving outcomes when given the choice, sometimes leading to catastrophic outcomes for their groups in the long term. I have also seen at least a dozen examples of people who broadcast strong signals of their character only to be exposed as heavily immoral. This is also in accordance with what the post about malevolent actors points out:

Such individuals might even deliberately display personality characteristics entirely at odds with their actual personality. In fact, many dictators did precisely that and portrayed themselves—often successfully—as selfless visionaries, tirelessly working for the greater good (e.g., Dikötter, 2019).

So, it seems to me that the real question is whether:

  • your output is negative (including n-order effects),
  • you are not able to override your self-serving incentives when there is a misalignment with the community.

So, I second what was mentioned by NunoSempere that what you [are able to] optimize for is an important question.

Personally, when hiring, one of the things that scares me the most is people of low integrity who will sacrifice organizational values and norms for personal gain (e.g. sabotaging psychological safety to be liked, sabotaging others to gain power, avoiding truth-seeking because of personal preferences, etc.). So, basically, people who do not live up to their ideals (or reported ideals) - again with the caveat that it's about some balance, not 100% purity; we all have our shortcomings.

In my view, a good question to ask yourself (if you are able to admit the answer to yourself) is whether you have a track record of integrity - respecting certain norms even when they do not serve you. For example, I think this is easy to observe nowadays by watching yourself play games: do you respect fair play, do you have respect for the rules, do you cheat or have a desire to cheat, do you celebrate the wins of others (especially competitors), etc.? I think this can be a good proxy for real-world games. Or recall how you behaved toward others and your ideals when you were in a position of power. I think this can give you an idea of what you are optimizing for.

I also heavily recommend reading about virtues/values for utilitarians to see whether following some of the proposals resonates with you, especially Virtues for Real-World Utilitarians by Stefan Schubert and Lucius Caviola.

Thanks for the answer. I think I understand it better now, and I find the reasoning convincing, but in the end it seems quite dependent on context.

I find what you said optimal in environments with not-so-ideal psychological safety, but for teams high in psychological safety it's not really about the things you listed, like

Unfair-feeling criticism

but rather about a truth-seeking approach that makes sure we are really elevating the person. For this, two-sided communication performs better.

Anecdotally, from my perspective, in public feedback rounds it's not so much defensiveness as more like "I think you are onto something, but consider this...", which seems a bit more productive and better for the person than just listening. Then the two models can inform each other. As an extreme example of an outcome from one such round in a team: one person criticized another's public speaking skills and said that person should speak more. But after discussion we all agreed that this was not a good strength for that person to invest in, and that their comparative advantage lay elsewhere - so in the end it was not good feedback. The giver was missing some crucial considerations that indeed changed their feedback. I found that way more productive than one-sided communication would have been. I also think that if it's done with compassion and the intent to help each other, it shouldn't break the atmosphere.

But after your and Amy's answers, I now get that the Doom Circle aims to create a somewhat different environment. It seems to me that the Doom Circle requires less vulnerability thanks to these rules, which makes sense, especially for less psychologically safe teams. So this seems good for people who know each other less well.

I really admire that you shared personal examples. It makes this way more tangible.

I see that in the description you wrote that you should only say "thank you", but isn't it sometimes a bit risky not to discuss the feedback?

It seems that someone's model of you may be quite off because they're missing context that you have, or because of their biases. For example, reading the feedback you've received made me think that some of it could be quite distorted by the preferences of the giver.

Or do you discuss it later on?
