All of JesseClifton's Comments + Replies

We at CLR are now using a different definition of s-risks.

New definition:

S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.

Note that it may turn out that the amount of suffering that we can influence is dwarfed by suffering that we can’t influence. By “expectation of suffering in the future” we mean “expectation of action-relevant suffering in the future”.

2
Stefan_Schubert
3y
I'm wondering a bit about this definition. One interpretation of it is that you're saying something like this: "The expected future suffering is X. The risk that event E occurs is an S-risk if and only if E occurring raises the expected future suffering significantly above X."

But I think that definition doesn't work. Suppose that it is almost certain (99.9999999%) that a particular event E will occur, and that it would cause a tremendous amount of suffering. Then the expected future suffering is already very large (if I understand that concept correctly). And, because E is virtually certain to occur, it occurring will not actually bring about suffering in cosmically significant amounts relative to expected future suffering. And yet intuitively this is an S-risk, I'd say.

Another interpretation of the definition is: "The expected future suffering is X. The risk that event E occurs is an S-risk if and only if the difference in suffering between E occurring and E not occurring is significant relative to X." That does take care of that issue, since, by hypothesis, the difference between E occurring and E not occurring is a tremendous amount of suffering.

Alternatively, you may want to say that the risk that E occurs is an S-risk if and only if E occurring brings about a significant amount of suffering relative to what we expect to occur from other causes. That may be a more intuitive way of thinking about this.

A feature of this definition is that the risk of an event E1 occurring can be an S-risk even if it occurring would cause much less suffering than another event E2 would, provided that E1 is much more likely to occur than E2. But if we increase our credence that E2 will occur, then the risk of E1 occurring will cease to be an S-risk, since it no longer will cause a significant amount of suffering relative to expected future suffering. I guess that some would find that unintuitive, and that something being an S-risk shouldn't depend on us adjusting our creden
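(An editorial formalization of the three readings above, using notation not in the original comment: write E[S] for expected total future suffering and E for the event in question.)

1. E[S | E] − E[S] is significant relative to E[S]: E's occurrence raises expected suffering well above the unconditional expectation.
2. E[S | E] − E[S | ¬E] is significant relative to E[S]: the difference E makes, compared against total expected suffering.
3. The suffering attributable to E itself is significant relative to the suffering we expect from other causes.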

I found it surprising that you wrote: …

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.

+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.

Very late here, but a brainstormy thought: maybe one way one could start to make a rigorous case for RDM is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited so you can’t. You can only write down a much simpler model and use that to make a decision.

We want to pick a policy which, in some sense, has low regret... (read more)
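For concreteness, here is a minimal sketch (not from the original comment) of one common way to operationalize "low regret": take a small set of candidate models or priors as stand-ins for the "true" model we can't write down, and pick the policy whose worst-case regret across them is smallest. All models, policies, and payoff numbers below are invented for illustration.

```python
# Minimax regret over a handful of candidate models (illustrative numbers only).
import numpy as np

# expected utility of each policy under each candidate model
#                   model_1  model_2  model_3
payoffs = np.array([[ 10.0,    2.0,    5.0],   # policy A
                    [  6.0,    6.0,    6.0],   # policy B
                    [ 12.0,   -5.0,    4.0]])  # policy C

best_per_model = payoffs.max(axis=0)    # best achievable utility under each model
regret = best_per_model - payoffs       # how much each policy gives up under each model
worst_case_regret = regret.max(axis=1)  # each policy's regret under its least favorable model

print("worst-case regret per policy:", worst_case_regret)
print("minimax-regret choice: policy", "ABC"[worst_case_regret.argmin()])
```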

3
MichaelStJules
3y
This makes sense to me, although I think we may not be able to assume a unique "true" model and prior even after all the time we want to think and use information that's already accessible. I think we could still have deep uncertainty after this; there might still be multiple distributions that are "equally" plausible, but no good way to choose a prior over them (with finitely many, we could use a uniform prior, but this still might seem wrong), so any choice would be arbitrary and what we do might depend on such an arbitrary choice.

For example, how intense are the valenced experiences of insects and how much do they matter? I think no amount of time with access to all currently available information and thoughts would get me to a unique distribution. Some or most of this is moral uncertainty, too, and there might not even be any empirical fact of the matter about how much more intense one experience is than another (I suspect there isn't).

Or, for the US election, I think there was little precedent for some of the considerations in this election (how coronavirus would affect voting and polling), so thinking much more about them could have only narrowed the set of plausible distributions so much. I think I'd still not be willing to commit to a unique AI risk distribution with as much time as I wanted and perfect rationality but only the information that's currently accessible. See also this thread.

nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.

Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.

0
Anthony DiGiovanni
3y
Good point. I would say I meant intensity of the experience, which is distinct both from intensity of the stimulus and moral (dis)value. And I also dislike seeing conflation of intensity with moral value when it comes to evaluating happiness relative to suffering.

Some Bayesian statisticians put together prior choice recommendations. I guess what they call a "weakly informative prior" is similar to your "low-information prior".

Nice comment; I'd also like to see a top-level post.

One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

I’m having difficulty parsing the statement you’ve attributed to me, or mapping it onto what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

2
kbog
6y
Well in this case at least, it is apparent that the differences are caused by how well or poorly supported people's beliefs are. It doesn't say anything about variance in general. Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

whether you are Bayesian or not, it means that the estimate is robust to unknown information

I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?

subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.

Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?

2
kbog
6y
It means that your credence will change little (or a lot) depending on information which you don't have.

For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.

On the other hand, suppose I don't talk to that guy, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about them which is not available to the market. And I have found that there is no overall reason to expect them to beat the market or not - the info is good just as much as it is bad. So I again have a 50% credence. However, if I talk to that one guy who tells me why the company is great, I won't update to 55% credence, I'll update to 51% or not at all.

Both people here are being perfect Bayesians. Before talking to the one guy, they both have 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the mean expectation.
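A toy way to make this difference concrete (my own illustration, not kbog's): treat each person's 50% credence as the mean of a Beta distribution over Pepsi's chance of beating the market, with the distribution's concentration standing in for how much evidence lies behind it. The same new piece of favorable information then moves the diffuse-prior person far more than the well-researched one. The specific parameters are assumptions for illustration.

```python
# Two agents with the same 50% credence but different amounts of underlying
# evidence, modeled as Beta distributions with different concentration.
# Both observe the same single piece of favorable evidence (one "success").
uninformed = (1, 1)     # knows nothing: Beta(1, 1), mean 0.5
researched = (50, 50)   # lots of balanced evidence: Beta(50, 50), mean 0.5

def posterior_mean(a, b, successes=1, failures=0):
    # conjugate Beta-binomial update
    return (a + successes) / (a + b + successes + failures)

print("uninformed agent updates to:     ", posterior_mean(*uninformed))   # ~0.67
print("well-researched agent updates to:", posterior_mean(*researched))   # ~0.505
```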

For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.

Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.

I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.

2
kbog
6y
Yes, whether you are Bayesian or not, it means that the estimate is robust to unknown information. No, subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models. I don't see why you would think otherwise. No, everything that has been written on the optimizer's curse is perfectly compatible with subjective expected utility theory.

But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.

I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.

5
kbog
6y
The expected value of your actions is being estimated. Those estimates are based on subjective probabilities and can be well or poorly supported by evidence.

I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.

I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a ... (read more)

4
kbog
6y
But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse. It doesn't imply that taking the expected value is not the right solution to the idea of cluelessness.

I'll second this. In double-cruxing EV calcs with others, it is clear that they are often quite parameter-sensitive, and that awareness of such parameter sensitivity is rare and does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative-to-quantitative heuristics is quite stressful and frustrating. Results from sufficiently complex EV calcs usually fall prey to ontology failures, i.e., key assumptions turning out to be wrong; in studies of analyst performance in the intelligence community, key assumptions turned out wrong about 25% of the time, and most scenarios have more than four key assumptions.
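(An editorial aside making the quoted figures' implication explicit, assuming each key assumption fails independently with probability 0.25:)

P(all four key assumptions hold) ≈ 0.75^4 ≈ 0.32, so a scenario resting on four or more key assumptions is more likely than not to contain at least one failed assumption.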

Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.

I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.

2
Milan_Griffes
6y
Good point, I was implicitly considering s-risks as a subset of x-risks.

Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.

Right, I'm asking how useful or dangerous your (1) could be if it didn't have very good models of human psychology - and therefore didn't understand things like "humans don't want to be killed".

Great piece, thank you.

Regarding "learning to reason from humans", to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?

Of course, the motivation to act on human preferences is another matter - but I wonder if at least the capability comes by default?

0
Daniel_Dewey
7y
My guess is that the capability is extremely likely, and the main difficulties are motivation and reliability of learning (since in other learning tasks we might be satisfied with lower reliability that gets better over time, but in learning human preferences unreliable learning could result in a lot more harm).
0
WillPearson
7y
My own 2 cents. It depends a bit on what form of general intelligence is made first. There are at least two possible models:

1. A superintelligent agent with a specified goal
2. An external brain lobe

With the first, you need to be able to specify human preferences in the form of a goal, which enables it to pick the right actions. The external brain lobe would start out not very powerful and would not come with any explicit goals, but would be hooked into the human motivational system and develop goals shaped by human preferences. HRAD is explicitly about the first. I would like both to be explored.

Have animal advocacy organizations expressed interest in using SI's findings to inform strategic decisions? To what extent will your choices of research questions be guided by the questions animal advocacy orgs say they're interested in?

1
Jacy
7y
We've been in touch with most EAA orgs (ACE, OPP, ACE top/standout charities) and they have expressed interest. We haven't done many hard pitches so far like, "The research suggests X. We think you should change your tactics to reflect that, by shifting from Y to Z, unless you have evidence we're not aware of." We hope to do that in the future, but we are being cautious and waiting until we have a little more credibility and track record. We have communicated our findings in softer ways to people who seem to appreciate the uncertainty, e.g. "Well, our impression of this social movement is that it's evidence for Z tactics, but we haven't written a public report on that yet and it might change by the time we finish the case study."

I (Jacy) would guess that our research-communication impact will be concentrated in that small group of animal advocacy orgs who are relatively eager to change their minds based on research, and perhaps in an even smaller group (e.g. just OPP and ACE). Their interests do influence us to a large extent, not just because it's where they're more open to changing their minds, but because we see them as intellectual peers. There are some qualifications we account for, such as SI having a longer-term focus (in my personal opinion, not sure they'd agree) than OPP's farmed animal program or ACE. I'd say that the interests of less-impact-focused orgs are only a small factor, since the likelihood of change and potential magnitude of change seem quite small.

Strong agreement. Considerations from cognitive science might also help us to get a handle on how difficult the problem of general intelligence is, and the limits of certain techniques (e.g. reinforcement learning). This could help clarify our thinking on AI timelines as well as the constraints which any AGI must satisfy. Misc. topics that jump to mind are the mental modularity debate, the frame problem, and insight problem solving.

This is a good article on AI from a cog sci perspective: https://arxiv.org/pdf/1604.00289.pdf

0
Kaj_Sotala
7y
Yay, correctly guessed which article that was before clicking on the link. :-)

Yes, I think you're right, at least when prices are comparable.

More quick Bayes: Suppose we have a Beta(0.01, 0.32) prior on the proportion of people who will pledge. I choose this prior because it gives a point-estimate of a ~3% chance of pledging, and a probability of ~95% that the chance of pledging is less than 10%, which seems prima facie reasonable.

Updating on your data using a binomial model yields a Beta(0.01, 0.32 + 14) distribution, which gives a point estimate of < 0.1% and a ~99.9% probability that the true chance of pledging is less than 10%.
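A minimal sketch of the update just described, assuming the data behind the "+ 14" are 0 pledges out of 14 people asked (the conjugate Beta-binomial reading):

```python
# Beta-binomial update for the pledge rate described above.
from scipy.stats import beta

prior_a, prior_b = 0.01, 0.32        # Beta(0.01, 0.32) prior on the pledge rate
pledges, refusals = 0, 14            # assumed data

post_a, post_b = prior_a + pledges, prior_b + refusals

print("prior mean:", prior_a / (prior_a + prior_b))                  # ~0.03, i.e. ~3%
print("prior P(rate < 10%):", beta.cdf(0.10, prior_a, prior_b))      # ~0.95
print("posterior mean:", post_a / (post_a + post_b))                 # < 0.001
print("posterior P(rate < 10%):", beta.cdf(0.10, post_a, post_b))    # ~0.999
```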

Thanks for writing this up.

The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter (http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/) and social desirability bias.

For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like "This study shows that $1 donations to newspaper ... (read more)
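To see why the significance filter pushes estimates upward, here is a toy simulation (all numbers invented for illustration): when the true effect is small relative to the noise, the subset of estimates that clear the significance threshold is badly inflated.

```python
# Toy simulation of the statistical significance filter (Gelman's "Type M error"):
# if only statistically significant estimates get reported, the reported effect
# sizes systematically exaggerate the true effect.
import numpy as np

rng = np.random.default_rng(0)
true_effect, se, n_studies = 0.1, 0.2, 100_000   # small true effect, noisy studies

estimates = rng.normal(true_effect, se, n_studies)   # each study's point estimate
significant = estimates[estimates / se > 1.96]       # keep only positive, "significant" results

print("true effect:                  ", true_effect)
print("mean of all estimates:        ", estimates.mean())     # ~0.10 (unbiased)
print("mean of significant estimates:", significant.mean())   # much larger than 0.10
```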

...as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

I never meant to say that spreading anti-speciesism is the most important thing, just that it's still very important and it's not obvious that its relative value has changed with the election.

Trump may represent an increased threat to democratic norms and x-risk, but that doesn't mean the marginal value of working in those areas has changed. Perhaps it has. We'd need to see concrete examples of how EAs who previously had a comparative advantage in helping animals now can do better by working on these other things.

my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run

This may be true of massive systemic changes for an... (read more)

1
Ben Pace
7y
I'd like to see an analysis of exactly what the opportunity costs are there, before endorsing one. This analysis has no differential analysis, and as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

Agreed that large updates about things like the prevalence of regressive attitudes and the fragility of democracy should have been made before the election. But Trump's election itself has changed many EA-relevant parameters - international cooperation, x-risk, probability of animal welfare legislation, environmental policy, etc. So there may be room for substantial updates on the fact that Trump and a Republican Congress will be governing.

That said, it's not immediately obvious to me how the marginal value of any EA effort has changed, and I worry about major updates being made out of a kneejerk reaction to the horribleness of someone like Trump being elected.

I'd be interested to hear a case for moving from animal advocacy to politics. If your comparative advantage was in animal advocacy before the election, it's not immediately obvious to me that switching makes sense.

In the short term, animal welfare concerns dominate human concerns, and your marginal contribution to animal welfare via politics is unclear: welfare reform in the US is happening mostly through corporate reform, and it's dubious that progressive politics is even good for wild animals due to the possible harms of environmentalism.

Looking far... (read more)

8
Qiaochu_Yuan
7y
The case is for defending the conditions under which it's even possible to have a group of privileged people sitting around worrying about animal advocacy while the world is burning. To the extent that you think 1) Trump is a threat to democratic norms (as described e.g. by Julia Galef) / risks nuclear war etc. and isn't just a herald of more conservative policy, and 2) most liberals galvanized by the threat of Trump are worrying more about the latter than the former, there's room for EAs to be galvanized by the threat of Trump in a more bipartisan way, as described e.g. by Paul Christiano.

(In general, my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run, and that I find it very difficult to justify working directly to save animals now relative to working to help humans get their shit more together.)

Thank you for opening this discussion.

It’s not clear to me that animal advocacy in general gets downweighted:

-For the short term, wild and farmed animal welfare dominates human concerns. I'd be interested to hear a case that animals are better served by some EAs switching to progressive politics more generally. I'm doubtful that EA contributions to politics would indirectly benefit welfare reform and wild animal suffering efforts. Welfare reform in the United States is taking place largely through corporate reform. The impact of progressive vs conserva... (read more)

What do you mean by "too speculative"? You mean the effects of agriculture on wildlife populations are speculative? The net value of wild animal experience is unclear? Why not quantify this uncertainty and include it in the model? And is this consideration that much more speculative than the many estimates re: the far future on which your model depends?

Also, "I thought it was unlikely that I'd change my mind" is a strange reason for not accounting for this consideration in the model. Don't we build models in the first place because we don't trust such intuitions?

2
MichaelDickens
7y
I don't actually remember what I meant by "too speculative"; that was probably not a helpful thing to say.

There are thousands of semi-plausible things I could try to model. It would not be worth the effort to try to model all of them. I have to do some pre-prioritization about which things to model, so I pick the things that seem the most important. Specifically, I think about the expected value of modeling something (how likely am I to change my mind and how much does it matter if I do?) and how long it will take, to see if it seems worth it. Sometimes I do this explicitly and sometimes implicitly.

I don't really know how likely I am to change my mind, but I can make a rough guess. It wouldn't make sense to try to model thousands of things on the off chance that any of them could change my mind. If you would like to see how factory farming affects wild animal suffering, by all means create a cost-effectiveness estimate and share it.

Thanks for writing this up! Have you taken into account the effects of reductions in animal agriculture on wildlife populations? I didn't see terms for such effects in your cause prioritization app.

0
MichaelDickens
7y
I'm certainly aware of that consideration. I didn't include it because it seemed too speculative and not worth the effort (I thought it was unlikely that I'd change my mind about anything based on the result). I don't have a monopoly on cost-effectiveness calculations though, you can write your own, or even fork my code if you know a little C++.

It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.

This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I'm sure there's a lot more literature on this, that... (read more)

4
[anonymous]
8y
For something so important, it seems this question is hardly ever discussed. The only literature on the issue is a blog post? It seems like it's often taken for granted that x-risk reduction is net positive. I'd like to see more analysis on whether non-negative utilitarians should support x-risk reduction.
3
MichaelDello
8y
Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is. Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

Examining the foundations of the practical reasoning used (and seemingly taken for granted) by many EAs seems highly relevant. Wish we saw more of this kind of thing.

Have you seen Brian Tomasik's work on 1) the potential harms of environmentalism for wild animals, and 2) the effects of climate change on wild animal suffering?

e.g. http://reducing-suffering.org/climate-change-and-wild-animals/ http://reducing-suffering.org/applied-welfare-biology-wild-animal-advocates-focus-spreading-nature/

You don't think directing thousands of dollars to effective animal charities has made any difference? Or spreading effectiveness-based thinking in the animal rights community (e.g. the importance of focusing on farm animals rather than, say, shelter animals)? Or promoting cellular agriculture and plant-based meats?

As for wild animal suffering: there are a few more than 5-10 people who care (the Reducing WAS FB group has 1813 members), but yes, the community is tiny. Why does that mean thinking about how to reduce WAS accomplishes nothing? Don't you thi... (read more)

What do you think of the effort to end factory farming? Or Tomasik et al's work on wild animal suffering? Do you think these increase rather than decrease suffering?

-4
Ant_Colony
8y
I don't know how much they actually change. Everybody kind of agrees factory farms are evil but the behavior doesn't really seem to change much from that. Not sure the EA movement made much of a difference in this regard. As for wild animal suffering, there are ~5-10 people on the planet who care. The rest either doesn't care or cares about the opposite, conserving and/or expanding natural suffering. I am not aware of anything that could reasonably change that. Reducing "existential risk" will of course increase wild animal suffering as well as factory farming, and future equivalents.

I agree that EA as a whole doesn't have coherent goals (I think many EAs already acknowledge that it's a shared set of tools rather than a shared set of values). But why are you so sure that "it's going to cause much more suffering than it prevents"?

-1
Ant_Colony
8y
That was in reference to both humanity and the EA movement, but it's trivially true for the EA movement itself. Assuming they have any kind of directed impact whatsoever, most of them want to reduce extinction risk to get humanity to the stars. We all know what that means for the total amount of future suffering. And yes, there will be some additional "flourishing" or pleasure/happiness/wellbeing, but it will not be optimized. It will not outweigh all the torture-level suffering. People like Toby Ord may use happiness as a rationalization to cause more suffering, but most of them never actually endorse optimizing it. People in EA generally gain status by decrying the technically optimal solutions to this particular optimization problem. There are exceptions of course, like Michael Dickens above. But I'm not even convinced they're doing their own values a favor by endorsing the EA movement at this point.

Thanks a lot! I've made the correction you pointed out.

I'm not objecting to having moral uncertainty about animals. I'm objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say "It depends on how much you value them" rather than discussing how much we should value them.

I didn't intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals "is likely to be emotionally cha... (read more)

1
MichaelDickens
8y
FWIW, I agree that there probably exist objective facts about how to value different animals relative to each other, and people who claim to value 1 hour of human suffering the same as 1000 hours of chicken suffering are just plain wrong. But it's pretty hard to convince people of this, so I try to avoid making arguments that rely on claiming high parity of value between humans and non-human animals. If you're trying to make an argument, you should avoid making assumptions that many readers will disagree with, because then you'll just lose people.

I take issue with the statement "it depends greatly on how much you value a human compared to a nonhuman animal". Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read "it depends greatly on how much we ought to value a human compared to a nonhuman".

Imagine if EAs went around saying "it depends on how much you value an African relative to an American"... (read more)

6
Lila
8y
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to equally value animal and human suffering, discounting for some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren't strict utilitarians. EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
8
Peter Wildeford
8y
I feel like wading into this debate is likely to be emotionally charged and counterproductive, but I think it is reasonable to have a good deal of "moral uncertainty" when it comes to doing interspecies comparisons, whereas there'd be much less uncertainty (though still some) when comparing between humans (e.g., is a pregnant person worth more? Is a healthy young person worth more than an 80-year-old in a coma?). For example, one leading view would be that one chicken has equal worth to one human. Another view would be to discount the chicken by its brain size relative to humans, which would imply a value of 300 chickens per human. There are also many views in between and I'm uncertain which one to take. Sure, such moral calculus may seem very crude, but it does not judge the animal merely by species.

I'm not saying any experiment is necessarily useless, but if MFA is going to spend a bunch of resources on another study they should use methods that won't exaggerate effectiveness.

And it's not only that "one should attend to priors in interpretation" - one should specify priors beforehand and explicitly update conditional on the data.

Confidence intervals still don't incorporate prior information and so give undue weight to large effects.
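As a toy illustration of the point (numbers invented, not from any actual study): a normal-normal Bayesian update with a skeptical prior shrinks a nominally "significant" estimate toward zero in a way the confidence interval alone never would.

```python
# Normal-normal conjugate update: a skeptical prior pulls a large, noisy,
# nominally significant estimate most of the way back toward zero.
prior_mean, prior_sd = 0.0, 0.05   # skeptical prior on the treatment effect
estimate, se = 0.30, 0.15          # raw study estimate; "significant" on its own

prior_prec, data_prec = 1 / prior_sd**2, 1 / se**2
post_mean = (prior_prec * prior_mean + data_prec * estimate) / (prior_prec + data_prec)
post_sd = (1 / (prior_prec + data_prec)) ** 0.5

print("raw 95% CI:    ", (estimate - 1.96 * se, estimate + 1.96 * se))  # excludes zero
print("posterior mean:", post_mean)                                     # ~0.03, an order of magnitude smaller
print("posterior sd:  ", post_sd)
```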

2
CarlShulman
8y
Sure, one should attend to priors in interpretation, but that doesn't make the experiment useless. If a pre-registered experiment reliably gives you a severalfold likelihood ratio, you can repeat it or scale it up and overcome significant prior skepticism (although limited by credence in hidden flaws).

I would be especially wary of conducting more studies if we plan on trying to "prove" or "disprove" the effectiveness of ads with so dubious a tool as null hypothesis significance tests.

Even if in a new study we were to reject the null hypothesis of no effect, this would arguably still be pretty weak evidence in favor of the effectiveness of ads.

1
CarlShulman
8y
What are you worried about here? The same studies will give confidence intervals on effect sizes, which are actionable, and reliable significance at a given sample size indicates an effect of a given magnitude.

As prohibitions on methods of animal exploitation - rather than just regulations which allow those forms of exploitation to persist if they're more "humane" - I think these are different than typical welfare reforms. As I say in the post, this is the position taken by abolitionist-in-chief Gary Francione in Rain Without Thunder.

Of course the line between welfare reform and prohibition is murky. You could argue that these are not, in fact, prohibitions on the relevant form of exploitation - namely, raising animals to be killed for food. But in... (read more)

I haven't seen much on welfare reforms in these industries in particular. In the 90s Sweden required that foxes on fur farms be able to express their natural behaviors, but this made fur farming economically unviable and it ended altogether...so I'm not sure what that tells us. Other than that, animals used in fur farming and cosmetics testing are/were subject to general EU animal welfare laws, and laws concerning farm and experimental animals, respectively.

I think welfare having no effect on abolition is a reasonable conclusion. I just want to argue that it isn't obviously counterproductive on the basis of this historical evidence.

Thanks for the comments!

"...we have evidence that welfare reforms lead to more welfare reforms, which might suggest someday they will get us to something close to animal rights, but I think Gary Francione's historical argument that we have had welfare reforms for two centuries without significant actual improvements is a bit stronger...."

My point is that welfare reforms have led not only to more welfare reforms, but prohibitions as well. Even if we disqualify bans on battery cages, veal crates, and gestation crates as prohibitions, there are sti... (read more)

3
zdgroff
8y
All good points, thanks. With the prohibitions on fur and cosmetic testing, do you know if they were preceded by welfare reforms in those respective industries or just in other industries? That could also be evidence for welfare reform leading to abolition, but I don't know that it was the case. It seems likely to me that the effect of welfare reforms on abolition is zero, neither positive nor negative, based on this evidence. Those are all good points about potential biases. Though note that incrementalism and welfarism are not the same. Abolitionists achieved geographical increments in city and state abolition and personal increments in manumission.

In the EU, prohibitions on battery cages, gestation crates, veal crates, and cosmetics testing, and the adoption of the Five Freedoms as a basis for animal welfare policy. In the UK, Austria, Netherlands, Croatia, & Bosnia & Herzegovina, bans on fur farming.

0
Lila
8y
"prohibitions on battery cages, gestation crates, veal crates" Those sound like welfare reforms?

Echo what Issa said. I've been working with Vipul to create articles on animal welfare and rights topics, and it's been a valuable experience. I've learned about Wikipedia, and more importantly I have learned a ton about the animal welfare/rights movement that will inform my own activism. I have already referred a lot to what I've learned and written about in conversations with other activists about what's effective. I think it's really good that now anyone will be able to easily access this information. Plus Vipul's great to work with.

Seems like you ought to conduct the analysis with all of the reasonable priors to see how robust your conclusions are, huh?

0
MichaelDickens
8y
Yeah, I've done some sensitivity analysis to see how the choice of prior affects results; I talk about this some in this essay. In my spreadsheet (which I haven't published yet but will soon), I calculate posteriors for both log-normal and Pareto priors.
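A rough sketch of what such a prior sensitivity check can look like (distributions and numbers invented for illustration, not Michael's actual model): compute the posterior over a cost-effectiveness parameter under each candidate prior, given the same noisy estimate, and compare.

```python
# Grid-based prior sensitivity check: same likelihood, two different priors.
import numpy as np
from scipy import stats

x = np.linspace(0.01, 100, 20_000)                   # grid over cost-effectiveness values
likelihood = stats.lognorm(s=1.0, scale=10).pdf(x)   # noisy estimate centred near 10

priors = {
    "log-normal": stats.lognorm(s=1.5, scale=1).pdf(x),
    "Pareto":     stats.pareto(b=1.5).pdf(x),
}

for name, prior in priors.items():
    post = prior * likelihood
    post /= np.trapz(post, x)                        # normalize on the grid
    print(f"posterior mean under {name} prior:", np.trapz(x * post, x))
```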

"That's not what's happening here, because the case in question is an abstract discussion of a huge policy question regarding what stance we should take in the future, with little time pressure. These are precisely the areas where we should be consequentialist if ever we should be."

Most people's thinking is not nearly as targeted and consequentialist as this. On my model of human psychology, supporting the exploitation of animals in service of third-world development reinforces the belief that animals are for human benefit in general (rather tha... (read more)

3
zdgroff
8y
I answered some of the broader concerns above in my first reply, but I sympathize with Jesse's concern that promoting animal ownership in the developing world makes our support for animals seem unserious. I don't think it's that people look at us and say "hypocrites" or insufficiently absolutist but rather that they look at us and say "ahah, even they think it's okay to own animals, just not if you treat them badly."

I think adopting and spreading some deontic heuristics regarding the exploitation of animals is good from a consequentialist perspective. Presumably, EAs don't consider whether enslaving, murdering, and eating other humans "is for the greater good impartially considered". Even putting that on the table would make EA look much more heartless and crazy than it already does, and risk spreading some very dangerous memes. Likewise, not taking a firm stand against animal exploitation as a development tool makes EA seem less serious about helping animals, and reinforces the idea that animals are here to benefit humans.

0
David_Moss
8y
It's true that we should promote certain heuristics after careful consideration of the consequentialist impact of doing so. That's not what's going on here. Assuming Zach is defending the deontic side-constraint picture, no consequentialist case for being deontic has been made for this.

It's true we should follow certain heuristics in cases where we cannot properly assess the consequentialist case (per two-level consequentialism), e.g. we should eschew lying to and stealing from our friends as a default rule, even when it might appear in a given case that it's consequentially justified, because we can't do a consequentialist calculation about everything all the time and if we try, in certain areas, it'll lead to disaster. That's not what's happening here, because the case in question is an abstract discussion of a huge policy question regarding what stance we should take in the future, with little time pressure. These are precisely the areas where we should be consequentialist if ever we should be. If we're actually consequentialists, then the effects of the policy on the non-humans and the humans actually need to be weighed and taken into account. That's not what the OP seems to be doing. On the second interpretation, it seems to be saying we should never allow animals to be used in this way, regardless of the costs and benefits.

As to the two considerations you mention: I don't find this to carry much weight. For almost everyone in the world, eating other humans is viewed very differently to eating animals and a fortiori to members of the global poor owning and raising a small number of animals. So the worry about "heartless and crazy" does not transfer from human cannibalism to allowing the very poor to own livestock as assets, nor does it seem like this risks shifting people's norms (since almost everyone not on this forum endorses this anyway). I find it very unlikely that this is a serious consideration. Almost everyone is not a vegan, so purely in terms of

A few ideas-

-Consider getting people to think about improving the effectiveness of a cause they already care about first, rather than leading with cause prioritization.

-I see the point about the effectiveness of targeting secular people, but I worry about EA being excluded from mainstream thought in the long run due to this kind of strategy. Just something to think about more carefully.

-Perhaps there needs to be more discussion of effective advocacy as an individual. What is the importance of charisma and other "soft" attributes that are difficult ... (read more)

0
Gleb_T
9y
Nice ideas!

1) On cause prioritization, I think there are already a number of ways that people can improve the effectiveness of their causes, and this is not the unique value proposition that the EA movement offers. I think what we offer that is unique is cause prioritization and data-driven evaluation, not improvement of other causes. I think we should stick to where we provide the most value.

2) I hear your point about the long run in targeting non-religious people, but I think we all see that developed countries - where the vast majority of donors are located - are turning more secular over time. Moreover, the kind of appeal that the EA movement has is most impactful for people who already value truth and reason. This is not to say we should not orient toward attracting religious people as well; I'm just making an argument about effectiveness of outreach. If we want to be most effective in our outreach, I'd say targeting non-religious people is most impactful.

3) Yup, agreed.

4) Yup, agreed.