Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (The point of choosing a “demon” in the example is that we would have good reason to worry the process is adversarial if we’re talking about a demon…)
(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)
It sounds like you reject this kind of thinking:
cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
I don't think that's unreasonable. Personally, I strongly have the intuition expressed in that quote, though I'm definitely not certain that I'll endorse it on reflection.
Wouldn't the better response be to find things we aren't clueless about
The background assumption in this post is that there are n...
Thanks, Ben!
It depends on what the X is. In most real-world cases I don’t think our imprecision ought to be that extreme. (It will also be vague: not “[0,1]” or “(0.01, 0.99)” but “eh, seems like lots of different precise beliefs are defensible as long as they’re not super close to 1 or 0”, and in that state it will feel reasonable to say that we should strictly prefer such an extreme bet.)
But FWIW I do think there are hypothetical cases where incomparability looks correct. Suppose a demon appears to me and says “The F of every X is between 0 and 1. What’...
Some reasons why animal welfare work seems better:
Thanks for this! IMO thinking about what it even means to do good under extreme uncertainty is still underrated.
I don’t see how this post addresses the concern about cluelessness, though.
My problem with the construction analogy is: Our situation is more like, whenever we place a brick we might also be knocking bricks out of other parts of the house. Or placing them in ways that preclude good building later. So we don’t know if we’re actually contributing to the construction of the house on net.
On your takeaway at the bottom, it seems to be: “if someone doi...
We at CLR are now using a different definition of s-risks.
New definition:
S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.
Note that it may turn out that the amount of suffering that we can influence is dwarfed by suffering that we can’t influence. By “expectation of suffering in the future” we mean “expectation of action-relevant suffering in the future”.
I found it surprising that you wrote: …
Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.
+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.
Very late here, but a brainstormy thought: maybe one way one could start to make a rigorous case for RDM is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited so you can’t. You can only write down a much simpler model and use that to make a decision.
We want to pick a policy which, in some sense, has low regret...
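To gesture at how that "low regret" criterion could look numerically, here is a toy sketch: treat a few candidate models as stand-ins for the true model you can't write down, and pick the policy whose worst-case regret across them is smallest. The policies, models, and payoff numbers are entirely made up, and minimax regret is just one possible way to cash out "low regret":

```python
# A toy numerical illustration of the low-regret idea above.
# The candidate "models", policies, and payoffs are all made up;
# they just stand in for the simpler models we can actually write down.
import numpy as np

# expected utility of each policy (rows) under each candidate model (columns)
payoffs = np.array([
    [10.0,  2.0, -5.0],   # policy A
    [ 6.0,  5.0,  1.0],   # policy B
    [ 0.0,  0.0,  0.0],   # policy C (do nothing)
])
policies = ["A", "B", "C"]

# regret of a policy under a model = best achievable payoff under that model
# minus the policy's payoff under that model
best_per_model = payoffs.max(axis=0)
regret = best_per_model - payoffs

# minimax regret: pick the policy whose worst-case regret is smallest
worst_case_regret = regret.max(axis=1)
choice = policies[int(np.argmin(worst_case_regret))]

print(dict(zip(policies, worst_case_regret.tolist())))  # {'A': 6.0, 'B': 4.0, 'C': 10.0}
print("minimax-regret choice:", choice)                 # B
```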
nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.
Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.
Some Bayesian statisticians put together prior choice recommendations. I guess what they call a "weakly informative prior" is similar to your "low-information prior".
Nice comment; I'd also like to see a top-level post.
One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.
Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.
I’m having difficulty parsing the statement you’ve attributed to me, or mapping it onto what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”
It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.
My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.
whether you are Bayesian or not, it means that the estimate is robust to unknown information
I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?
subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.
Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?
For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.
Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.
I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.
But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.
I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.
I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.
I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a ...
I'll second this. In double-cruxing EV calculations with others, it's clear that they are often quite parameter-sensitive, and that awareness of such parameter sensitivity is rare/does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative-to-quantitative heuristics is quite stressful/frustrating. Results from sufficiently complex EV calculations usually fall prey to ontology failures, i.e. key assumptions turning out wrong; in studies of analyst performance in the intelligence community, key assumptions turned out wrong 25% of the time, and most scenarios have more than 4 key assumptions.
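To make the parameter-sensitivity point concrete, here is a minimal sketch with entirely made-up numbers (the intervention, cost, and parameter ranges below are hypothetical placeholders, not from any real EV calc):

```python
# Hypothetical cost-effectiveness estimate: expected value per dollar.
# All parameters and ranges are illustrative, not from any real analysis.
import itertools

def ev_per_dollar(reach, conversion_rate, value_per_conversion, cost):
    return reach * conversion_rate * value_per_conversion / cost

# "Point estimates" someone might plug in during a double crux...
baseline = ev_per_dollar(reach=10_000, conversion_rate=0.02,
                         value_per_conversion=5.0, cost=50_000)

# ...and plausible ranges for each fuzzy parameter.
ranges = {
    "reach": [5_000, 10_000, 20_000],
    "conversion_rate": [0.005, 0.02, 0.05],
    "value_per_conversion": [1.0, 5.0, 20.0],
}

results = [
    ev_per_dollar(r, c, v, cost=50_000)
    for r, c, v in itertools.product(*ranges.values())
]

print(f"baseline: {baseline:.3f} units of value per dollar")
print(f"range across plausible parameters: {min(results):.4f} to {max(results):.2f}")
# The spread covers roughly three orders of magnitude, which is the point:
# conclusions from such calcs can hinge on a few soft qualitative-to-quantitative guesses.
```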
Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.
I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.
Great piece, thank you.
Regarding "learning to reason from humans", to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?
Of course, the motivation to act on human preferences is another matter - but I wonder if at least the capability comes by default?
Strong agreement. Considerations from cognitive science might also help us to get a handle on how difficult the problem of general intelligence is, and the limits of certain techniques (e.g. reinforcement learning). This could help clarify our thinking on AI timelines as well as the constraints which any AGI must satisfy. Misc. topics that jump to mind are the mental modularity debate, the frame problem, and insight problem solving.
This is a good article on AI from a cog sci perspective: https://arxiv.org/pdf/1604.00289.pdf
More quick Bayes: Suppose we have a Beta(0.01, 0.32) prior on the proportion of people who will pledge. I choose this prior because it gives a point estimate of a ~3% chance of pledging, and a probability of ~95% that the chance of pledging is less than 10%, which seems prima facie reasonable.
Updating on your data using a binomial model yields a Beta(0.01, 0.32 + 14) distribution, which gives a point estimate of < 0.1% and a ~99.9% probability that the true chance of pledging is less than 10%.
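For anyone who wants to check the arithmetic, here is a quick sketch of the conjugate update (assuming, as the posterior above implies, that the data were 0 pledges out of 14 people):

```python
# Beta-binomial update sketch. Assumption: the observed data were
# 0 pledges out of 14 people (which is what adding 14 to the second
# Beta parameter implies).
from scipy.stats import beta

a0, b0 = 0.01, 0.32          # prior: Beta(0.01, 0.32)
successes, trials = 0, 14    # assumed data

prior = beta(a0, b0)
posterior = beta(a0 + successes, b0 + trials - successes)

print(prior.mean())        # ~0.03   -> ~3% point estimate
print(prior.cdf(0.10))     # ~0.95   -> P(chance of pledging < 10%) under the prior
print(posterior.mean())    # ~0.0007 -> point estimate < 0.1%
print(posterior.cdf(0.10)) # ~0.999  -> P(true chance of pledging < 10%)
```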
Thanks for writing this up.
The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter (http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/) and social desirability bias.
For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like "This study shows that $1 donations to newspaper ...
...as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."
I never meant to say that spreading anti-speciesism is the most important thing, just that it's still very important and it's not obvious that its relative value has changed with the election.
Trump may represent an increased threat to democratic norms and x-risk, but that doesn't mean the marginal value of working in those areas has changed. Perhaps it has. We'd need to see concrete examples of how EAs who previously had a comparative advantage in helping animals now can do better by working on these other things.
my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run
This may be true of massive systemic changes for an...
Agreed that large updates about things like the prevalence of regressive attitudes and the fragility of democracy should have been made before the election. But Trump's election itself has changed many EA-relevant parameters - international cooperation, x-risk, probability of animal welfare legislation, environmental policy, etc. So there may be room for substantial updates on the fact that Trump and a Republican Congress will be governing.
That said, it's not immediately obvious to me how the marginal value of any EA effort has changed, and I worry about major updates being made out of a kneejerk reaction to the horribleness of someone like Trump being elected.
I'd be interested to hear a case for moving from animal advocacy to politics. If your comparative advantage was in animal advocacy before the election, it's not immediately obvious to me that switching makes sense.
In the short term, animal welfare concerns dominate human concerns, and your marginal contribution to animal welfare via politics is unclear: welfare reform in the US is happening mostly through corporate reform, and it's dubious that progressive politics is even good for wild animals due to the possible harms of environmentalism.
Looking far...
Thank you for opening this discussion.
It’s not clear to me that animal advocacy in general gets downweighted:
- For the short term, wild and farmed animal welfare dominates human concerns. I'd be interested to hear a case that animals are better served by some EAs switching to progressive politics more generally. I'm doubtful that EA contributions to politics would indirectly benefit welfare reform and wild animal suffering efforts. Welfare reform in the United States is taking place largely through corporate reform. The impact of progressive vs conserva...
What do you mean by "too speculative"? You mean the effects of agriculture on wildlife populations are speculative? The net value of wild animal experience is unclear? Why not quantify this uncertainty and include it in the model? And is this consideration that much more speculative than the many estimates re: the far future on which your model depends?
Also, "I thought it was unlikely that I'd change my mind" is a strange reason for not accounting for this consideration in the model. Don't we build models in the first place because we don't trust such intuitions?
It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.
This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I'm sure there's a lot more literature on this, that...
Have you seen Brian Tomasik's work on 1) the potential harms of environmentalism for wild animals, and 2) the effects of climate change on wild animal suffering?
e.g. http://reducing-suffering.org/climate-change-and-wild-animals/ http://reducing-suffering.org/applied-welfare-biology-wild-animal-advocates-focus-spreading-nature/
You don't think directing thousands of dollars to effective animal charities has made any difference? Or spreading effectiveness-based thinking in the animal rights community (e.g. the importance of focusing on farm animals rather than, say, shelter animals)? Or promoting cellular agriculture and plant-based meats?
As for wild animal suffering: there are a few more than 5-10 people who care (the Reducing WAS FB group has 1813 members), but yes, the community is tiny. Why does that mean thinking about how to reduce WAS accomplishes nothing? Don't you thi...
I'm not objecting to having moral uncertainty about animals. I'm objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say "It depends on how much you value them" rather than discussing how much we should value them.
I didn't intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals "is likely to be emotionally cha...
I take issue with the statement "it depends greatly on how much you value a human compared to a nonhuman animal". Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read "it depends greatly on how much we ought to value a human compared to a nonhuman".
Imagine if EAs went around saying "it depends on how much you value an African relative to an American"...
I'm not saying any experiment is necessarily useless, but if MFA is going to spend a bunch of resources on another study they should use methods that won't exaggerate effectiveness.
And it's not only that "one should attend to priors in interpretation" - one should specify priors beforehand and explicitly update conditional on the data.
I would be especially wary of conducting more studies if we plan on trying to “prove” or “disprove” the effectiveness of ads with so dubious a tool as null hypothesis significance testing.
Even if in a new study we were to reject the null hypothesis of no effect, this would arguably still be pretty weak evidence in favor of the effectiveness of ads.
As prohibitions on methods of animal exploitation - rather than just regulations which allow those forms of exploitation to persist if they're more "humane" - I think these are different than typical welfare reforms. As I say in the post, this is the position taken by abolitionist-in-chief Gary Francione in Rain Without Thunder.
Of course the line between welfare reform and prohibition is murky. You could argue that these are not, in fact, prohibitions on the relevant form of exploitation - namely, raising animals to be killed for food. But in...
I haven't seen much on welfare reforms in these industries in particular. In the 90s Sweden required that foxes on fur farms be able to express their natural behaviors, but this made fur farming economically unviable and it ended altogether...so I'm not sure what that tells us. Other than that, animals used in fur farming and cosmetics testing are/were subject to general EU animal welfare laws, and laws concerning farm and experimental animals, respectively.
I think welfare having no effect on abolition is a reasonable conclusion. I just want to argue that it isn't obviously counterproductive on the basis of this historical evidence.
Thanks for the comments!
"...we have evidence that welfare reforms lead to more welfare reforms, which might suggest someday they will get us to something close to animal rights, but I think Gary Francione's historical argument that we have had welfare reforms for two centuries without significant actual improvements is a bit stronger...."
My point is that welfare reforms have led not only to more welfare reforms, but prohibitions as well. Even if we disqualify bans on battery cages, veal crates, and gestation crates as prohibitions, there are sti...
In principle the proposal in that post is supposed to encompass a larger set of bracketing-ish things than the proposal in this post, e.g., bracketing out reasons that are qualitatively weaker in some sense. But the latter kind of thing isn't properly worked out.