Are there any experiments offering sedatives to farmed or injured animals?
A friend mentioned to me experiments documented in Compassion, by the Pound in which farmed chickens (I think broilers?) prefer food with painkillers to food without. I thought this was super interesting as it provides more direct evidence about the subjective pain experienced by chickens than merely behavioural experiments, via a plausible biological mechanism for detecting pain. This seems useful for identifying animals that experience pain.
Identifying some animals ...
This is a quickly written note that I don't expect to have time to polish.
This note aims to bound reasonable priors on the date and duration of the next technological revolution, based primarily on the timings of (i) the rise of Homo sapiens; (ii) the Neolithic Revolution; (iii) the Industrial Revolution. In particular, the aim is to determine how sceptical our prior should be that the next technological revolution will take place this century and will occur v...
I'm happy to see more discussion of bargaining approaches to moral uncertainty, thanks for writing this! Apologies, this comment is longer than intended -- I hope you don't mind me echoing your Pascalian slogan!
My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent's credences in the moral theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset. Clearly, credences should play...
Another use of "consequentialism" in decision theory is in dynamic choice settings (i.e. where an agent makes several choices over time, and future choices and payoffs typically depend on past choices). Consequentialist decision rules depend only on the future choices and payoffs and decision rules that violate consequentialism in this sense sometimes depend on past choices.
An example: suppose an agent is deciding whether to take a pleasurable but addictive drug. If the agent takes the drug, they then decide whether to stop taking it or to continue taking ...
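Here's a minimal sketch of that distinction (my own toy payoffs, not from the literature):

```python
# Toy sketch of the drug example (illustrative, made-up payoffs).
# A consequentialist rule at the second choice point looks only at the
# payoffs that lie ahead; a rule that violates consequentialism in this
# sense can also condition on the past (e.g. an earlier plan).

def consequentialist_choice(future_payoffs):
    # Depends only on future payoffs.
    return max(future_payoffs, key=future_payoffs.get)

def resolute_choice(future_payoffs, past_plan):
    # Depends on the past: sticks to the plan made before taking the drug.
    return past_plan

future_payoffs = {"stop": 5, "continue": 7}  # continuing looks better now

print(consequentialist_choice(future_payoffs))            # 'continue'
print(resolute_choice(future_payoffs, past_plan="stop"))  # 'stop'
```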
After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions, and in particular whether the mean is ever exponential in the median.
I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean, because the median describes "typical" outcomes, and means are super unintuitive and hard to reason about for very heavy-tailed distributions. In particular, arguments like those given by Hauke seem mo...
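As a quick illustration of how far apart these can be (a toy lognormal example, not tied to any particular impact data):

```python
import numpy as np

rng = np.random.default_rng(0)

# For a lognormal, median = exp(mu) and mean = exp(mu + sigma^2 / 2),
# so mean = median * exp(sigma^2 / 2): the gap explodes as the tail fattens.
for sigma in [0.5, 1.0, 2.0, 3.0]:
    samples = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)
    print(f"sigma={sigma}: median ~ {np.median(samples):.2f}, "
          f"mean ~ {np.mean(samples):.2f}")
```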
I don't have a good object-level answer, but maybe thinking through this model can be helpful.
Big picture description: We think that a person's impact is heavy-tailed. Suppose that the distribution of a person's impact is determined by some concave function of hours worked. We want working more hours to increase the mean of the impact distribution, and probably also the variance, given that this distribution is heavy-tailed. But we plausibly want additional hours to affect the distribution less and less, if we're prioritising perfectly (as Lukas sugge...
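A minimal simulation of this picture (all functional forms and numbers here are my illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def impact_samples(hours, sigma=1.5, n=1_000_000):
    # Assumed: the mean of a lognormal impact distribution is concave
    # (logarithmic) in hours worked, so extra hours help less and less.
    mean_impact = 10 * np.log1p(hours)
    mu = np.log(mean_impact) - sigma**2 / 2  # lognormal location matching that mean
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

for h in [20, 40, 60, 80]:
    x = impact_samples(h)
    print(f"{h} hours: mean ~ {x.mean():.1f}, "
          f"99th percentile ~ {np.percentile(x, 99):.1f}")
```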
Sorry for the slow reply. I don't have a link to any examples, I'm afraid, but I just mean something like this:
Prior that we should put weights on arguments and considerations: 60%
Pros:
- Clarifies the writer's perspective on each of the considerations (65%)
- Allows for better discussion for reasons x, y, z... (75%)
Cons:
- Takes extra time (70%)
This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.
To see that you can find the Bayes' factors, note that if ...
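For concreteness, here's the calculation I have in mind, using the toy numbers above (my own illustration):

```python
def bayes_factor(prior, posterior):
    # The Bayes' factor implied by an update is the ratio of
    # posterior odds to prior odds.
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# 60% prior moving to 65% after the first consideration:
print(bayes_factor(0.60, 0.65))  # ~1.24, i.e. weak evidence in favour
```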
Good questions! It's a shame I don't have good answers. I remember finding Spencer Greenberg's framing helpful too but I'm not familiar with other useful practical framings, I'm afraid.
I suggested the Bayes' factor because it seems like a natural choice for the strength/weight of an argument, but I don't usually find it super easy to reason about.
The final suggestion I made will often be easier to do intuitively. You can just state your prior at the start and then intuitively update it after each argument/consideration, without any maths. I think this is ...
Nice post! I like the general idea and agree that a norm like this could aid discussions and clarify reasoning. I have some thoughts that I hope can build on this.
I worry, though, that the (1-5) scale might be too simple or misleading in many cases, and that it doesn't quite give us the most useful information. My first concern is that this looks like a cardinal scale (especially given the way you calculate the output), but is it really the case that you should weigh arguments with score 2 twice as much as arguments with score 1, etc.? Some arguments might be much more th...
I think NunoSempere's answer is good and looking at vNM utility should give you a clearer idea of where people are coming from in these discussions. I would also recommend the Stanford Encyclopedia of Philosophy's article on expected utility theory: https://plato.stanford.edu/entries/rationality-normative-utility/
You make an important and often overlooked point about the Long-Run Arguments for expected utility theory (described in the article above). You might find Christian Tarsney's paper, Exceeding Expectations, interesting and relevant. https://globalprio...
I found this really motivating and inspiring. Thanks for writing. I've always found the "great opportunity" framing of altruism stretched and not very compelling but I find this subtle reframing really powerful. I think the difference for me is the emphasis on the suffering of the drowning man and his family, whereas "great opportunity" framings typically emphasise how great it would be for YOU to be a hero and do something great. I prefer the appeal to compassion over ego.
I usually think more along Singerian obligation lines and this has led to unhealthy ...
My reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.
Agreed. I didn't mean to imply that totalism is the only view sensitive to the mortality-fertility relationship - just that the results could be fairly different on totalism, that it's especially important to see the results on totalism, and that it makes sense to look at it before other population ethical views not yet considered. Exploring other population ethical views would be good too!
If parents are trying to have a set number of children (who survive to adulthood), then the effects of reducing mortality might not change the total number...
You're very welcome! I really enjoyed reading and commenting on the post :)
One thing I can’t quite get my head round - if we divide E(C) by E(L), then don’t we lose all the information about the uncertainty in each estimate? Are we able to say that the value of averting a death is somewhere between X and Y times that of doubling consumption (within 90% confidence)?
Good question, I've also wondered this and I'm not sure. In principle, I feel like something like the standard error of the mean (the standard deviation of the sampl...
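One way to keep the uncertainty, sketched with made-up inputs (not the report's actual figures): sample both quantities and look at quantiles of the ratio, rather than only dividing the two means.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Made-up lognormal uncertainty over the two values (illustrative only).
value_of_averting_death = rng.lognormal(mean=np.log(100), sigma=0.5, size=n)
value_of_doubling_consumption = rng.lognormal(mean=np.log(1), sigma=0.5, size=n)

ratio = value_of_averting_death / value_of_doubling_consumption
lo, hi = np.percentile(ratio, [5, 95])
print(f"90% interval for the ratio: {lo:.0f}x to {hi:.0f}x")
```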
I wish this preference was more explicit in Founders Pledge's writing. It seems like a substantial value judgment, almost an aesthetic preference, and one that is unintuitive to me!
We don't say much about this because none of our conclusions depends on it but we'll be sure to be more explicit about this if it's decision-relevant. In the particular passage you're interested in here, we were trying to get a sense of the broader SWB benefits of psychedelic use. We didn't find strong evidence for positive effects on experiential...
Hi Milan, thanks very much for your comments (here and on drafts of the report)!
On 1, we don't intend to claim that psychedelics don't improve subjective well-being (SWB), just that the only study (we found) that measured SWB pre- and post-intervention found no effect. This is a (non-conclusive) reason to treat the findings that participants self-report improved well-being with some suspicion.
As I mentioned to you in our correspondence, we think that experiential measures, such as affective balance (e.g. as measured by Positive and Negative Affec...
Thanks for your questions, Siebe!
Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?
Yes, but only relatively briefly. You're right that these kinds of research are more neglected than studies of mental health treatments but we think that the benefits are much smaller in expectation. That's not to say that there couldn't be large benefits from microdosing or flow-through effects, just tha...
I don't think Greaves' example suffers from the same problem, actually - if we truly don't know anything about what the possible colours are (just that each book has one colour), then there's no reason to prefer {red, yellow, blue, other} over {red, yellow, blue, green, other}.
In the case of truly having no information, I think it makes sense to use the Jeffreys prior in the cube factory case because that's invariant to reparametrisation, so it doesn't matter whether the problem is framed in terms of length, area, volume, or some other parameterisation. I'm not sure what that actually looks like in this case, though.
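For reference, the invariance property is just the chain rule applied to the Fisher information (standard textbook material, nothing specific to the cube factory):

```latex
p(\theta) \propto \sqrt{I(\theta)}, \qquad
I(\theta) = \mathbb{E}\left[ \left( \frac{\partial}{\partial \theta} \log f(X \mid \theta) \right)^{2} \right]
```

and under a reparametrisation φ = g(θ):

```latex
I(\varphi) = I(\theta) \left( \frac{d\theta}{d\varphi} \right)^{2}
\quad \Longrightarrow \quad
\sqrt{I(\varphi)} = \sqrt{I(\theta)} \left| \frac{d\theta}{d\varphi} \right|
```

which is exactly the change-of-variables formula for a density, so the prior picks out the same distribution however you parameterise.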
Yeah, these aren't great examples because there's a choice of partition which is better than the others - thanks for pointing this out. The problem is more salient if, instead, you suppose that you have no information about how many different coloured marbles there are and ask what the probability of picking a blue marble is. There are different ways of partitioning the possibilities but no obviously privileged partition. This is how Hilary Greaves frames it here.
Another good example is van Fraassen's cube factory, e.g. described here.
Thanks for the clarification - I see your concern more clearly now. You're right, my model does assume that all balls were coloured using the same procedure, in some sense - I'm assuming the colours are independently and identically distributed.
Your case is another reasonable way to apply the maximum entropy principle and I think it points to another problem with the maximum entropy principle, but I think I'd frame it slightly differently. I don't think that the maximum entropy principle is actually directly problematic in the case y...
The maximum entropy principle does give implausible results if applied carelessly but the above reasoning seems very strange to me. The normal way to model this kind of scenario with the maximum entropy prior would be via Laplace's Rule of Succession, as in Max's comment below. We start with a prior for the probability that a randomly drawn ball is red and can then update on 99 red balls. This gives a 100/101 chance that the final ball is red (about 99%!). Or am I missing your point here?
Somewhat more formally, we're looking at a Bernoulli t...
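A minimal version of that calculation (just the standard Beta-Bernoulli update behind the 100/101 above):

```python
# Laplace's Rule of Succession: start with a uniform Beta(1, 1) prior
# (the maximum entropy distribution on [0, 1]) for the probability that
# a drawn ball is red, then update on the observations.
alpha, beta = 1, 1
reds, non_reds = 99, 0

alpha += reds
beta += non_reds

# Posterior mean = predictive probability that the next ball is red.
print(alpha / (alpha + beta))  # 100/101 ~ 0.990
```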
An important difference between overall budgets and job boards is that budgets tell you how all the resources are spent whereas job boards just tell you how (some of) the resources are spent on the margin. EA could spend a lot of money on some area and/or employ lots of people to work in that area without actively hiring new people. We'd miss that by just looking at the job board.
I think this is a nice suggestion for getting a rough idea of EA priorities but because of this + Habryka's observation that the 80k job board is not representative of new jobs in and around EA, I'd caution against putting much weight on this.
I found the answers to this question on stats.stackexchange useful for thinking about and getting a rough overview of "uninformative" priors, though much of it is a bit too technical to apply easily in practice. It's aimed at formal Bayesian inference rather than more general forecasting.
In information theory, entropy is a measure of (lack of) information - high-entropy distributions have low information content. That's why the principle of maximum entropy, as Max suggested, can be useful.
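A tiny illustration (my own numbers): the uniform distribution maximises entropy, while a sharply peaked one carries much more information.

```python
import numpy as np

def entropy_nats(p):
    # Shannon entropy in nats; assumes all probabilities are positive.
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

print(entropy_nats([0.25, 0.25, 0.25, 0.25]))  # log(4) ~ 1.39 nats: maximal
print(entropy_nats([0.97, 0.01, 0.01, 0.01]))  # ~ 0.17 nats: low entropy
```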
Another meta answer is to use Jeffreys pr...
Reflecting on this example and your x-risk questions: in the beta(0.1,0.1) case, we're either very likely fine or really screwed, whereas the beta(20,20) case is similar to a fair coin toss. So it feels easier to me to get motivated to work on mitigating the second one. I don't think that says much about which is higher priority to work on, though, because reducing the risk in the first case could be super valuable. And the value of information from narrowing the uncertainty seems much higher in the first case.
Nice post! Here's an illustrative example in which the distribution of p matters for expected utility.
Say you and your friend are deciding whether to meet up but there's a risk that you have a nasty, transmissible disease. For each of you, there's the same probability p that you have the disease. Assume that whether you have the disease is independent of whether your friend has it. You're not sure if p has a beta(0.1,0.1) distribution or a beta(20,20) distribution, but you know that the expected value of p is 0.5.
If you meet up, you get...
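To make this concrete with illustrative payoffs (my assumptions: meeting is worth +1 if neither of you has the disease and -10 otherwise): both distributions have mean 0.5, but E[(1-p)^2], the chance you're both healthy, differs, so expected utility differs.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

def expected_utility_of_meeting(a, b):
    # p is the shared disease probability; given p, each of you has the
    # disease independently, so P(both healthy) = E[(1 - p)^2].
    p = rng.beta(a, b, size=n)
    p_both_healthy = np.mean((1 - p) ** 2)
    # Assumed payoffs: +1 for a safe meeting, -10 if either has the disease.
    return p_both_healthy * 1 + (1 - p_both_healthy) * (-10)

print(expected_utility_of_meeting(0.1, 0.1))  # ~ -5.0
print(expected_utility_of_meeting(20, 20))    # ~ -7.2
```

Same mean for p, but quite different expected utilities, which is the point of the example.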
Thanks, this is a good criticism. I think I agree with the main thrust of your comment but in a bit of a roundabout way.
I agree that focusing on expected value is important and that ideally we should communicate how arguments and results affect expected values. I think it's helpful to distinguish between (1) expected value estimates that our models output and (2) the overall expected value of an action/intervention, which is informed by our models and arguments etc. The Guesstimate model is so speculative that it doesn't actually do that much wor...
Thanks for raising this. It's a fair question but I think I disagree that the numbers you quote should be in the top-level summary.
I'm wary of overemphasising precise numbers. We're really uncertain about many parts of this question and we arrived at these numbers by making many strong assumptions, so these numbers don't represent our all-things-considered-view and it might be misleading to state them without a lot of context. In particular, the numbers you quote came from the Guesstimate model, which isn't where the bulk of the wo...
Thanks! I appreciate your wariness of overemphasizing precise numbers and I agree that it is important to hedge your estimates in this way.
However, none of the claims in the bullet you cite give us any indication of the expected value of each intervention. For two interventions A and B, all of the following is consistent with the expected value of A being astronomically higher than the expected value of B:
Thanks for this. I think this stems from the same issue as your nitpick about AMF bringing about outcomes as good as saving lives of children under 5. The Founders Pledge Animal Welfare Report estimates that THL historically brought about outcomes as good as moving 10 hen-years from battery cages to aviaries per dollar, so we took this as our starting point and that's why this is framed in terms of moving hens from battery cages to aviaries. We should have been clearer about this though, to avoid suggesting that the only outcomes of THL are shifts from battery cages to aviaries.
Thanks for this comment, you raise a number of important points. I agree with everything you've written about QALYs and DALYs. We decided to frame this in terms of DALYs for simplicity and familiarity. This was probably just a bit confusing though, especially as we wanted to consider values of well-being (much) less than 0 and, in principle, greater than 1. So maybe a generic unit of hedonistic well-being would have been better. I think you're right that this doesn't matter a huge amount because we're uncertain over many orders of magni...
Yes, feeling much better now fortunately! Thanks for these thoughts and studies, Derek.
Given our time constraints, we did make some judgements relatively quickly, but in a way that seemed reasonable for the purposes of deciding whether to recommend AfH. So this can certainly be improved and I expect your suggestions to be helpful in doing so. This conversation has also made me think it would be good to explore six-monthly/quarterly/monthly retention rates rather than annual ones - thanks for that. :)
Our retention rates for StrongMinds were also based partly...
Thanks very much for this thoughtful comment and for taking the time to read and provide feedback on the report. Sorry about the delay in replying - I was ill for most of last week.
1. Yes, you're absolutely right. The current bounds are very wide and they represent extreme, unlikely scenarios. We're keen to develop probabilistic models in future cost-effectiveness analyses to produce e.g. 90% confidence intervals and carry out sensitivity analyses, probably using Guesstimate or R. We didn't have time to do so for this project but this is hig...
Here's another option, an organisation that detects landmines with rats: https://www.apopo.org/en
I can't comment on cost-effectiveness compared to other similar organisations, but it won a Skoll Award for Social Entrepreneurship in 2009:
http://skoll.org/organization/apopo/
http://skoll.org/about/skoll-awards/
https://en.m.wikipedia.org/wiki/Skoll_Foundation#The_Skoll_Awards_for_Social_Entrepreneurship
Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT which goes into the details more than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.
Great -- I'm glad you agree!
I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven't thought about this loads though, so this opinion is not super robust.
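For anyone unfamiliar, here's a sketch of the kind of normalisation I mean (toy utilities and credences, my own numbers):

```python
import numpy as np

def normalise(u):
    # Rescale a theory's utilities over the option set to mean 0, variance 1.
    u = np.asarray(u, dtype=float)
    return (u - u.mean()) / u.std()

theory_a = [0, 1, 10]      # assumed utilities over three options
theory_b = [5, 0, -1000]   # a theory with much larger raw stakes
credences = [0.6, 0.4]

combined = credences[0] * normalise(theory_a) + credences[1] * normalise(theory_b)
print(combined)            # credence-weighted, variance-normalised utilities
print(combined.argmax())   # the option chosen after normalisation
```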
Just to tie it back to the original question, whether we prioritise x-risk or WAS will depend on the agents who exist, obviously. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have very very low (in my opinion, unjustifiably low) cre...
I'm making a fresh comment to make some different points. I think our earlier thread has reached the limit of productive discussion.
I think your theory is best seen as a metanormative theory for aggregating both the well-being and the moral preferences of existing agents. There are two distinct types of value that we should consider:
prudential value: how good a state of affairs is for an agent (e.g. their level of well-being, according to utilitarianism; their priority-weighted well-being, according to prioritarianism).
moral value: how good ...
I'm not entirely sure what you mean by 'rigidity', but if it's something like 'having strong requirements on critical levels', then I don't think my argument is very rigid at all. I'm allowing agents to choose a wide range of critical levels. The point, though, is that given the well-being of all agents and the critical levels of all agents except one, there is a unique critical level that the last agent has to choose if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents ch...
Thanks for the reply!
I agree that it's difficult to see how to pick a non-zero critical level non-arbitrarily -- that's one of the reasons I think it should be zero. I also agree that, given critical level utilitarianism, it's plausible that the critical level can vary across people (and across the same person at different times). But I do think that whatever the critical level for a person in some situation is, it should be independent of other people's well-being and critical levels. Imagine two scenarios consisting of the same group of people: in each, ...
Nice post! I enjoyed reading this but I must admit that I'm a bit sceptical.
I find your variable critical level utilitarianism troubling. Having a variable critical level seems OK in principle, but I find it quite bizarre that moral patients can choose what their critical level is, i.e. they can choose how morally valuable their life is. How morally good or bad a life is doesn't seem to be a matter of choice and preferences. That's not to say people can't disagree about where the critical level should be, but I don't see why this disagreement should reflect...
Interesting – thanks for sharing. Yes, agreed on all of this.