All of AidanGoth's Comments + Replies

Interesting – thanks for sharing. Yes, agreed on all of this

Are there any experiments offering sedatives to farmed or injured animals?

A friend mentioned to me experiments documented in Compassion, by the Pound, in which farmed chickens (I think broilers?) prefer food with painkillers to food without. I thought this was super interesting because it provides more direct evidence about the subjective pain experienced by chickens than purely behavioural experiments do, via a plausible biological mechanism for detecting pain. This seems useful for identifying animals that experience pain.

Identifying some animals ... (read more)

7
saulius
The invertebrate sentience table (introduced here) has "Self-administers analgesics" as one of the features potentially indicative of phenomenal consciousness, but it's only filled in for honey bees, chickens, and humans. I agree that more such experiments would be useful. This kind of evidence is more directly tied to what we care about (qualia) than most experiments. I think that animals might not eat painkillers to the point of unconsciousness, out of survival instinct: there are substances that act as painkillers in nature, and the trait "eat it until you're unconscious" would be selected against by natural selection. But if they would eat it until unconscious, that would provide good evidence that their lives are worse than non-existence.

Interesting. Thanks for sharing :)

Thanks for sharing. Fyi, I'm getting a "Page not found" error because of the "." at the end of the link. (But once I remove the full stop, it works fine.)

3
freedomandutility
Thanks for pointing that out, fixed!

The next technological revolution could come this century and could last less than a decade

This is a quickly written note that I don't expect to have time to polish.

Summary

This note aims to bound reasonable priors on the date and duration of the next technological revolution, based primarily on the timings of (i) the rise of homo sapiens; (ii) the Neolithic Revolution; (iii) the Industrial Revolution. In particular, the aim is to determine how sceptical our prior should be that the next technological revolution will take place this century and will occur v... (read more)

3
aog
Very cool. You may have seen this but Robin Hanson makes a similar argument in this paper. 

I'm happy to see more discussion of bargaining approaches to moral uncertainty, thanks for writing this! Apologies, this comment is longer than intended -- I hope you don't mind me echoing your Pascalian slogan!

My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent's credences in the moral theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset. Clearly, credences should play... (read more)

2
MichaelPlant
I didn't really explain myself here, but there might be better vs worse regress problems. I haven't worked out my thoughts enough yet to write something useful. 
2
MichaelPlant
Agree the distinction could be tightened up. And yes, the important bit seems to be whether agents will just 'do their own thing' vs consider moral trade (and moral 'trade wars').
2
MichaelPlant
I don't really disagree. However, as I stated, my purpose was to give people 'a feel' for the view that I doubt they would get from Greaves and Cotton-Barratt's paper (and I certainly didn't get when I read it). The idea was to sketch a 'quick-and-dirty' version of the view to see if it was worth doing with greater precision.
9
MichaelPlant
I don't think I understand the thinking here. It seems fairly natural to say "I am 80% confident in theory A, so that gets 80% of my resources, etc.", and then to think about what would happen after that. It's not intuitive to say "I am 80% confident in utilitarianism, that gets 80% 'bargaining power'". But I accept it's an open question, if we want to do something like internal bargaining, what the best version of that is. I do mention the challenge of the disagreement point (see footnote 7). Again, I agree that this is the sort of thing that merits further inquiry. I'm not sold on the 'random dictator point', which, if I understood correctly, is identical to running a lottery where each theory has an X% chance of getting its top choice (where X% represents your credence in that theory). I note in part of section 2 that bargaining agents will likely think it preferable, by their own lights, to bargain over time rather than resolve things with lotteries. It's for this reason I'm also inclined to prefer a 'moral marketplace' over a 'moral parliament': the former is what the sub-agents would themselves prefer.
3
MichaelPlant
Hello Aidan. Thanks for all of these, much food for thought. I'll reply in individual comments to make this more manageable.

Another use of "consequentialism" in decision theory is in dynamic choice settings (i.e. where an agent makes several choices over time, and future choices and payoffs typically depend on past choices). Consequentialist decision rules depend only on the future choices and payoffs and decision rules that violate consequentialism in this sense sometimes depend on past choices.

An example: suppose an agent is deciding whether to take a pleasurable but addictive drug. If the agent takes the drug, they then decide whether to stop taking it or to continue taking ... (read more)

After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions and in particular, whether the mean is ever exponential in the median.

I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean because the median describes "typical" outcomes and means are super unintuitive and hard to reason about for very heavy-tailed distributions. In particular, arguments like those given by Hauke seem mo... (read more)

I don't have a good object-level answer, but maybe thinking through this model can be helpful.

Big picture description: We think that a person's impact is heavy-tailed. Suppose that the distribution of a person's impact is determined by some concave function of hours worked. We want working more hours to increase the mean of the impact distribution, and probably also the variance, given that this distribution is heavy-tailed. But we plausibly want additional hours to affect the distribution less and less, if we're prioritising perfectly (as Lukas sugge... (read more)

3
AidanGoth
After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions and, in particular, whether the mean is ever exponential in the median. I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean because the median describes "typical" outcomes and means are super unintuitive and hard to reason about for very heavy-tailed distributions. In particular, arguments like those given by Hauke seem more applicable to the median than the mean. This suggests that the median is roughly logarithmic in hours worked. It would then require the mean to be exponential in the median for the mean to be linear in hours worked, in which case working 20% less would lose exactly 20% of the expected impact (more if the mean is more convex than exponential in the median, less if it's less than exponential). In the simple example above, the mean is linear in the median, so the mean is logarithmic in hours worked if the median is. But the lognormal distribution might not be heavy-tailed enough, so I wouldn't put too much weight on this. Looking at the Pareto distribution, it seems to be the case that the mean is sometimes more than exponential in the median -- it's less convex for small values and more convex for high values. You'd have to do a bit of work to figure out the scale and whether it's more than exponential over the relevant range, but it could turn out that expected impact is convex in hours worked in this model, which would suggest working 20% less would lose more than 20% of the value. I'm not sure how well the Pareto distribution describes the median though (it seems good for heavy tails but bad for the whole distribution of things), so it might be better to look at something like a lognormal body with a Pareto tail. But maybe that's getting too complicated to be worth it. This seems like an interesting and impo
1
[anonymous]
Thanks Aidan, I'll consider this model when doing any more thinking on this. 
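To give a feel for the mean/median question above, here's a rough sketch (my own illustration, with arbitrary parameter values) using the closed-form medians and means of the lognormal and Pareto distributions.

```python
import math

def lognormal_stats(mu, sigma):
    """Median and mean of a lognormal(mu, sigma) distribution."""
    return math.exp(mu), math.exp(mu + sigma ** 2 / 2)

def pareto_stats(alpha, x_min=1.0):
    """Median and mean of a Pareto(alpha) distribution with scale x_min (mean finite only for alpha > 1)."""
    return x_min * 2 ** (1 / alpha), alpha * x_min / (alpha - 1)

# Lognormal: for fixed sigma, mean = median * exp(sigma^2 / 2),
# so the mean is proportional to (linear in) the median as mu varies.
for mu in (0, 1, 2, 3):
    print("lognormal", lognormal_stats(mu, sigma=2))

# Pareto: as alpha falls towards 1, the median stays below 2 * x_min
# while the mean blows up, so the mean can be far more convex than exponential in the median.
for alpha in (3.0, 2.0, 1.5, 1.1, 1.01):
    print("pareto", pareto_stats(alpha))
```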

Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this:

Prior that we should put weights on arguments and considerations: 60%

Pros:

  • Clarifies the writer's perspective on each of the considerations (65%)
  • Allows for better discussion for reasons x, y, z... (75%)

Cons:

  • Takes extra time (70%)

This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.
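For what it's worth, here's a minimal sketch of the odds-form update this implies; the 60% prior mirrors the example above, but the Bayes factors are made up purely for illustration (chosen so the running probabilities land near the percentages in the example).

```python
def update(prob, bayes_factor):
    """Update a probability by a Bayes factor, via the odds form of Bayes' theorem."""
    prior_odds = prob / (1 - prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Hypothetical Bayes factors for the pro and con considerations above.
p = 0.60
for bf in (1.25, 1.6, 0.8):
    p = update(p, bf)
    print(round(p, 3))  # roughly 0.652, 0.75, 0.706
```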

To see how you can find the Bayes' factors, note that if ... (read more)

2
SeanEngelhart
Sorry for my very slow response! Thanks--this is helpful! Also, I want to note for anyone else looking for the kind of source I mentioned, this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together. (I had heard about the "Question of Evidence," which I described above, from looking at a snippet of the podcast's transcript, but hadn't actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)

Good questions! It's a shame I don't have good answers. I remember finding Spencer Greenberg's framing helpful too but I'm not familiar with other useful practical framings, I'm afraid.

I suggested the Bayes' factor because it seems like a natural choice of the strength/weight of an argument but I don't find it super easy to reason about usually.

The final suggestion I made will often be easier to do intuitively. You can just state your prior at the start and then intuitively update it after each argument/consideration, without any maths. I think this is ... (read more)

1
SeanEngelhart
Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.

Nice post! I like the general idea and agree that a norm like this could aid discussions and clarify reasoning. I have some thoughts that I hope can build on this.

I worry, though, that the (1-5) scale might be too simple or misleading in many cases and that it doesn't quite give us the most useful information. My first concern is that this looks like a cardinal scale (especially given the way you calculate the output), but is it really the case that you should weigh arguments with score 2 twice as much as arguments with score 1, etc.? Some arguments might be much more th... (read more)

2
SeanEngelhart
Great point! I understand the high-level idea behind priors and updating, but I'm not very familiar with the details of Bayes factors and other Bayesian topics. A quick look at Wikipedia didn't feel super helpful... I'm guessing you don't mean formally applying the equations, but instead doing it in a more approximate or practical way? I've heard Spencer Greenberg's description of the "Question of Evidence" (how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?). Are there similar quick, practical framings that could be applied for the purposes described in your comment? Do you know of any good, practical resources on Bayesian topics that would be sufficient for what you described?
Answer by AidanGoth

I think NunoSempere's answer is good, and looking into vNM utility should give you a clearer idea of where people are coming from in these discussions. I would also recommend the Stanford Encyclopedia of Philosophy's article on expected utility theory: https://plato.stanford.edu/entries/rationality-normative-utility/

You make an important and often overlooked point about the Long-Run Arguments for expected utility theory (described in the article above). You might find Christian Tarsney's paper, Exceeding Expectations, interesting and relevant. https://globalprio... (read more)

3
Ben Esche
Thank you very much - I'm part way through Christian Tarsney's paper and am definitely finding it interesting. I'll also have a go at Hilary Greaves' piece. Listening to her on the 80,000 Hours podcast was one thing that contributed to asking this question. She seems (at least there) to accept EV as the obviously right decision criterion, but a podcast probably necessitates simplifying her views!

I found this really motivating and inspiring. Thanks for writing. I've always found the "great opportunity" framing of altruism stretched and not very compelling but I find this subtle reframing really powerful. I think the difference for me is the emphasis on the suffering of the drowning man and his family, whereas "great opportunity" framings typically emphasise how great it would be for YOU to be a hero and do something great. I prefer the appeal to compassion over ego.

I usually think more along Singerian obligation lines and this has led to unhealthy ... (read more)

My reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.

2
Gordon Seidoh Worley
Weird, that sounds strange to me because I don't really regret things since I couldn't have done anything better than what I did under the circumstances or else I would have done that, so the idea of regret awakening compassion feels very alien. Guilt seems more clear cut to me, because I can do my best but my best may not be good enough and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

Agreed. I didn't mean to imply that totalism is the only view sensitive to the mortality-fertility relationship - just that the results could be fairly different on totalism, that it's especially important to see the results on totalism, and that it makes sense to look at totalism before other population ethical views not yet considered. Exploring other population ethical views would be good too!


If parents are trying to have a set number of children (survive to adulthood) then the effects of reducing mortality might not change the total number
... (read more)
3
MichaelPlant
Okay, we're on the same page on all of this. :) A further specific empirical project would involve trying to understand population dynamics in the locations EAs are considering.

This is a great summary of what I was and wasn't saying :)

Thanks for the link - looking forward to reading. Might return to this after reading

You're very welcome! I really enjoyed reading and commenting on the post :)

One thing I can't quite get my head round - if we divide E(C) by E(L), then don't we lose all the information about the uncertainty in each estimate? Are we able to say that the value of averting a death is somewhere between X and Y times that of doubling consumption (within 90% confidence)?

Good question, I've also wondered this and I'm not sure. In principle, I feel like something like the standard error of the mean (the standard deviation of the sampl... (read more)

I wish this preference was more explicit in Founders Pledge's writing. It seems like a substantial value judgment, almost an aesthetic preference, and one that is unintuitive to me!

We don't say much about this because none of our conclusions depends on it but we'll be sure to be more explicit about this if it's decision-relevant. In the particular passage you're interested in here, we were trying to get a sense of the broader SWB benefits of psychedelic use. We didn't find strong evidence for positive effects on experiential... (read more)

2
Milan Griffes
Yes, I haven't looked closely but it seems like a complicated topic. Pollmann-Schult 2018 thinks that the having kids<>life satisfaction relationship depends a lot on the context:
2
Milan Griffes
As far as I can tell, experiential and eudaimonic well-being converge in the limit, but it's important to prioritize eudaimonic well-being along the way to avoid premature optimization. e.g. Jhanic states are more hedonic than cocaine or Twitter, but also more difficult to access.

Hi Milan, thanks very much for your comments (here and on drafts of the report)!

On 1, we don't intend to claim that psychedelics don't improve subjective well-being (SWB), just that the only study (we found) that measured SWB pre- and post-intervention found no effect. This is a (non-conclusive) reason to treat the findings that participants self-report improved well-being with some suspicion.

As I mentioned to you in our correspondence, we think that experiential measures, such as affective balance (e.g. as measured by Positive and Negative Affec... (read more)

2
Milan Griffes
Further elaboration of the rescaling hypothesis and Griffiths et al. 2006 here: https://enthea.net/founders-pledge-report-psychedelics-and-subjective-wellbeing.html
2
Milan Griffes
The rescaling hypothesis and the "no effect from psilocybin-assisted therapy" hypothesis both would explain the "no change in PANAS" result. It seems you're favoring the "no effect" hypothesis. The rescaling hypothesis seems more concordant with other results from Griffiths et al. 2006: * Participants reported an increase in subjective well-being * Community observers noted an improvement in participant attitudes Something like the rescaling hypothesis also fits better with my experience, fwiw.
2
Milan Griffes
I wish this preference was more explicit in Founders Pledge's writing. It seems like a substantial value judgment, almost an aesthetic preference, and one that is unintuitive to me! e.g. favoring affective balance over life satisfaction implies that having children is a bad decision in terms of one's subjective well-being. (If I recall correctly, on average having kids tends to make affective balance go down but life satisfaction go up; many people seem very happy to have had children.)

I've hopefully clarified this in my response to your first comment :)

Thanks for your questions, Siebe!

Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?

Yes, but only relatively briefly. You're right that these kinds of research are more neglected than studies of mental health treatments but we think that the benefits are much smaller in expectation. That's not to say that there couldn't be large benefits from microdosing or flow-through effects, just tha... (read more)

I don't think Greaves' example suffers the same problem actually - if we truly don't know anything about what the possible colours are (just that each book has one colour), then there's no reason to prefer {red, yellow, blue, other} over {red, yellow, blue, green, other}.

In the case of truly having no information, I think it makes sense to use the Jeffreys prior in the box factory case because it's invariant to reparametrisation, so it doesn't matter whether the problem is framed in terms of length, area, volume, or some other parametrisation. I'm not sure what that actually looks like in this case, though.
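To make the reparametrisation worry concrete, here's a minimal Monte Carlo sketch (my own illustration, not from the original comment) of van Fraassen's cube factory, mentioned elsewhere in this thread: a prior that is uniform over side length and a prior that is uniform over volume give different probabilities to the very same event, which is exactly the inconsistency the Jeffreys prior is designed to avoid.

```python
import random

random.seed(0)
n = 200_000

# Event of interest: "the cube's side length is at most 1/2" (equivalently, "its volume is at most 1/8").

# Prior 1: side length uniform on (0, 1].
p_uniform_side = sum(random.random() <= 0.5 for _ in range(n)) / n

# Prior 2: volume uniform on (0, 1]; the side length is then volume ** (1/3).
p_uniform_volume = sum(random.random() ** (1 / 3) <= 0.5 for _ in range(n)) / n

print(round(p_uniform_side, 3), round(p_uniform_volume, 3))  # ~0.5 vs ~0.125 for the same event
```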

1
MaxRa
Hm, but if we don't know anything about the possible colours, the natural prior to assume seems to me to give all colors the same likelihood. It seems arbitrary to decide to group a subsection of colors under the label "other", and pretend like it should be treated like a hypothesis on equal footing with the others in your given set, which are single colors. Yeah, Jeffreys prior seems to make sense here.

Yeah, these aren't great examples because there's a choice of partition which is better than the others - thanks for pointing this out. The problem is more salient if, instead, you suppose that you have no information about how many differently coloured marbles there are and ask what the probability of picking a blue marble is. There are different ways of partitioning the possibilities but no obviously privileged partition. This is how Hilary Greaves frames it here.

Another good example is van Fraassen's cube factory, e.g. described here.

1
MaxRa
Thanks a lot for the pointers! Greaves' example seems to suffer the same problem, though, doesn't it? We have information about the set and distribution of colors, and assigning 50% credence to the color red does not use that information. The cube factory problem does suffer less from this, cool! I wonder if one should simply model this hierarchically, assigning equal credence to the idea that the relevant measure in cube production is side length or volume. For example, we might have information about cube bottle customers that want to fill their cubes with water. Because the customers vary in how much water they want to fit in their cube bottles, it seems to me that we should put more credence into partitioning it according to volume. Or if we'd have some information that people often want to glue the cubes under their shoes to appear taller, the relevant measure would be the side length. Currently, we have no information like this, so we should assign equal credence to both measures.

Thanks for the clarification - I see your concern more clearly now. You're right, my model does assume that all balls were coloured using the same procedure, in some sense - I'm assuming they're independently and identically distributed.

Your case is another reasonable way to apply the maximum entropy principle, and I think it points to another problem with the principle, but I'd frame it slightly differently. I don't think that the maximum entropy principle is actually directly problematic in the case y... (read more)

1
tobycrisford 🔸
I think I disagree with your claim that I'm implicitly assuming independence of the ball colourings. I start by looking for the maximum entropy distribution within all possible probability distributions over the 2^100 possible colourings. Most of these probability distributions do not have the property that balls are coloured independently. For example, if the distribution was a 50% probability of all balls being red, and 50% probability of all balls being blue, then learning the colour of a single ball would immediately tell you the colour of all of the others. But it just so happens that for the probability distribution which maximises the entropy, the ball colourings do turn out to be independent. If you adopt the maximum entropy distribution as your prior, then learning the colour of one tells you nothing about the others. This is an output of the calculation, rather than an assumption. I think I agree with your last paragraph, although there are some real problems here that I don't know how to solve. Why should we expect any of our existing knowledge to be a good guide to what we will observe in future? It has been a good guide in the past, but so what? 99 red balls apparently doesn't tell us that the 100th will likely be red, for certain seemingly reasonable choices of prior. I guess what I was trying to say in my first comment is that the maximum entropy principle is not a solution to the problem of induction, or even an approximate solution. Ultimately, I don't think anyone knows how to choose priors in a properly principled way. But I'd very much like to be corrected on this.

The maximum entropy principle does give implausible results if applied carelessly but the above reasoning seems very strange to me. The normal way to model this kind of scenario with the maximum entropy prior would be via Laplace's Rule of Succession, as in Max's comment below. We start with a prior for the probability that a randomly drawn ball is red and can then update on 99 red balls. This gives a 100/101 chance that the final ball is red (about 99%!). Or am I missing your point here?
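For concreteness, here's a minimal sketch (not in the original comment) of the Laplace's rule of succession calculation referred to above, i.e. the posterior predictive probability under a uniform (Beta(1, 1)) prior on the unknown proportion of red balls.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior predictive P(next observation is a success) under a uniform
    prior on the unknown Bernoulli parameter: (k + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

p_red = rule_of_succession(99, 99)
print(p_red, float(p_red))  # 100/101 ≈ 0.990: the chance the 100th ball is red
```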

Somewhat more formally, we're looking at a Bernoulli t... (read more)

2
tobycrisford 🔸
I think I disagree that that is the right maximum entropy prior in my ball example. You know that you are drawing balls without replacement from a bag containing 100 balls, which can only be coloured blue or red. The maximum entropy prior given this information is that every one of the 2^100 possible colourings {Ball 1, Ball 2, Ball 3, ...} -> {Red, Blue} is equally likely (i.e. from the start the probability that all balls are red is 1 over 2^100). I think the model you describe is only the correct approach if you make an additional assumption that all balls were coloured using an identical procedure, and were assigned to red or blue with some unknown, but constant, probability p. But that is an additional assumption. The assumption that the unknown p is the same for each ball is actually a very strong assumption. If you want to adopt the maximum entropy prior consistent with the information I gave in the set-up of the problem, you'd adopt a prior where each of the 2^100 possible colourings is equally likely. I think this is the right way to think about it anyway. The re-parametrisation example is very nice though, I wasn't aware of that before.

An important difference between overall budgets and job boards is that budgets tell you how all the resources are spent whereas job boards just tell you how (some of) the resources are spent on the margin. EA could spend a lot of money on some area and/or employ lots of people to work in that area without actively hiring new people. We'd miss that by just looking at the job board.

I think this is a nice suggestion for getting a rough idea of EA priorities but because of this + Habryka's observation that the 80k job board is not representative of new jobs in and around EA, I'd caution against putting much weight on this.

5
[comment deleted]

The LaTeX isn't displaying well (for me, at least!), which makes this really hard to read. You just need to press 'ctrl'/'cmd' and '4' for inline LaTeX and 'ctrl'/'cmd' and 'M' for block :)

Answer by AidanGoth

I found the answers to this question on stats.stackexchange useful for thinking about and getting a rough overview of "uninformative" priors, though it's mainly a bit too technical to be able to easily apply in practice. It's aimed at formal Bayesian inference rather than more general forecasting.

In information theory, entropy is a measure of (lack of) information - high-entropy distributions carry little information. That's why the principle of maximum entropy, as Max suggested, can be useful.

Another meta answer is to use Jeffreys pr... (read more)

2
MaxRa
I'm confused about the partition problem you linked to. Both examples in that post seem to be instances where in one partition available information is discarded. Answer 1. seems to simply discard information about the algorithm that produces the result, i.e. that it depends on the color of the marbles. The same holds for the other example in the blogpost, where the information about the number of possible planets is ignored in one partition.

Reflecting on this example and your x-risk questions, this highlights the fact that in the beta(0.1,0.1) case, we're either very likely fine or really screwed, whereas in the beta(20,20) case, it's similar to a fair coin toss. So it feels easier to me to get motivated to work on mitigating the second one. I don't think that says much about which is higher priority to work on, though, because reducing the risk in the first case could be super valuable. The value of information from narrowing uncertainty in the first case seems much higher, though.

1
matthewp
Nice example, I see where you're going with that. I share the intuition that the second case would be easier to get people motivated for, as it represents more of a confirmed loss. However, as your example shows, the first case could actually lead to an 'in it together' effect on co-ordination - assuming the information is taken seriously, which is hard because, in advance, this kind of situation could encourage a 'roll the dice' mentality.

Nice post! Here's an illustrative example in which the distribution of p matters for expected utility.

Say you and your friend are deciding whether to meet up but there's a risk that you have a nasty, transmissible disease. For each of you, there's the same probability p that you have the disease. Assume that whether you have the disease is independent of whether your friend has it. You're not sure if p has a beta(0.1,0.1) distribution or a beta(20,20) distribution, but you know that the expected value of p is 0.5.

If you meet up, you get... (read more)

2
AidanGoth
Reflecting on this example and your x-risk questions, this highlights the fact that in the beta(0.1,0.1) case, we're either very likely fine or really screwed, whereas in the beta(20,20) case, it's similar to a fair coin toss. So it feels easier to me to get motivated to work on mitigating the second one. I don't think that says much about which is higher priority to work on, though, because reducing the risk in the first case could be super valuable. The value of information from narrowing uncertainty in the first case seems much higher, though.
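The payoffs in the example are truncated above, so here's just a minimal numerical check (my own sketch, not from the original comment) of the probability that meeting up is safe, assuming the two of you have the disease independently conditional on the shared probability p. The distribution of p changes this probability substantially even though E[p] = 0.5 in both cases; whether that changes the decision then depends on the (unspecified) utilities.

```python
def prob_neither_infected(a, b):
    """P(neither person has the disease) = E[(1 - p)^2] for p ~ Beta(a, b),
    using E[p] = a / (a + b) and E[p^2] = a * (a + 1) / ((a + b) * (a + b + 1))."""
    mean = a / (a + b)
    second_moment = a * (a + 1) / ((a + b) * (a + b + 1))
    return 1 - 2 * mean + second_moment

print(round(prob_neither_infected(0.1, 0.1), 3))  # ~0.458: mostly "both fine" or "both have it"
print(round(prob_neither_infected(20, 20), 3))    # ~0.256: close to two independent fair coin flips
```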

Thanks, this is a good criticism. I think I agree with the main thrust of your comment but in a bit of a roundabout way.

I agree that focusing on expected value is important and that ideally we should communicate how arguments and results affect expected values. I think it's helpful to distinguish between (1) the expected value estimates that our models output and (2) the overall expected value of an action/intervention, which is informed by our models and arguments etc. The Guesstimate model is so speculative that it doesn't actually do that much wor... (read more)

Thanks for raising this. It's a fair question but I think I disagree that the numbers you quote should be in the top level summary.

I'm wary of overemphasising precise numbers. We're really uncertain about many parts of this question and we arrived at these numbers by making many strong assumptions, so these numbers don't represent our all-things-considered-view and it might be misleading to state them without a lot of context. In particular, the numbers you quote came from the Guesstimate model, which isn't where the bulk of the wo... (read more)

Thanks! I appreciate your wariness of overemphasizing precise numbers and I agree that it is important to hedge your estimates in this way.

However, none of the claims in the bullet you cite give us any indication of the expected value of each intervention. For two interventions A and B, all of the following is consistent with the expected value of A being astronomically higher than the expected value of B:

  • B is better than A in most of the most plausible scenarios
  • On most models the difference in cost-effectiveness is small (within 1 or 2 orders of magnitude
... (read more)

Thanks for this. I think this stems from the same issue as your nitpick about AMF bringing about outcomes as good as saving lives of children under 5. The Founders Pledge Animal Welfare Report estimates that THL historically brought about outcomes as good as moving 10 hen-years from battery cages to aviaries per dollar, so we took this as our starting point and that's why this is framed in terms of moving hens from battery cages to aviaries. We should have been clearer about this though, to avoid suggesting that the only outcomes of THL are shifts from battery cages to aviaries.

9
saulius
Note that (unless I missed something) your animal welfare report commits this same minor mistake of assuming that all hens used by companies that made cage-free commitments were in battery cages. While I think that's true for the majority of hens, some of them were already in cage-free systems, and some were in enriched cages. But this is more than outweighed by some very conservative assumptions. E.g., that THL's work only moved policies forward by 1 year or something like that. So it's no big deal :)

Thanks for this comment, you raise a number of important points. I agree with everything you've written about QALYs and DALYs. We decided to frame this in terms of DALYs for simplicity and familiarity. This was probably just a bit confusing though, especially as we wanted to consider values of well-being (much) less than 0 and, in principle, greater than 1. So maybe a generic unit of hedonistic well-being would have been better. I think you're right that this doesn't matter a huge amount because we're uncertain over many orders of magni... (read more)

Yes, feeling much better now fortunately! Thanks for these thoughts and studies, Derek.

Given our time constraints, we did make some judgements relatively quickly, but in a way that seemed reasonable for the purposes of deciding whether to recommend AfH. So this can certainly be improved, and I expect your suggestions to be helpful in doing so. This conversation has also made me think it would be good to explore six-monthly/quarterly/monthly retention rates rather than annual ones - thanks for that. :)

Our retention rates for StrongMinds were also based partly... (read more)

Yes, we had physical health problems in mind here. I appreciate this isn't clear though - thanks for pointing out. Indeed, we are aware of the underestimation of the badness of mental health problems and aim to take this into account in future research in the subjective well-being space.

Thanks very much for this thoughtful comment and for taking the time to read and provide feedback on the report. Sorry about the delay in replying - I was ill for most of last week.

1. Yes, you're absolutely right. The current bounds are very wide and they represent extreme, unlikely scenarios. We're keen to develop probabilistic models in future cost-effectiveness analyses to produce e.g. 90% confidence intervals and carry out sensitivity analyses, probably using Guesstimate or R. We didn't have time to do so for this project but this is hig... (read more)

2
Derek
Thanks Aidan! Hope you're feeling better now. Most of your comments sound about right. On retention rates: Your general methods seem to make sense, since one would expect gradual tapering off of benefits, but your inputs seem even more optimistic than I originally thought. I'm not sure Strong Minds is a great benchmark for retention rates, partly because of the stark differences in context (rural Uganda vs UK cities), and partly because IIRC there were a number of issues with SM's study, e.g. a non-randomised allocation and evidence of social desirability bias in outcome measurement, plus of course general concerns related to the fact it was a non-peer-reviewed self-evaluation. Perhaps retention rates of effects from UK psychotherapy courses of similar duration/intensity would be more relevant? But I haven't looked at the SM study for about a year, and I haven't looked into other potential benchmarks, so perhaps yours was a sensible choice. Also not a great benchmark in a UK context, but Haushofer and colleagues recently did a study* of Problem Management+ in Uganda that found no benefits at the end of a year (paper forthcoming), even though it showed effectiveness at the 3 month mark in a previous study in Kenya. *Haushofer, J., Mudida, R., & Shapiro, J. (2019). The Comparative Impact of Cash Transfers and Psychotherapy on Psychological and Economic Well-being. Working Paper. Available upon request.

Here's another organisation that detects landmines with rats: https://www.apopo.org/en

Can't comment on cost-effectiveness compared to other similar organisations but it won a Skoll Award for Social Entrepreneurship in 2009 http://skoll.org/organization/apopo/ http://skoll.org/about/skoll-awards/ https://en.m.wikipedia.org/wiki/Skoll_Foundation#The_Skoll_Awards_for_Social_Entrepreneurship

Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT which goes into the details more than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.

Great -- I'm glad you agree!

I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven't thought about this loads though, so this opinion is not super robust.

Just to tie it back to the original question, whether we prioritise x-risk or WAS will depend on the agents who exist, obviously. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have very very low (in my opinion, unjustifiably low) cre... (read more)

I'm making a fresh comment to make some different points. I think our earlier thread has reached the limit of productive discussion.

I think your theory is best seen as a metanormative theory for aggregating both well-being of existing agents and the moral preferences of existing agents. There are two distinct types of value that we should consider:

prudential value: how good a state of affairs is for an agent (e.g. their level of well-being, according to utilitarianism; their priority-weighted well-being, according to prioritarianism).

moral value: how good ... (read more)

1
[anonymous]
I very much agree with these points you make. About choice dependence: I'll leave that up to every person for themselves. For example, if everyone strongly believes that the critical levels should be choice set independent, then fine, they can choose independent critical levels for themselves. But the critical levels indeed also reflect moral preferences, and can include moral uncertainty. So for example someone with a strong credence in total utilitarianism might lower his or her critical level and make it choice set independent. About the extreme preferences: I suggest people can choose a normalization procedure, such as variance normalization (cf. Owen Cotton-Barratt (http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf) and here: https://stijnbruers.wordpress.com/2018/06/06/why-i-became-a-utilitarian/ "It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases." Indeed!

I'm not entirely sure what you mean by 'rigidity', but if it's something like 'having strong requirements on critical levels', then I don't think my argument is very rigid at all. I'm allowing for agents to choose a wide range of critical levels. The point is though, that given the well-being of all agents and critical levels of all agents except one, there is a unique critical level that the last agent has to choose, if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents ch... (read more)

0
[anonymous]
I honestly don't see yet how setting a high critical level to avoid the repugnant sadistic conclusion would automatically result in counter-intuitive problems with lexicality of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (for me your example and calculations are still unclear: what is the choice set? What is the distribution of utilities in each possible situation?) With rigidity I indeed mean having strong requirements on critical levels. Allowing to choose critical levels dependent on the choice set is an example that introduces much more flexibility. But again, I'll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently from the choice set. That's fine, but we should accept the freedom of others not to do so.

Thanks for the reply!

I agree that it's difficult to see how to pick a non-zero critical level non-arbitrarily -- that's one of the reasons I think it should be zero. I also agree that, given critical level utilitarianism, it's plausible that the critical level can vary across people (and across the same person at different times). But I do think that whatever the critical level for a person in some situation is, it should be independent of other people's well-being and critical levels. Imagine two scenarios consisting of the same group of people: in each, ... (read more)

0
[anonymous]
I guess your argument fails because it still contains too much rigidity. For example: the choice of critical level can depend on the choice set: the set of all situations that we can choose. I have added a section in my original blog post, which I copy here. <0. However, suppose another situation S2 is available for us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born. If instead of situation S2, another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1). In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set.>> So suppose we can choose between two situations. In situation A, one person has utility 0 and another person has utility 30. In situation Bq, the first person has utility -10

Nice post! I enjoyed reading this but I must admit that I'm a bit sceptical.

I find your variable critical level utilitarianism troubling. Having a variable critical level seems OK in principle, but I find it quite bizarre that moral patients can choose what their critical value is i.e. they can choose how morally valuable their life is. How morally good or bad a life is doesn't seem to be a matter of choice and preferences. That's not to say people can't disagree about where the critical level should be, but I don't see why this disagreement should reflect... (read more)

1
[anonymous]
Thanks for your comments “In particular, you'll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.” But choosing a mind-independent critical level seems difficult. By what other means could we determine a critical level? And why should that critical level be the same for everyone and the same in all possible situations? If we can’t find an objective rule to select a universal and constant critical level, picking a critical level introduces an arbitrariness. This arbitrariness can be avoided by letting everyone choose for themselves their own critical levels. If I choose 5 as my critical level, and you choose 10 for your critical level, these choices are in a sense also arbitrary (e.g. why 5 and not 4?) but at least they respect our autonomy. Furthermore: I argued elsewhere that there is no predetermined universal critical level: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/ “If you don't allow any, then I am free to choose a low negative critical level and live a very painful life, and this could be morally good. But that's more absurd than the sadistic repugnant conclusion, so you need some constraints.” I don’t think you are free to choose a negative critical level, because that would mean you would be ok to have a negative utility, and by definition that is something you cannot want. If your brain doesn’t like pain, you are not free to choose that from now on you like pain. And if your brain doesn’t want to be altered such that it likes pain, you are not free to choose to alter your brain. Neither are you free to invert your utility function, for example. “You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to