Will Howard

Software Engineer @ Centre for Effective Altruism
901 karma · Working (0-5 years) · London, UK

Bio

I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.

You can contact me at will.howard@centreforeffectivealtruism.org

Comments
I'm having trouble imagining what it would mean to have moral value without consciousness or sentience. Trying to put it together from the two posts you linked:

The definition of sentience from your post:

Sentience: a specific subset of phenomenal consciousness, subjective experiences with positive or negative valence. Pleasures like bodily pleasures and contentment have positive valence, and displeasures like pain or sadness have negative valence.

The key claim in Nico Delon's post:

Step 1. We can conceive of beings who lack sentience but whose lives are sites of valence;

Is the idea here that you can subtract off the experience part of sentience and keep the valence without having anyone to experience it (in the same way that "energy" is a physical property that doesn't require someone to experience it)? Or do you think about this in another way (such as including moral theories that are not valence-based)?

When searching just now I came across this quick take which argues for the exact opposite position in the Parfit example:

A life of just muzak and potatoes isn’t even close to being worth living. … Parfit’s general idea that a life that is barely worth living might be one with no pains and only very minor pleasures seems reasonable enough, but he should have realised that boredom and loneliness are severe pains in themselves.

It’s surprising how people’s intuitions differ on this! That said, I could salvage agreement with @JackM by saying that he’s supposing the boredom and loneliness are noticeably unpleasant, and so this isn’t a good example of a neutral state.

I think the intuition behind the muzak-and-potatoes example is thrown off by supposing you experience exactly the same things for your whole life; even imagining much more exciting music and tastier food as your only experience feels grotesque in a different way. But imagining being in a room with muzak and potatoes for a couple of hours seems fine.

The neutral point of wellbeing is often associated with a state of not much going on in terms of sensory stimulus, e.g. the Parfit “muzak and potatoes” vision of lives barely worth living. This seems natural, because it matches up zero (valenced) sensory input with zero net wellbeing. But there is actually no reason for these two to exactly coincide; the mere lack of stimulation is allowed to feel mildly pleasant or unpleasant.

If the mere lack of stimulation feels pleasant, then the neutral point of wellbeing would correspond to what common sense might recognise as experiencing mild suffering, such as sitting on an uncomfortable chair but otherwise having all your needs attended to (and not sitting for long enough to become bored). And vice versa if the lack of stimulation feels unpleasant by default.

For me, recognising that these two types of neutralness aren’t coupled together pushes in the direction of thinking of the neutral point of wellbeing as mild suffering, rather than “true neutralness” or mild pleasure. If I imagine a situation that is maximally neutral, like walking around a bland city not thinking of anything in particular, that feels comfortably inside life-worth-living territory (at least for a short time). If I try to imagine a situation that is borderline not worth experiencing, I find it hard to do without including some fairly bad suffering. Sitting in an aeroplane is the thing that springs to mind for this, but that is actively very uncomfortable.

Equating stimulation-neutralness and wellbeing-neutralness leads to being quick to declare lives as net negative, helped along by the fact that the extremes of suffering seem more intense than the extremes of pleasure.

You look at a gazelle and say “Well, it spends 80% of its time just wandering around grazing on grass (0 wellbeing points), 10% starving, being chased by predators, or being diseased in some way (-1000 wellbeing points), and 10% doing whatever gazelles do for fun (+500 wellbeing points)”, so its life is net negative overall. But it could be that the large amount of time animals spend doing fairly lowkey activities is quite positive, and I find this more intuitive than the other way around (where neutral activities are slightly negative).
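To make the arithmetic explicit, here's a minimal sketch (the +100 rate for "mildly positive" lowkey activities is my own illustrative assumption, not a figure from the original example):

```python
# Worked version of the gazelle example. The +100 rate for lowkey
# activities is an illustrative assumption, not from the original text.
time_shares = {"grazing": 0.8, "suffering": 0.1, "fun": 0.1}

wellbeing_rates = {
    "neutral grazing": {"grazing": 0, "suffering": -1000, "fun": 500},
    "mildly positive grazing": {"grazing": 100, "suffering": -1000, "fun": 500},
}

for label, rates in wellbeing_rates.items():
    net = sum(time_shares[a] * rates[a] for a in time_shares)
    print(f"{label}: net wellbeing = {net:+.0f}")
# neutral grazing: net wellbeing = -50
# mildly positive grazing: net wellbeing = +30
```

Because grazing dominates the time budget, even a small positive rate on it flips the sign of the total.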

I didn't find this paragraph to be off or particularly misleading fwiw.

It is roughly true (minus what they would have donated otherwise) when thinking in terms of counterfactual impact, and assuming you are an average pledger and would be inspiring other average pledgers (no expected difference in income, attrition, or the effectiveness of the charities donated to).

I think the caveats are sufficiently obvious that the reader could be expected to understand them on their own. For instance, if you convince someone to donate $1000, it seems obvious that they should get most of the credit, but it can still be true that you were counterfactual in their decision.

This seems like the wrong order of magnitude to apply this logic at: $20mn is close to 1% of the money that OpenPhil has disbursed over its lifetime ($2.8bn).

Thanks for reporting!

  • I'll think about how we could handle this one better. It's tricky because the doc itself has a title, and then people often rewrite the title as a heading inside the doc, so there isn't an obvious choice for what to use as the title. But it may be true that the heading case is a lot more common, so we should make that the default.
  • That was indeed intended as a feature, because a lot of people use blank lines as a paragraph break. We can add that to footnotes too.

I'll set a reminder to reply here when we've done these.

Cosmologist: Well, I’m a little uncomfortable with this, but I’ll give it a shot. I will tentatively say that the odds of doom are higher than 1 in a googol. But I don’t know the order of magnitude of the actual threat. To convey this:

I’ll give a 1% chance it’s between 10^-100 and 10^-99

A 1% chance it’s between 10^-99 and 10^-98

A 1% chance it’s between 10^-98 and 10^-97,

And so on, all the way up to a 1% chance it’s between 1 in 10 and 100%.

I think the root of the problem in this paradox is that this isn't a very defensible humble/uniform prior, and if the cosmologist were to think it through more they could come up with one that gives a lower p(doom) (or at least, doesn't look much like the distribution stated initially).

So, I agree with this as a criticism of pop-Bayes in the sense that people will often come up with a quick uniform-prior-sounding explanation for why some unlikely event has a probability that is around 1%. But I think the problem here is that the prior is wrong[1], rather than a failure to consider the whole distribution, seeing as a distribution over probabilities collapses to a single probability anyway.
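To illustrate that last point, here is a minimal sketch (the uniform spread of "p(doom)" values is an arbitrary example distribution, not anything from the post): for a single binary event, drawing $\theta$ from a distribution and then the outcome is indistinguishable from always using the single collapsed probability $\mathbb{E}[\theta]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Draw a "p(doom)" value theta from an arbitrary distribution for each
# trial, then a single binary outcome; compare with always using E[theta].
theta = rng.uniform(0.0, 0.02, size=n)
mixed = rng.random(n) < theta          # theta varies across trials
fixed = rng.random(n) < theta.mean()   # the collapsed single probability

print(mixed.mean(), fixed.mean())      # both ~0.01: indistinguishable
```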

Imo the deeper problem is how to generate the correct prior; this can go wrong due to "pop Bayes", but it also remains a problem when you try to do the actual Bayesian statistics.

Explanation of why I think this is quite an unnatural estimate in this case

Disclaimer: I too have no particular claim on being great at stats, so take this with a pinch of salt

The cosmologist is supposing a model where the universe as it exists is analogous to the result of a single Bernoulli trial, where the "yes" outcome is that the universe is a simulation that will be shut down. Writing this Bernoulli distribution as $\text{Bernoulli}(\theta)$[2], they are then claiming uncertainty over the value of $\theta$. So far so uncontroversial.

They then propose to take the pdf over $\theta$ to be:

$$p(\theta) = \frac{k}{\theta}, \quad \theta \in [10^{-100}, 1] \quad \text{(A)}$$

Where $k$ is a normalisation constant. This is the distribution that results in the property that each OOM has an equal probability[3] (a quick numerical check of this follows the questions below). Questions about this:

  1. Is this the appropriate non-informative prior?
  2. Is this a situation where it's appropriate to appeal to a non-informative prior anyway?
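As the promised check of the equal-probability-per-OOM property, a minimal sketch (the particular decades sampled are arbitrary):

```python
import numpy as np

# Under p(theta) = k/theta on [1e-100, 1], the mass in each decade
# [10^-(n+1), 10^-n] is k * ln(10) ~= 0.01, regardless of n.
k = 1.0 / np.log(1e100)  # normalisation constant for equation (A)
for n in [0, 1, 50, 99]:
    mass = k * (np.log(10.0 ** -n) - np.log(10.0 ** -(n + 1)))
    print(f"decade 1e-{n + 1} to 1e-{n}: mass = {mass:.4f}")  # 0.0100 each
```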

Is this the appropriate non-informative prior?

I will tentatively say that the odds of doom are higher than 1 in a googol. But I don’t know the order of magnitude of the actual threat.

The basis on which the cosmologist chooses this model is an appeal to a kind of "total uncertainty"/non-informative-prior style reasoning, but:

  • They are inserting a concrete value of $10^{-100}$ as a lower bound
  • They are supposing the total uncertainty is over the order of magnitude of the probability, which is quite a specific choice

This results in a model where $\mathbb{E}[\theta] = \frac{1 - 10^{-100}}{\ln(10^{100})} \approx \frac{1}{230}$ in this case, so the expected probability is very sensitive to this lower bound parameter, which is a red flag for a model that is supposed to represent total uncertainty.
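A minimal sketch of this sensitivity, using the closed-form expectation under equation (A) (the alternative lower bounds are arbitrary illustrative choices):

```python
import numpy as np

# E[theta] under p(theta) = k/theta on [lower, 1]:
# E[theta] = k * (1 - lower), with k = 1 / ln(1/lower).
def expected_theta(lower: float) -> float:
    return (1.0 - lower) / np.log(1.0 / lower)

for lower in (1e-100, 1e-50, 1e-10):
    print(f"lower bound {lower:.0e}: E[theta] ~ 1/{1 / expected_theta(lower):.0f}")
# lower bound 1e-100: E[theta] ~ 1/230
# lower bound 1e-50:  E[theta] ~ 1/115
# lower bound 1e-10:  E[theta] ~ 1/23
```

Halving the number of "impossible-to-rule-out" orders of magnitude doubles the expected probability, even though nothing about the evidence has changed.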

There is apparently a generally accepted way to generate non-informative priors for parameters in statistical models, which is to use a Jeffreys prior. The Jeffreys prior[4] for the Bernoulli distribution is:

$$p(\theta) = \frac{1}{\pi\sqrt{\theta(1-\theta)}} \quad \text{(B)}$$

This doesn't look much like equation (A) that the cosmologist proposed. There are parameters where the Jeffreys prior is $\propto 1/\theta$, such as the standard deviation in the normal distribution, but these tend to be scale parameters that can range from 0 to $\infty$. Using it for a probability does seem quite unnatural when you contrast it with these examples, because a probability has hard bounds at 0 and 1.
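For concreteness, a minimal sketch using scipy (relying on the standard fact that the Bernoulli Jeffreys prior is the $\text{Beta}(1/2, 1/2)$ distribution):

```python
from scipy import stats

# Jeffreys prior for the Bernoulli parameter: Beta(1/2, 1/2).
jeffreys = stats.beta(0.5, 0.5)

print(jeffreys.mean())      # 0.5, the expected probability under (B)
print(jeffreys.pdf(0.5))    # 2/pi ~= 0.637: finite away from the bounds
# The density blows up at *both* hard bounds 0 and 1, unlike 1/theta,
# which piles its mass near the lower bound only.
```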

Is this a situation where it's appropriate to appeal to a non-informative prior anyway?

Using the recommended non-informative prior (B), we get that the expected probability is 0.5, which makes sense for the class of problems concerned with something that either happens or doesn't, where we are totally uncertain about this.

I expect the cosmologist would take issue with this as well, and say "ok, I'm not that uncertain". Some reasons they would be right to take issue are:

  1. A general prior that "out of the space of things that could be the case, most are not the case"[5] should update the probability towards 0. And in fact massively so, such that in the absence of any other evidence you should think the probability is vanishingly small, as you would for the question "Is the universe riding on the back of a giant turtle?"
  2. The reason to consider this simulation possibility in the first place is not just that it is in principle allowed by the known laws of physics, but that there is a specific argument for why it should be the case. This should update the probability away from 0.

The real problem the cosmologist has is uncertainty in how to incorporate the evidence of (2) into a probability (distribution). Clearly they think there is enough to the argument to not immediately reject it out of hand, or they would put it in the same category as the turtle-universe, but they are uncertain about how strong the argument actually is and therefore how much it should update their default-low prior.

...

I think this deeper problem gets related to the idea of non-informative priors in Bayesian statistics via a kind of linguistic collision.

Non-informative priors are about having a model which you have not yet updated based on evidence, so you are "maximally uncertain" about the parameters. In the case of having evidence only in the form of a clever argument, you might think "well I'm very uncertain about how to turn this into a probability, and the thing you do when you're very uncertain is use a non-informative prior". You might therefore come up with a model where the parameters have the kind of neat symmetry-based uncertainty that you tend to see in non-informative priors (as the cosmologist did in your example).

I think these cases are quite different though, arguably close to being opposites. In the second (the case of having evidence only in the form of a clever argument), the problem is not a lack of information, but that the information doesn't come in the form of observations of random variables. It's therefore hard to come up with a likelihood function based on this evidence, and so I don't have a good recommendation for what the cosmologist should say instead. But I think the original problem of how they end up with a 1 in 230 probability is due to a failed attempt to avoid this by appealing to a non-informative prior over the order of magnitude.

  1. ^

    There is also a meta-problem where the prior will tend to be too high rather than too low, because probabilities can't go below zero, and this leads to people on average being overly spooked by low-probability events.

  2. ^

    $\theta$ being the "true probability". I'm using $\theta$ rather than $p$ because 1) in general, parameters of probability distributions don't need to be probabilities themselves, e.g. the mean of a normal distribution, 2) $\theta$ is a random variable in this case, so talking about the probability of $p$ taking a certain value could be confusing, and 3) it's what is used in the linked Wikipedia article on Jeffreys priors.

  3. ^

    In the sense that each order-of-magnitude interval $[10^{-(n+1)}, 10^{-n}]$ receives probability $\int_{10^{-(n+1)}}^{10^{-n}} \frac{k}{\theta}\,\mathrm{d}\theta = k\ln 10$, independent of $n$.

  4. ^

    There is some controversy about whether this is the right prior to use, but whatever the right one is it would give a similarly non-tiny expected probability (the uniform prior, for instance, also gives exactly 0.5).

  5. ^

    For some things you can make a mutual exclusivity + uncertainty argument for why the probability should be low. E.g. for the case of the universe riding on the back of the turtle, you could consider all the other types of animals it could be riding on the back of, and point out that you have no particular reason to prefer a turtle. For the simulation argument and various other cases it's trickier, because they might be consistent with lots of other things, but you can still appeal to Occam's razor and/or view this as an empirical fact about the universe.

Ok nested bullets should be working now :)

I have thought this might be quite useful to do. I would guess (people can confirm/correct me) that a lot of people have a workflow like:

  1. Edit post in Google doc
  2. Copy into Forum editor, make a few minor tweaks
  3. Realise they want to make larger edits, go back to the Google doc to make these, requiring them to either copy over or merge together the minor tweaks they have made

For this case, being able to import/export both ways would be useful. That said, it's much harder to do in the other direction (we would likely have to build up the Google doc as a series of edits via the API, whereas in our case we can handle the whole post exported as HTML quite naturally), so I wouldn't expect us to do this in the near future, unfortunately.

Yep, images work, and I agree that nested bullet points are the biggest remaining issue. I'm planning to fix that in the next week or two.

Edit: Actually I just noticed the cropping issue: images that are cropped in Google Docs get uncropped when imported. That's pretty annoying. There is no way to carry over the cropping, but we could flag these to make sure you don't accidentally submit a post with the uncropped images.
