Will Howard🔹

Software Engineer @ Centre for Effective Altruism
921 karma · Joined · Working (0-5 years) · London, UK

Bio

I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit for this.

You can contact me at will.howard@centreforeffectivealtruism.org

Comments: 80

Topic contributions: 45

Hi Oscar, thanks for flagging that bug, I'll look into it and reply here when it's fixed.

To answer your other question: The "reads" number should be pretty reliable, we don't tend to get bots that load the page for long enough to trigger the necessary timer events (or deliberately fake these events). The views number is more likely to be off, as there are some bots that can trigger the page view event.
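To make the distinction concrete, here is a minimal client-side sketch of how this kind of view/read tracking generally works (not the Forum's actual code; the endpoint path, event names, and 30-second threshold are all illustrative assumptions):

```typescript
const POST_ID = "example-post-id"; // hypothetical id, for illustration only
const READ_THRESHOLD_MS = 30_000; // assumed threshold; the real value may differ

function sendEvent(type: "view" | "read"): void {
  // Fire-and-forget call to a hypothetical analytics endpoint
  void fetch("/api/analytics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type, postId: POST_ID, at: Date.now() }),
  });
}

// Any client that loads and executes the page triggers this immediately,
// so simple bots can inflate the "views" count.
sendEvent("view");

// Only a client that keeps the page open long enough for the timer to fire
// records a "read", which is why that number is more robust to bot traffic.
window.setTimeout(() => sendEvent("read"), READ_THRESHOLD_MS);
```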

Ah thanks, I didn't know we had that feature. In that case we should be able to fix this when importing; I'll get back to you when it's done.

Unfortunately we don't support aligning text to the centre/right in our editor, so we won't be able to fix this any time soon. Sorry about that.

left/right

Are you sure the text was aligned to the right? I wouldn't expect that to be possible.

I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.

Comparing e.g. AI safety and Global Health brings in a lot more points of contention, which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people's minds, etc.).

Hi Vasco, both of these bugs should be fixed now :)

I'm having trouble imagining what it would mean to have moral value without consciousness or sentience. Trying to put it together from the two posts you linked:

The definition of sentience from your post:

Sentience: a specific subset of phenomenal consciousness, subjective experiences with positive or negative valence. Pleasures like bodily pleasures and contentment have positive valence, and displeasures like pain or sadness have negative valence.

The key claim in Nico Delon's post:

Step 1. We can conceive of beings who lack sentience but whose lives are sites of valence;

Is the idea here that you can subtract off the experience part of sentience and keep the valence without having anyone to experience it (in the same way that "energy" is a physical property that doesn't require someone to experience it)? Or do you think about this in another way (such as including moral theories that are not valence-based)?

When searching just now I came across this quick take which argues for the exact opposite position in the Parfit example:

A life of just muzak and potatoes isn’t even close to being worth living. … Parfit’s general idea that a life that is barely worth living might be one with no pains and only very minor pleasures seems reasonable enough, but he should have realised that boredom and loneliness are severe pains in themselves.

It’s surprising how people’s intuitions differ on this! That said, I could salvage agreement with @JackM by saying that he’s supposing the boredom and loneliness are noticeably unpleasant, and so this isn't a good example of a neutral state.

I think the intuition behind the muzak-and-potatoes example is thrown off by supposing you experience exactly the same things for your whole life; even imagining much more exciting music and tastier food as your only experience feels grotesque in a different way. But imagining being in a room with muzak and potatoes for a couple of hours seems fine.

The neutral point of wellbeing is often associated with a state of not much going on in terms of sensory stimulus, e.g. the Parfit “muzak and potatoes” vision of lives barely worth living. This seems natural, because it matches up zero (valenced) sensory input with zero net wellbeing. But there is actually no reason for these two to exactly coincide, it’s allowed for the mere lack of stimulation to feel mildly pleasant or unpleasant.

If the mere lack of stimulation feels pleasant, then the neutral point of wellbeing would correspond to what common sense might recognise as experiencing mild suffering, such as sitting on an uncomfortable chair but otherwise having all your needs attended to (and not sitting for long enough to become bored). And vice versa if the lack of stimulation feels unpleasant by default.

For me, recognising these two types of neutralness aren’t coupled together pushes in the direction of thinking of the neutral point of wellbeing as mild suffering, rather than “true neutralness” or mild pleasure. If I imagine a situation that is maximally neutral, like walking around a bland city not thinking of anything in particular, that feels comfortably inside life-worth-living territory (at least to do for a short time). If I try to imagine a situation that is borderline not worth experiencing, I find it hard to do without including some fairly bad suffering. Sitting in an aeroplane is the thing that springs to mind for this, but that is actively very uncomfortable.

Equating stimulation-neutralness and wellbeing-neutralness leads to being quick to declare lives as net negative, helped along by the fact that the extremes of suffering seem more intense than the extremes of pleasure.

You look at a gazelle and say “Well, it spends 80% of its time just wandering around grazing on grass (0 wellbeing points), 10% starving, being chased by predators, or being diseased in some way (-1000 wellbeing points), and 10% doing whatever gazelles do for fun (+500 wellbeing points)”, so its life is net negative overall. But it could be that the large amount of time animals spend doing fairly lowkey activities is quite positive, and I find this to be more intuitive than the other way around (where neutral activities are slightly negative).
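As a rough illustration with the made-up wellbeing points above: treating the grazing time as exactly neutral gives

$$0.8 \times 0 + 0.1 \times (-1000) + 0.1 \times (+500) = -50,$$

whereas if that lowkey time is instead mildly positive, say +100 points, the sign flips:

$$0.8 \times 100 + 0.1 \times (-1000) + 0.1 \times (+500) = +30.$$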

I didn't find this paragraph to be off or particularly misleading fwiw.

It is roughly true (minus what they would have donated otherwise) when thinking in terms of counterfactual impact, and assuming you are an average pledger and would be inspiring other average pledgers (no expected difference in income or attrition, or effectiveness of charities donated to).

I think the caveats are sufficiently obvious that the reader could be expected to understand them on their own. For instance, if you convince someone to donate $1000 it seems obvious that they should get most of the credit, but it could still be true that your involvement was counterfactually necessary for their decision.

This seems like the wrong order of magnitude to apply this logic at: $20mn is close to 1% of the money that OpenPhil has disbursed over its lifetime ($2.8b).
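The rough arithmetic behind that comparison, using the two figures above:

$$\frac{\$20\text{mn}}{\$2{,}800\text{mn}} \approx 0.7\%$$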
