Joseph_Chu

An eccentric dreamer and servant of good. Chinese Canadian Christian Liberal and Effective Altruist.

Comments

January Open Thread

A possible explanation is simply that the truth is information that may or may not be useful. With some small probability it could be very useful, even life-saving, information. The ambiguity of the question means that while you may not be happy with the information yourself, it could conceivably benefit others greatly, or not at all. Guaranteed happiness, on the other hand, is much more certain and concrete. At least, that's the way I imagine it.
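
To put that intuition in rough expected-value terms (a toy sketch; the symbols and utilities here are my own assumptions, not part of the original question):

$$E[\text{Truth}] = p \cdot U_{\text{high}} + (1 - p) \cdot U_{\text{low}}, \qquad E[\text{Happiness}] = U_{\text{happy}}$$

For a small probability $p$ of very useful information and a modest $U_{\text{low}}$, the guaranteed $U_{\text{happy}}$ can exceed the gamble's expected value even when $U_{\text{high}}$ is large, and a risk-averse chooser (one with a concave utility function) will discount the gamble further.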

I've had at least one person explain their choice by saying that truth is harder to get than happiness, since they could always figure out a way to be happy on their own.

January Open Thread

Well, the way the question is framed, it seems to help gauge a number of different tendencies. One is obviously whether an individual is aware of the difference between instrumental and terminal goals. Another is what kinds of sacrifices they are willing to make, as well as their degree of risk aversion. In general, I find most people answer Truth, but that when faced with an actual situation of this sort, they tend to show a preference for Happiness.

So far I'm less certain about whether particular groups actually answer it one way or another. It seems like cautious, risk-averse types favour Happiness, while risk-neutral or risk-seeking types favour Truth. My sample size is a bit small to support such generalizations, though.

Probably the most important thing I learn from this question is what kind of decision process people use in situations of ambiguity and uncertainty, as well as how decisive they are.

January Open Thread

So, I have a slate of questions that I often ask people to try to understand them better. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though I remain uncertain about this. I've also posted this question to the Less Wrong open thread, but I'm curious what Effective Altruists in particular would think about it. If you'd rather, you can private message me your answer. Keep in mind that the question is intentionally somewhat ambiguous.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

Blind Spots: Compartmentalizing

I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of the potential person as well?

What I mean by this is, when you kill someone, generally speaking, aren't you also killing all that person's future possible descendants as well? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically have?

So, wouldn't the actual number of QALYs at stake be more like ±infinity, where the sign depends on whether the average life has more net happiness than suffering, and as such is considered worth living?
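
To make the arithmetic behind that explicit (a toy model; the symbols $k$ and $Q$ are my own illustrative assumptions): if each person has on average $k$ descendants, each with $Q$ expected net QALYs, then the total attributable to one potential person is

$$Q \sum_{n=0}^{\infty} k^{n}$$

which diverges to $\pm\infty$ whenever $k \geq 1$, with the sign determined entirely by the sign of $Q$, i.e. by whether the average life is net positive.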

Thus, it seems like the question of abortion can be subsumed under the question of suicide, and of whether to perpetuate or end life in general.

Blind Spots: Compartmentalizing

I also posted this comment at Less Wrong, but I guess I'll post it here as well...

As someone who's held a very nuanced view of abortion, and as a recent EA convert who was thinking about writing on this topic, I'm glad you wrote this. It's probably a better-constructed post than what I would have been able to put together.

The argument in your post, though, seems to assume that we have only two options, either to ban all abortion or to ban none, when in fact we can take a much more nuanced approach.

My own, pre-EA views are nuanced to the extent that I see personhood as something that goes from 0 before conception to 1 at birth, gradually increasing in between. This fits certain facts of pregnancy, such as that twins can form after conception, and we don't consider the twins parts of a single "person" but rather two "persons". Thus, I am inclined to think that personhood cannot begin at conception. On the other hand, infanticide arguments notwithstanding, it seems clear to me that a full-term baby, both one second before and one second after it is born, is a person, in the sense that it is a viable human being capable of conscious experience.
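
As a toy formalization of that gradient (the functional form is my own illustrative assumption, not a claim about the correct curve), personhood could be any monotone function $p(t)$ with $p(0) = 0$ at conception and $p(T) = 1$ at birth, e.g. a simple linear ramp over a full term of $T \approx 40$ weeks:

$$p(t) = \min\left(\frac{t}{T},\, 1\right)$$

The neuroscience evidence below would then argue for a curve that rises steeply somewhere between 20 and 30 weeks, rather than linearly.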

I've also considered the neuroscience research suggesting that fetuses in the womb as early as 20 weeks are capable of memorizing music played to them. This, along with the completion of thalamocortical connections at around 26 weeks and evidence of sensory response to pain at 30 weeks, suggests to me that the fetus develops the ability to sense and feel well before birth.

All this together means that my nuanced view is that if we have to draw a line in the sand for when abortion should and shouldn't be permissible, I would tentatively favour somewhere around 20 weeks, the midpoint of pregnancy. I would also consider something along the lines of no restrictions in the first trimester, some restrictions in the second trimester, and a full ban in the third trimester, with an exception for when the mother's life is in danger (in which case we save the mother, because the mother is likely more sentient).

Note that in practice the vast majority of abortions happen in the first trimester, and many doctors refuse to perform late-term abortions anyway, so restrictions of this kind would not significantly change the number of abortions that occur.

That was my thinking before considering the EA considerations. However, when I give thought to the moral uncertainty and the future persons arguments, I find that I am less confident in my old ideas now, so thank you for this post.

Actually, I can imagine that one way of integrating EA considerations into my old ideas would be to weigh the value of the fetus not only by its "personhood", but also by its "potential personhood given moral uncertainty" and its expected QALYs. Though perhaps the QALYs argument dominates everything else.
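
One toy way to write that combined weighting down (the decomposition and symbols are my own illustrative sketch, not anything from the original post) would be to score the fetus at gestational age $t$ as

$$V(t) = \big[\, p(t) + (1 - p(t)) \cdot c \,\big] \cdot E[\text{QALYs}]$$

where $p(t)$ is graded personhood as above, $c \in [0, 1]$ is the credence one assigns, under moral uncertainty, to the view that the fetus already counts fully as a person, and $E[\text{QALYs}]$ is the expected quality-adjusted life-years at stake. If $E[\text{QALYs}]$ is effectively unbounded, as in the descendants argument above, it swamps the bracketed term, which is what I mean by the QALYs argument dominating.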

Regardless, I'm impressed that you were willing to handle such a controversial topic as this.

Anti Publication Bias Registry

Just an update. I decided to make a go of adding the experiment to the Registry. Hopefully what I added is acceptable. If not, let me know what I should change.

Anti Publication Bias Registry

I have a bunch of experiments I ran for a Master's thesis on the use of neural networks for object recognition, which ended up getting published in a couple of conference papers. Given that any A.I. research has the potential to contribute to Friendly A.I., would those have counted, or are they too distant from E.A.?

I also have an experiment whose current status is "failed", a Neural Network Earthquake Predictor, which I'm considering resurrecting in the near future by applying different and newer methods. How would I go about incorporating such an experiment into this registry, given that it technically has a tentative result, but the result isn't final yet?