Lukas_Finnveden


Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuitions kick in and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to improve one person's life from ok to awesome, I imagine that most people prefer to cure the billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuitions differ between this case and the repugnant conclusion, I claim that "The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future" is incorrect. The fact that the repugnant conclusion is about merely possible people clearly matters for people's intuitions in some way.

I agree that the repugnance can't be grounded by saying that merely possible people don't matter at all. But there are other possible mechanisms that treat merely possible people differently from existing people and that can ground the repugnance. For example, the one in the paper this post is discussing!

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, most people who oppose the repugnant conclusion instead favor egalitarian solutions, granting small benefits to many (though I haven't seen any data on this, so I'd be curious if you disagree!). Whereas when debating who to bring into existence, people who oppose the repugnant conclusion aren't just indifferent about what happens to these merely-possible people; they actively think that the happy, tiny population is better. 

So the tricky thing is that people intuitively support granting small benefits to many already existing people over large benefits to a few already existing people, but don't want to extend this to creating many barely-good lives over a few really good ones.

The Fermi Paradox has not been dissolved

with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen

I haven't run the numbers, but I wouldn't be quite so dismissive. Intergalactic travel is probably possible, so with numbers as high as these, I would've expected us to encounter some early civilisation from another galaxy. So if these numbers were right, it'd be some evidence that intergalactic travel is impossible, or that something else strange is going on.

(Also, it would be an important consideration for whether we'll encounter aliens in the future, which has at least some cause prioritisation implications.)

(But also, I don't buy the argument for these numbers, see my other comment.)

The Fermi Paradox has not been dissolved

I hadn't seen the Lineweaver and Davis paper before, thanks for pointing it out! I'm sceptical of the methodology, though. They start out with a uniform prior over [0, 1] for the probability that life emerges in a ~0.5B-year time window. This pretty much assumes their conclusion already, as it assigns less than 0.1% prior probability to the hypothesis that life emerges with less than 0.1% probability per window (I much prefer log-uniform priors). The exact timing of abiogenesis is then used to make a very modest Bayesian update (less than 2:1 in favor of "life always happens as soon as possible" over any other probability of life emerging), which yields the 95% credible interval with 13% at the bottom. Note that even before they updated on any evidence, they had already assumed a 95% credible interval with 2.5% at the bottom!
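To make the role of the prior concrete, here's a rough numerical sketch (my own toy model, not Lineweaver and Davis's actual calculation). It shows that a uniform prior puts under 0.1% of its mass on the per-window probability being below 0.1%, that a log-uniform prior puts most of its mass there, and that a likelihood ratio capped at 2:1 barely moves either.

```python
import numpy as np

# Grid of candidate per-window probabilities p of life emerging.
grid = np.logspace(-12, 0, 200_000)

# Discretised priors: probability mass per grid point.
uniform = np.gradient(grid)              # uniform over (1e-12, 1]: mass ~ bin width
log_uniform = np.gradient(np.log(grid))  # log-uniform over (1e-12, 1]: mass ~ d(log p)
uniform /= uniform.sum()
log_uniform /= log_uniform.sum()

for name, prior in [("uniform", uniform), ("log-uniform", log_uniform)]:
    print(name, "prior mass on p < 1e-3:", prior[grid < 1e-3].sum())
# uniform: ~0.001, log-uniform: ~0.75

# A toy likelihood whose ratio never exceeds 2:1 (standing in for the weak
# "life happened as early as it could" update) barely changes the uniform prior.
likelihood = 1 + grid
posterior = uniform * likelihood
posterior /= posterior.sum()
print("uniform posterior mass on p < 1e-3:", posterior[grid < 1e-3].sum())
```

The point is just that with a uniform prior, the conclusion that low per-window probabilities are very unlikely is essentially baked in before any evidence arrives.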

As an aside, I do mostly agree that alien life is likely to be common outside our galaxy (or at least that we should assume that it is). However, this is because I'm sympathetic to another account of anthropics, which leads to large numbers of aliens almost regardless of our prior, as I explain here.

Thoughts on whether we're living at the most influential time in history

I actually think the negative exponential gives too little weight to later people, because I'm not certain that late people can't be influential. But if I compared a person from the first 1e-89 of all people who will ever live with a random person from the middle, I'd certainly say that the former was more likely to be one of the most influential people. They'd also be more likely to be one of the least influential people! Their position is just so special!

Maybe my prior would be something like 30% on a uniform function, 40% on negative exponentials of various slopes, and 30% on other functions (e.g. the last person who ever lives seems more likely to be the most influential than a random person in the middle).

Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential.
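As a toy version of that (with made-up numbers, just to show the shape of the argument): as soon as one mixture component puts non-trivial probability on early people being the most influential, the mixture as a whole does too, no matter how extreme the uniform component looks on its own.

```python
# Toy mixture-of-priors calculation with illustrative numbers (not a real estimate).

total_people = 1e100
early_people = 1e11   # "early" = among the 1e11 earliest people

# Component 1: uniform prior -- the most influential person is equally likely
# to be anywhere in history.
p_early_given_uniform = early_people / total_people   # 1e-89

# Component 2: an "earliness matters" prior that puts, say, 10% of the
# probability of "most influential person ever" on the earliest 1e11 people.
p_early_given_earliness = 0.10                        # assumed, purely illustrative

# Mixture: lump everything that isn't the earliness-favouring component in with
# the uniform one for simplicity.
p_early = 0.6 * p_early_given_uniform + 0.4 * p_early_given_earliness
print(p_early)   # ~0.04 -- not super unlikely, even though one component says 1e-89
```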

Thoughts on whether we're living at the most influential time in history

One way to frame this is that we do need extraordinarily strong evidence to update from thinking that we're almost certainly not the most influential time to thinking that we might plausibly be the most influential time. However, we don't need extraordinarily strong evidence pointing towards us almost certainly being the most influential (that then "averages out" to thinking that we're plausibly the most influential). It's sufficient to get extraordinarily strong evidence that we are at a point in history which is plausibly the most influential. And if we condition on the future being long and on us not being in a simulation (because that's probably when we have the most impact), we do in fact have extraordinarily strong evidence that we are very early in history, which is a point that's plausibly the most influential.

Thoughts on whether we're living at the most influential time in history

I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness.

Let H = "the 1e10 (i.e. 10 billion) most influential people who will ever live" and let E = "the 1e11 (i.e. 100 billion) earliest people who will ever live". Assume that the future will contain 1e100 people. Let X be a randomly sampled person.

For our unconditional prior P(X in H), everyone agrees that uniform probability is appropriate, i.e., P(X in H) = 1e-90. (I.e. we're not giving up on the self-sampling assumption.)

However, for our belief about P(X in H | X in E), i.e. the probability that a randomly chosen early person is one of the most influential people, some people argue we should use e.g. an exponential function under which earlier people are more likely to be influential (which could be called a prior over "X in H" based on how early X is). But it seems like you're saying that we shouldn't assess P(X in H | X in E) directly from such a prior, but instead get it from Bayesian updates. So let's do that.

P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E) = P(X in E | X in H) * 1e-90 / 1e-89 = P(X in E | X in H) * 1e-1 = P(X in E | X in H) / 10

So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people is also one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect more or less influentialness.
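Here's the same arithmetic as a minimal sketch (the 10% guess for P(X in E | X in H) is just an illustrative number):

```python
# Bayes' rule with the numbers above: H = the 1e10 most influential people,
# E = the 1e11 earliest people, out of 1e100 people in total.

n_total = 1e100
n_H = 1e10
n_E = 1e11

p_H = n_H / n_total   # P(X in H) = 1e-90
p_E = n_E / n_total   # P(X in E) = 1e-89

def p_H_given_E(p_E_given_H):
    """P(X in H | X in E) for a guessed value of P(X in E | X in H)."""
    return p_E_given_H * p_H / p_E

# E.g. if one in ten of the most influential people ever are also among the
# earliest 1e11 people, a random early person has a 1% chance of being in H.
print(p_H_given_E(0.1))   # 0.01
```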

Also, the way that 1e-90 and 1e-89 are both extraordinarily small, but divide out to 1e-1, illustrates Buck's point:

if you condition on us being at an early time in human history (which is an extremely strong condition, because it has incredibly low prior probability), it’s not that surprising for us to find ourselves at a hingey time.

Getting money out of politics and into charity

Another relevant post is Paul Christiano's Repledge++, which suggests some nice variations. (It might still be worth going with something simple to ease communication, but it seems good to consider options and be aware of concerns.)

As one potential problem with the basic idea, it notes that

I'm not donating to politics, so wouldn't use it.

isn't necessarily true, because if you thought that your money would be matched with high probability, you could remove money from the other campaign at no cost to your favorite charity. This is bad because it gives people on the other side less incentive to donate to the scheme: their donations might just end up matched against money from people who otherwise wouldn't have donated to any campaign.
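A toy worked example of that problem, based on my reading of the basic scheme (matched money from both sides goes to charity, unmatched money goes on to the donor's campaign; Repledge++'s actual rules may differ):

```python
# Illustrative numbers only.

blue_donation = 100   # from someone who was going to give $100 to charity anyway
red_donation = 100    # from a genuine red-campaign donor

matched = min(blue_donation, red_donation)
to_charity = 2 * matched          # $200 goes to charity
red_campaign_shortfall = matched  # the red campaign gets $100 less than it otherwise would

# From the blue donor's perspective: their $100 reaches charity either way, so
# diverting $100 away from the red campaign costs them nothing.
# From the red donor's perspective: their $100 was matched by money that was
# never going to any campaign, so it didn't reduce blue campaign spending at
# all -- which is exactly the weakened incentive described above.
print(to_charity, red_campaign_shortfall)
```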

Getting money out of politics and into charity

We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

Both links go to the same Felicifia page. I suspect you're referring to the moral trade paper: http://www.amirrorclear.net/files/moral-trade.pdf

How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna?

Givewell estimates that they directed or influenced about 161 million dollars in 2018. 64 million came from Good Ventures grants. Good Ventures is the philanthropic foundation founded and funded by Dustin and Cari. It seems like the 161 million directed by Give Well represents a comfortable majority of total 'EA' donation.

If you want to count OpenPhil's donations as EA donations, that majority isn't so comfortable. In 2018, OpenPhil recommended a bit less than 120 million dollars (excluding Good Ventures' donations to GiveWell charities), of which almost all came from Good Ventures, and they recommended more in both 2017 and 2019. This is a great source on OpenPhil's funding.
