Thanks for this write-up!
You might already be aware of these, but I think there are some strong objections to the Doomsday argument that you didn't touch on in your post.
One is the Adam & Eve paradox, which seems to follow from the same logic as the Doomsday argument, but also seems completely absurd.
Another is reference class dependence. You say it is reasonable for me to conclude I am in the middle of 'humanity', but what is humanity? Why should I consider myself a sample from all Homo sapiens, and not, say, apes, or mammals, or Earth-originating life? What even is a 'human'?
Makes sense, thank you for the reply, I appreciate it!
And good to know you still want to hear from people who don't meet that threshold of involvement; I wasn't sure whether that was the case from the wording of the post and survey questions. I will fill it in now!
Do you want non-"actively involved" EAs to complete the survey?
The definition of "active involvement" is given as working >5 hours per week in at least one EA cause area, and it reads as though the $40 is only donated for people in that category, which suggests these might be the only people you want to hear from?
This seems quite strict! I've taken the GWWC pledge, and I give all of my income above a cap to EA causes. I also volunteer with the Humane League, probably spending a few hours a month on average doing stuff with them. And I check the EA forum pretty ...
You've made some interesting points here, but I don't think you ever discussed the possibility that someone is actually voting altruistically, for the benefit of some group or cause they care about (helping people in their local area, people in the rest of the country, everyone in the world, future generations, etc.).
Is it really true that most voters' behaviour can be explained by either (i) self-interest or (ii) an 'emotionally rewarding cheer for their team'? I find that a depressing thought. Is no one sincerely trying to do the right thing?
If y...
This is an interesting analysis that I haven't properly digested, so what I'm about to say might be missing something important, but something about this type of approach to this type of question feels a bit strange to me.
For example, couldn't I write a post titled "Can AI cause human extinction? not on priors" where I look at historical data on "humans killed by machines" (e.g. traffic accidents, factory accidents) as a fraction of the global population, show that it is tiny, and argue it's extremely unlikely that AI (another type of machine) will wipe us all o...
I don't know much about the Nestle example, but in principle yes I think so.
I think the same would apply to any case where the production of each individual product does marginal harm. In that case, a single individual can choose not to purchase the product and thereby have a marginal impact.
And maybe these kinds of boycotts are more common than I suggested in the original answer, but it definitely applies to veganism.
This is just a quick answer to point out that veganism (which you mention in the question) is a bit different to other kinds of boycotts.
In a conventional boycott, you refuse to purchase certain kinds of products until the organisation(s) that sell them change their ways. I don't know much about how effective those kinds of boycotts tend to be (although I think there are some famous examples where large-scale, well-organised boycotts seem to have produced some powerful results, e.g. the Montgomery bus boycott).
But veganism isn't just about pressuring organisat...
Thanks for this, a really nice write-up. I like these heuristics, and will try to apply them.
On the intuition behind how to interpret statistical power, doesn't a Bayesian perspective help here?
If someone was conducting a statistical test to decide between two possibilities, and you knew nothing about their results except: (i) their calculated statistical power was B (ii) the statistical significance threshold they adopted was p and (iii) that they ultimately reported a positive result using that threshold, then how should you update on that, without knowi...
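To spell out the update I have in mind, here is a toy sketch (the prior and the values of B and p are made up for illustration, and it treats the test as exactly calibrated): a reported positive result should multiply your prior odds by the likelihood ratio B/p.

```python
# Toy sketch: updating on "a positive result was reported", knowing only
# the test's power (B) and its significance threshold (p).
B = 0.8        # power: P(positive result | effect is real) -- illustrative value
p = 0.05       # significance threshold: P(positive result | no effect)
prior = 0.5    # assumed prior probability that the effect is real

likelihood_ratio = B / p                       # Bayes factor for a positive result
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {likelihood_ratio:.1f}")            # 16.0
print(f"Posterior P(effect is real): {posterior:.3f}")    # ~0.941
```

So with decent power and a strict threshold, a bare "positive result" can be reasonably strong evidence even before you look at the data itself.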
I have a personal anecdote that I can use as a knock-down argument against anyone on the EA forum who tells me that I am wasting my time by reading the news: I discovered EA through an article on the BBC News website.
I first heard about Effective Altruism from this article, which I read in December 2010, before the concept was even called "Effective Altruism". The article was a profile of Toby Ord, and his decision to give away most of his lifetime income to effective causes. It made a big impact on me at the time, because I knew that I wanted to do the sa...
Thanks for this great post! Really fascinating!
Sorry if this was already asked, but I couldn't see it: how likely is it that pathogens would be able to develop resistance to UVC, and how quickly might that happen? If it did happen, how big a concern would it be? E.g. would it just be a return to the status quo, or would it be an overcorrection?
I really like the way Derek Parfit distinguishes between consequentialist and non-consequentialist theories in 'Reasons and Persons'.
All moral theories give people aims. A consequentialist theory gives everyone the same aims (e.g. maximize total happiness). A non-consequentialist theory gives different people different aims (e.g. look after your own family).
There is a really important difference there. Not all moral theories are consequentialist.
Thanks for writing this up, this is a really interesting idea.
Personally, I find points 4, 5, and 6 really unconvincing. Are there any stronger arguments for these, that don't consist of pointing to a weird example and then appealing to the intuition that "it would be weird if this thing was conscious"?
My intuition tells me that all of these examples would be conscious, which means I find the arguments unconvincing, but also hard to argue against!
But overall I get that, given the uncertainty around what consciousness is, it might be a good idea to use implementation considerations to hedge our bets. This is a nice post.
I think this is an interesting question, and I don't know the answer.
I think two quite distinct ideas are being conflated in your post though: (i) 'earning to give' and (ii) the GWWC 10% pledge.
These concepts are very different in my head.
'Earning to give': When choosing a career with the aim of doing good, some people should pick a career to maximize their income (perhaps subject to some ethical constraints), and then give a lot of it away to effective causes (probably a lot more than 10%). This idea tells you which jobs you should decide to work in.
GWWC ...
Thanks for this reply! That makes sense. Do you know how likely people in the field think it is that AGI will come from just scaling up LLMs vs requiring some big new conceptual breakthrough? I hear people talk about this question but don't have much sense about what the consensus is among the people most concerned about AI safety (if there is a consensus).
I've seen people already building AI 'agents' using GPT. One crucial component seems to be giving it a scratchpad to have an internal monologue with itself, rather than forcing it to immediately give you an answer.
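To illustrate, here's a toy sketch of the scratchpad pattern I mean. `call_llm` is a hypothetical stand-in for whatever model API you're using (not a real library function), and the prompts are just placeholders:

```python
# Toy sketch of a scratchpad-style agent step (not any particular library's API).

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: substitute a real model call here.
    return f"[model output for: {prompt[:40]}...]"

def answer_with_scratchpad(question: str) -> str:
    # First pass: let the model 'think out loud' without committing to an answer.
    scratchpad = call_llm(
        f"Question: {question}\n"
        "Think step by step in a scratchpad. Do not give a final answer yet."
    )
    # Second pass: ask for the answer, conditioned on the now-readable reasoning.
    return call_llm(
        f"Question: {question}\nScratchpad:\n{scratchpad}\nNow give a final answer."
    )

print(answer_with_scratchpad("What should the agent do next?"))
```

The point is just that the intermediate reasoning is ordinary text sitting between the two calls, which we can read and audit.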
If the path to agent-like AI ends up emerging from this kind of approach, wouldn't that make AI safety really easy? We could just read their minds and check what their intentions are.
Holden Karnofsky talks about 'digital neuroscience' being a promising approach to AI safety, where we figure out how to read the minds of AI agents. And for curr...
I really like this argument. I think there's another way of framing it that occurred to me when reading it, that I also found insightful (though it may already be obvious):
Point taken, although I think this is analogous to saying: Counterfactual analysis will not leave us predictably worse off if we get the probabilities of others deciding to contribute right.
Thank you for this correction, I think you're right! I had misunderstood how to apply Shapley values here, and I appreciate you taking the time to work through this in detail.
If I understand correctly now, the right way to apply Shapley values to this problem (with X=8, Y=2) is not to work with N (the number of players who end up contributing, which is unknown), but instead to work with N', the number of 'live' players who could contribute (known with certainty here, not something you can select), and then:
Edit: Vasco Grilo has pointed out a mistake in the final paragraph of this comment (see thread below), as I had misunderstood how to apply Shapley values, although I think the conclusion is not affected.
If the value of success is X, and the cost of each group pursuing the intervention is Y, then ideally we would want to pick N (the number of groups that will pursue the intervention) from the possible values 0,1,2 or 3, so as to maximize:
(1-(1/2)^N) X - N Y
i.e., to maximize expected value.
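For concreteness, with the X=8, Y=2 numbers used elsewhere in this thread, here's a quick check of each possible N:

```python
# Expected value of having N groups independently pursue the intervention,
# each succeeding with probability 1/2 (X=8, Y=2 taken from this thread).
X, Y = 8, 2

for N in range(4):
    ev = (1 - 0.5**N) * X - N * Y
    print(N, ev)
# N=0: 0.0, N=1: 2.0, N=2: 2.0, N=3: 1.0  -> N=1 or N=2 maximizes expected value
```

So with those numbers, one or two groups is optimal, and adding a third actually destroys value.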
If all 3 groups have the same goals, they'll all agree what N is. If ...
To arrive at the 12.5% value, you were assuming that you knew with certainty that the other two teams would try to create the vaccine without you (and that they each have a 50% chance of succeeding), so your effort makes the difference only when your team succeeds and both of the others fail: 1/2 × 1/2 × 1/2 = 12.5%. And I still think that under that assumption, 12.5% is the correct figure.
If I understand your reasoning correctly for why you think this is incoherent, it's because:
If the 3 teams independently arrive at the 12.5% figure, and each use that to decide whether to proceed, then you might end up in a situation where none of them fund it, despite it being clearly wo...
Edited after more careful reading of the post
As you say in the post, I think all these things can be true:
1) The expected counterfactual value is all that matters (i.e. we can ignore Shapley values).
2) The 3 vaccine programmes had zero counterfactual value in hindsight.
3) It was still the correct decision to work on each of them at the time, with the information that was available then.
At the time, none of the 3 programmes knew that any of the others would succeed, so the expected value of each programme was very high. It's not clear to me why the '12.5%...
I should admit at this point that I didn't actually watch the Philosophy Tube video, so can't comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on.
I also agree with you that most existential risk work probably doesn't need to rely on the possibility of 'Bostromian' futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is), you don't need it to be very very very bad.
But I think there must be some prioritisation decisions where it becomes relev...
On your response to the Pascal's mugging objection: I've seen this argument about Pascal's mugging and strong longtermism made before (that existential risk is actually very high, so we're not in a Pascal's mugging situation at all), but I think that reply misses the point a bit.
When people worry about the strong longtermist argument taking the form of a Pascal mugging, the small probability they are thinking about is not the probability of extinction, it is the probability that the future is enormous.
The controversial question here is: how bad would extinctio...
When you write:
"I decide what the probability of the Mugger's threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?"
This is exactly the 'dogmatic' response to the mugge...
I still don't think the position I'm trying to defend is circular. I'll have a go at explaining why.
I'll start by answering your question: in practice, the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I'd look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes' theorem. Savage's axioms imply that my decision-describing numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Ba...
I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger.
But you haven't really avoided the problem, just re-phrased it slightly. Whatever amount of money you would be willing to risk for others, on expected utility terms it seems better to give it to the mugger than to an excellent charity, such as the Against Malaria Foundation. In this framing of the problem, the mugger is now effectively robbing the AMF, rather than you, but the problem is still there.
I am comfortable using subjective probabilities to guide decisions, in the sense that I am happy with trying to assign to every possible event a real number between 0 and 1, which will describe how I will act when faced with gambles (I will maximize expected utility, if those numbers are interpreted as probabilities).
But the meaning of these numbers is that they describe my decision-making behaviour, not that they quantify a degree of belief. I am rejecting the use of subjective probabilities in that context, if it is removed from the context of decisions....
I guess I am calling into question the use of subjective probabilities to quantify beliefs.
I think subjective probabilities make sense in the context of decisions, to describe your decision-making behaviour (see e.g. Savage's derivation of probabilities from certain properties of decision-making he thinks we should abide by). But if you take the decisions out of picture, and try to talk about 'beliefs' in abstract, and try to get me to assign a real number between 0 and 1 to them, I think I am entitled to ask "why would I want to do something like that?" E...
Let's assume for the moment that the probabilities involved are known with certainty. If I understand your original 'way out' correctly, then it would apply just as well in this case. You would embrace being irrational and still refuse to give the mugger your wallet. But I think here, the recommendations of expected utility theory in a Pascal's mugger situation are doing well 'on their own terms'. This is because expected utility theory doesn't tell you to maximize the probability of increasing your utility, it tells you to maximize your utility in expectat...
I think that's a very persuasive way to make the case against assigning 0 probability to infinities. I think I've got maybe three things I'd say in response which could address the problem you've raised:
You've pointed to a lot of potential complications, which I agree with, but I think they all also apply in cases where someone has done harm, not just in cases where they have not helped.
I just don't think the act/omission distinction is very relevant here, and I thought the main claim of your post was that it was (but I could have got the wrong end of the stick here!)
If we know the probabilities with certainty somehow (because God tells us, or whatever) then dogmatism doesn't help us avoid reckless conclusions. But it's an explanation for how we can avoid most reckless conclusions in practice (it's why I used the word 'loophole', rather than 'flaw'). So if someone comes up and utters the Pascal's mugger line to you on the street in the real world, or maybe if someone makes an argument for very strong longtermism, you could reject it on dogmatic grounds.
On your point about diminishing returns to utility preventing reckl...
I think this comes down to the question of what subjective probabilities actually are. If something is conceivable, do we have to give it a probability greater than 0? This post is basically asking: why should we?
The main reason I'm comfortable adapting my priors to be dogmatic is that I think there is probably not a purely epistemological 'correct' prior anyway (essentially because of the problem of induction), and the best we can do is pick priors that might help us to make practical decisions.
I'm not sure subjective probabilities can necessarily be give...
Thanks for your comment, these are good points!
First, I think there is an important difference between Pascal's mugger and Kavka's poison/Newcomb's paradox. The latter two are examples of ways in which a theory of rationality might be indirectly self-defeating. That means: if we try to achieve the aims given to us by the theory, they can sometimes be worse achieved than if we had followed a different theory instead. This means there is a sense in which the theory is failing on its own terms. It's troubling when theories of rationality or ethics have this p...
The two specific examples that come to mind where I've seen dogmatism discussed and rejected (or at least not enthusiastically endorsed) are these:
The first is not actually a paper, and to be fair I think Hajek ends up being pretty sympathetic to the view that in practice, maybe we do just have to be dogmatic. But my impression was it was ...
I agree with some of what you say here. For example, from a mental health perspective, teaching yourself to be content 'regardless of your achievements' sounds like a good thing.
But I think adopting 'minimize harm' as the only principle we can use to make judgements of people, is far too simplistic a principle to work in practice.
For example, if I find out that someone watched a child fall into a shallow pond and didn't go to help them (when the pond is shallow enough that helping would have posed no risk to them), then I will judge them for that. I am not c...
Apologies, I misunderstood a fundamental aspect of what you're doing! For some reason in my head you'd picked a set of conjectures which had just been posited this year, and were seeing how Laplace's rule of succession would perform when using it to extrapolate forward with no historical input.
I don't know where I got this wrong impression from, because you state very clearly what you're doing in the first sentence of your post. I should have read it more carefully before making the bold claims in my last comment. I actually even had a go at stating the te...
Edit: This comment is wrong and I'm now very embarrassed by it. It was based on a misunderstanding of what NunoSempere is doing that would have been resolved by a more careful read of the first sentence of the forum post!
Thank you for the link to the timeless version, that is nice!
But I don't agree with your argument that this issue is moot in practice. I think you should repeat your R analysis with months instead of years, and see how your predicted percentiles change. I predict they will all be precisely 12 times smaller (willing to bet a small...
I'm confused about the methodology here. Laplace's law of succession seems dimensionless. How do you get something with units of 'years' out of it? Couldn't you just as easily have looked at the probability of the conjecture being proven on a given day, or month, or martian year, and come up with a different distribution?
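To illustrate the worry, here's a toy calculation of my own (not the post's code), using the standard closed form for Laplace's rule:

```python
# Laplace's rule: after n consecutive failures, P(success on next trial) = 1/(n+2),
# and P(still unresolved after k more trials) = (n+1)/(n+k+1) (the product telescopes).
def p_resolved_within(n_failures: int, k_trials: int) -> float:
    return 1 - (n_failures + 1) / (n_failures + k_trials + 1)

# Fresh conjecture (no track record): the choice of trial unit dominates.
print(p_resolved_within(0, 1))      # yearly trials: P(resolved in 1 year) = 0.50
print(p_resolved_within(0, 365))    # daily trials:  P(resolved in 1 year) ~ 0.997

# Conjecture already open for 100 years: the unit barely matters.
print(p_resolved_within(100, 10))        # yearly trials: ~ 0.090
print(p_resolved_within(36500, 3650))    # daily trials:  ~ 0.091
```

So for fresh conjectures the choice of unit completely dominates the prediction, although it seems to mostly wash out for conjectures that have already been open a long time.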
I'm also confused about what this experiment will tell us about the utility of Laplace's law outside of the realm of mathematical conjectures. If you used the same logic to estimate human life expectancy, for example, it would clearly be v...
Thanks for the comment! I have quite a few thoughts on that:
First, the intention of this post was to criticize strong longtermism by showing that it has some seemingly ridiculous implications. So in that sense, I completely agree that the sentence you picked out has some weird edge cases. That's exactly the claim I wanted to make! I also want to claim that you can't reject these weird edge cases without also rejecting the core logic of strong longtermism that tells us to give enormous priority to longterm considerations.
The second thing to say though is th...
Thanks! Very related. Is there somewhere in the comments that describes precisely the same issue? If so I'll link it in the text.
I tried to describe some possible examples in the post. Maybe strong longtermists should have less trust in scientific consensus, since they should act as if the scientific consensus is wrong on some fundamental issues (e.g. the 2nd law of thermodynamics, or the prohibition on faster-than-light travel). Although I think you could make a good argument that this goes too far.
I think the example about humanity's ability to coordinate might be more decision-relevant. If you need to act as if humanity will be able to overcome global challenges and spread through the g...
This seems like an odd post to me. Your headline argument is that you think SBF made an honest mistake, rather than wilfully misusing his users' funds, and most commenters seem to be reacting to that claim. The claim seems likely wrong to me, but if you honestly believe it then I'm glad you're sharing it and that it's getting discussed.
But in your third point (and maybe your second?) you seem to be defending the idea that even if SBF wilfully misused funds, then that's still ok. It was a bad bet, but we should celebrate people who take risky, but pos...
I am very confident that the arguments do perfectly cancel out in the sky-colour case. There is nothing philosophically confusing about the sky-colour case, it's just an application of conditional probability.
That doesn't mean we can never learn anything. It just means that if X and Y are independent after controlling for a third variable Z, then learning X can give you no additional information about Y if you already know Z. That's true in general. Here X is the colour of the sky, Y is the probability of a catastrophic event occurring, and Z is the number...
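Here's a toy simulation of that structure, with made-up variables just to illustrate the conditional-independence point: once you condition on Z, learning X doesn't shift the estimate of Y.

```python
import random

random.seed(0)

# Toy model: Z is a hidden common cause of X and Y, so X and Y are
# conditionally independent given Z.
samples = []
for _ in range(100_000):
    z = random.random()        # hidden common cause
    x = random.random() < z    # X depends only on Z
    y = random.random() < z    # Y depends only on Z
    samples.append((z, x, y))

# Condition on Z lying in a narrow band, then compare P(Y) with and without X.
band = [(x, y) for z, x, y in samples if 0.6 < z < 0.7]
p_y_given_z = sum(y for _, y in band) / len(band)
p_y_given_z_and_x = sum(y for x, y in band if x) / sum(1 for x, y in band if x)

print(round(p_y_given_z, 3), round(p_y_given_z_and_x, 3))  # ~0.65 and ~0.65
```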
I'd like to spend more time digesting this properly, but the statistics in this paragraph seem particularly shocking to me:
"For instance, Hickel et al. (2022) calculate that, each year, the Global North extracts from the South enough money to end extreme poverty 70x over. The monetary value extracted from the Global South from 1990 to 2015 - in terms of embodied labour value and material resources - outstripped aid given to the Global South by a factor of 30. "
They also seem hard to reconcile with each other. If the global north extracts every year 70...
I 'disagreed' with this, because I don't think you drew enough of a distinction between purchasing animals raised on factory farms, and purchasing meat in general.
While there might be an argument that the occasional cheeseburger isn't that "big of a deal", I think purchasing a single chicken raised on a factory farm is quite a big deal. And if you do that occasionally, stopping will probably be pretty high up on the list of effective actions you can take, in terms of impact-to-effort ratio.