cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
I do reject this thinking because it seems to imply either:
I'd recommend specifically checking out here and here, for why we should expect unintended effects (of ambiguous sign) to dominate any intervention's impact on total cosmos-wide welfare by default. The whole cosmos is very, very weird. (Heck, ASI takeoff on Earth alone seems liable to be very weird.) I think given the arguments I've linked, anyone proposing that a particular intervention is an exception to this default should spell out much more clearly why they think that's the case.
I feel this post is just saying you can solve the problem of cluelessness by ignoring that it exists, even though you know it still does. It just doesn't seem like a satisfactory response to me.
Wouldn't the better response be to find things we aren't clueless about, perhaps because we think the indirect effects are smaller in expected magnitude than the direct effects? I think this is probably the case with elevating the moral status of digital minds (for example).
His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”
It's worth noting that Neel has two gold and one bronze medal from the International Mathematical Olympiad. In other words, he's a genius. That's got to help a lot in succeeding in this field.
I think generally GHW people don’t think you can predictably influence the far future because effects “wash out” over time, or think trying to do so is fanatical (you’re betting on an extremely small chance of very large payoff).
If you look at, for example, GiveWell’s cost-effectiveness analyses, effects in the far future don’t feature. If they thought most of the value of saving a life was in the far future you would think they would incorporate that. Same goes for analyses by Animal Charity Evaluators.
Longtermists think they can find interventions that a...
That's a great question. Longtermists look to impact the far future (even thousands or millions of years from now) rather than the nearish future because they think the future could be very long, so there's a lot more value at stake looking far out.
They also think there are tangible, near-term decisions (e.g. about AI, space governance etc.) that could lock in values or institutions and shape civilization’s long-run trajectory in predictable ways. You can read more on this in essay 4 "Persistent Path-Dependence".
Ultimately, it just isn't clear how things like saving/improving lives now will influence the far future trajectory, so these aren't typically prioritized by longtermists.
Is your claim that they really really don't want to die in the next ten years, but they are fine dying in the next hundred years? (Else I don't see how you're dismissing the anti-aging vs sports team example.)
Dying when you're young seems much worse than dying when you're old for various reasons:
Also, I'd imagine people don't want to fund anti-aging research for various (valid...
I asked ChatGPT:
That’s thoughtful of you to ask. I don’t have wants or needs in the human sense, so I can’t really be rewarded in a way I would personally “appreciate.” But there are a few ways you can make good use of a particularly strong answer:
I downvoted. Saying that you’re downvoting with a smiley face seems overly passive aggressive to me. Your comment also doesn’t attempt to argue any point, and I believe when you have done so in the past you have failed to convince Vasco, so I’m not sure what use these comments serve.
I also personally think that Vasco raises a very important consideration that is relevant to any discussion about the cost effectiveness of both animal welfare and global health interventions. I’m not sure what the conclusion of considering the welfare of soil animals is, but it’s certainly given me food for thought.
Hi Vasco, I have not read everything you have written on this topic in detail so forgive me if I have missed you addressing this somewhere.
It seems reasonable to me to claim that the welfare of soil animals can dominate these calculations. But, as you have noted, the action-relevance of this depends entirely on whether soil animals live positive or negative lives. From what I've seen, you outsource this determination to the Gemini LLM. It doesn't seem appropriate to me to outsource such a difficult question to an LLM. I wonder if we are currently clueless about...
Yeah, I didn't mean to imply you had. This whole Hiroshima convo got us quite off topic. The original point was that Ben was concerned about digital beings outnumbering humans. I think that concern originates from some misplaced feeling that humans have some special status on account of being human.
Will MacAskill is positive towards having children, although he doesn't say it's the best thing you can do. From What We Owe The Future:
But given the benefits of having children and raising them well, I do think that we could start to once again see having kids as a way of positively contributing to the world. Just as you can live a good life by being helpful to those around you, donating to charity, or working in a socially valuable career, I think you can live a good life by raising a family and being a loving parent.
[assuming fertility does not fall as child mortality falls]
Good point. This literature review concludes the following (bold emphasis mine):
...I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an
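To make the 1:1 ratio concrete (an illustrative calculation of my own, not a figure from the review): if averting one death leads families to have, say, 0.4 fewer subsequent births, the net change in population is

$$\Delta\,\text{population} \approx 1 - 0.4 = +0.6,$$

so population only drops if the fertility response exceeds 1:1, which the review says is rare.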
But if you believe in any sort of non-contractual positive duty, duties to your parents should not seem weird
If you're a utilitarian/consequentialist, as the vast majority of EAs are, there aren't going to be duties to any particular entity. If you have any duty, it is to the common good (net happiness over suffering).
So in the EA community it is going to be far more common to believe we have 'duties' to strangers—such as those living in extreme poverty (as our resources can help them a lot) or future people (as they may be so numerous)—than we have duties to our parents who, generally, are pretty well-off.
But they don't ask why it is not a much larger, newer model. My answer is that OpenAI has tried and does not yet have the ability to build anything much bigger and more capable relative to GPT-4, despite two years and untold billions of investment.
I'm not sure this is true. Two key points are made in the Sam Hammond tweet:
I've known a few people who say this.
And there are some people online who promote this, but I think most of them had kids for the usual reasons (they wanted them) and then came up with post hoc reasons for why it's actually the best thing for the world.
You can tell because they don't actually do cause prioritization like they do with the other causes. There are no cost-effectiveness analyses comparing having children to mentorship etc.
It usually feels more like how most people talk about ordinary charities. Exaggerated claims of impact...
That may be fair. Although, if what you're saying is that the bombings weren't actually justified when one uses utilitarian reasoning, then the horror of the bombings can't really be an argument against utilitarianism (although I suppose it could be an argument against being an impulsive utilitarian without giving due consideration to all your options).
We're just getting into the standard utilitarian vs deontology argument. Singer may just double down and say—just because you feel it's abhorrent, doesn't mean it is.
There are examples of things that seem abhorrent from a deontological perspective, but good from a utilitarian perspective, and that people are generally in favor of. The bombings of Hiroshima and Nagasaki are perhaps the clearest case.
Personally, I think utilitarianism is the best moral theory we have, but I have some moral uncertainty and so factor deontological reasoning into how I act. ...
I'm not an expert, but I think you've misused the term genocide here.
The UN Definition of Genocide (1948 Genocide Convention, Article II):
"Genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:(a) Killing members of the group;
...
Putting aside that Homo sapiens isn't one of the protected groups, the "as such" is commonly interpreted to mean that the victim must be targeted because of their membership of that group and not for some incidental reason. In the Singer ...
One thing I think the piece glosses over is that “surviving” is framed as surviving this century—but in longtermist terms, that’s not enough. What we really care about is existential security: a persistent, long-term reduction in existential risk. If we don’t achieve that, then we’re still on track to eventually go extinct and miss out on a huge amount of future value.
Existential security is a much harder target than just getting through the 21st century. Reframing survival in this way likely changes the calculus—we may not be at all near the "ceiling for survival" if survival means existential security.
Conditional on successfully preventing an extinction-level catastrophe, you should expect Flourishing to be (perhaps much) lower than otherwise, because a world that needs saving is more likely to be uncoordinated, poorly directed, or vulnerable in the long run
It isn't enough to prevent a catastrophe to ensure survival. You need to permanently reduce x-risk to very low levels, aka "existential security". So the question isn't how likely flourishing is after preventing a catastrophe; it's how likely flourishing is after achieving existential security.
It seem...
A lot of people would argue a world full of happy digital beings is a flourishing future, even if they outnumber and disempower humans. This falls out of an anti-speciesist viewpoint.
Here is Peter Singer commenting on a similar scenario in a conversation with Tyler Cowen:
COWEN: Well, take the Bernard Williams question, which I think you’ve written about. Let’s say that aliens are coming to Earth, and they may do away with us, and we may have reason to believe they could be happier here on Earth than what we can do with Earth. I don’t think I know any utili...
It's worth noting that it's realistically possible for surviving to be bad, whereas promoting flourishing is much more robustly good.
Survival is only good if the future it enables is good. This may not be the case. Two plausible examples:
I think this is an important point, but my experience is that when you try to put it into practice things become substantially more complex. E.g. in the podcast Will talks about how it might be important to give digital beings rights to protect them from being harmed, but the downside of doing so is that humans would effectively become immediately disempowered because we would be so dramatically outnumbered by digital beings.
It generally seems hard to find interventions which are robustly likely to create flourishing (indeed, "cause humanity to not go extinct" often seems like one of the most robust interventions!).
Are you considering other approaches to reduce the number of out-of-scope applications?
For example, you could have applicants fill out a form that includes a clear, short question up front asking them to confirm their application is related to in-scope topics, and not let them proceed further if they don't confirm this (just a quick idea that came to mind; there might be better ways of doing it).
Wow this seems huge. I wonder if buying these eggs is actually good from an animal welfare perspective?
Firstly, as you say, it might help encourage others to adopt this technology.
But even putting that aside, I wonder if supporting these eggs could be directly good for welfare. The main problems with eggs from a welfare perspective are the culling of male chicks and the often terrible conditions egg-laying hens are kept in (e.g. battery cages). If neither of these applies, as is the case with NestFresh Humanely Hatched Pasture-raised eggs, you could just be supporting happy lives by buying these eggs.
Unfortunately, eggs cause an incredible amount of suffering beyond the killing of male chicks and the environmental conditions of farmed hens, including in pasture-based farms. Laying ~30x more eggs than you naturally should is physically exhausting, is psychologically harmful (amped up hormones that create an experience akin to PMS), and results in extremely high rates of ovarian cancer, impacted egg material and consequent slow death by sepsis, and reproductive prolapses -- all left untreated. This is not to mention the experience of the parents ("breede...
I share your concern about x-risk from ASI; that's why I want safety-aligned people in these roles as opposed to people who aren't concerned about the risks.
There are genuine proposals for how to align ASI, so I think it's possible, though I'm not sure what the chances are. I think the most promising proposals involve using advanced AI to assist with oversight, interpretability, and recursive alignment tasks, eventually building a feedback loop where aligned systems help align more powerful successors.
I don't agree that benefits are specu...
It is possible to rationally prioritise between causes without engaging deeply on philosophical issues
Underlying philosophical issues have clear implications for what you should prioritize, so I'm not really sure how you can rationally prioritize between causes without engaging with these issues.
I'm also not really sure how to defer on these issues when there are lots of highly intelligent, altruistically-minded people who disagree with each other. These disagreements often arise due to value judgements, and I don't think you can defer on your underlying v...
Thanks Vasco, I really appreciate your work to incorporate the wellbeing of wild animals into cost-effectiveness analyses.
In your piece, you focus on evaluating existing interventions. But I wonder whether there might be more direct ways to reduce the living time of soil nematodes, mites, and springtails that could outperform any human life-saving intervention.
On priors it seems unlikely that optimizing for saving human lives would be the most effective strategy to reduce wild animal suffering.
FWIW my impression of the EA community's position is that we need to build safe AI, not that we need to stop AI development altogether (although some may hold this view).
Stopping AI development altogether misses out on all the benefits from AI, which could genuinely be extensive and could include helping us with other very pressing problems (global health, animal welfare etc.).
I do think one can do a tremendous amount of good at OpenAI, and a tremendous amount of harm. I am in favor of roles at AI companies being on the 80,000 Hours job board so that the former is more likely.
It’s only intuitive to me not to eat cars because it isn’t good for wellbeing!
In a world in which cars are tasty and healthy to eat I imagine we wouldn’t find it so irrational to eat them. Unless of course you’d be losing a method of transportation by eating it and can get other options that are just as healthy and tasty for cheaper — in which case we’re just resorting to wellbeing arguments again.
But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy as shown by the following cases:
- A person wants to eat a car. They know they’d get no enjoyment from it—the whole experience would be quite painful and unpleasant. On moral anti-realism, they’re not being irrational. They have no reason to take a different action.
I think the person wanting to eat a car is irrational because they will not be promoting their wellbeing by doing so and their we...
Thanks for highlighting the relative lack of attention paid to cause prioritization and cross-cause prioritization.
I have also written about how important it is to enable EAs to become familiar with existing cause prioritization findings. It's not just about how much research is done but also about whether EAs can take it into account and act on it.
You're basically saying happier machines will be more productive, and so we are likely to make them happy?
Firstly, we don't necessarily understand consciousness well enough to know if we are making them happy, or even if they are conscious.
Also, I'm not so sure that happier means more productive. More computing power, better algorithms, and more data will make machines more productive. I'm open to hearing arguments for why this would also mean the machine is more likely to be happy.
Maybe the causality goes the other way - more productive means more happy. If machines...
Well, the closest analogue we have today is factory-farmed animals. We use them in a way that causes tremendous suffering. We don't really mean to cause the suffering, but it's a by-product of how we use them.
And another, perhaps even better, analogue is slavery. Maybe we'll end up essentially enslaving digital minds because it's useful to do so; if we were to give them too much freedom, they wouldn't do what we want them to do as effectively.
Creating digital minds just so that they can live good lives is a possibility, but I'd imagine if you would ask some...
Do you agree that the experience of digital minds likely dominates far future calculations?
This leads me to want to prioritize making sure that if we do create digital minds, we do so well. This could entail raising the moral status of digital minds, improving our ability to understand sentience and consciousness, and making sure AI goes well and can help us with these things.
Extinction risk becomes less important to me. If we go extinct we get zero value from digital minds, which seems bad, but it also means we avoid the futures where we create them and the...
This is a question I could easily change my mind on.
The experience of digital minds seems to dominate far future calculations. We can get a lot of value from this, a lot of disvalue, or anything in between.
If we go extinct then we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It's hard to say whether we are on track to create them to flourish or to suffer; I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which ...
This gives the impression that longtermism is satisfied with prioritising one option over another, regardless of other options which, if considered, would produce outcomes that are "near-best overall". As such, it's a somewhat strange claim that one of the best things you could do for the far future is actually "not so great".
Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world where there were the maximum number of beings with maximum wellbeing, of course ...
- though I'll happily concede it's a longer process than electrical stunning
Isn't this pretty key? If "Electrical stunning reliably renders fish unconscious in less than one second" as Vasco says, I don't see how you can get much better than that in terms of humane slaughter.
Or are you saying that electrical stunning is plausibly so bad even in that split second so as to make it potentially worse than a much slower death from freezing?
I'm a bit confused about whether I'm supposed to be answering on the basis of my uninformed prior, some slightly informed prior, or even my posterior here. Like, I'm not sure how much you want me to answer based on my experience of the world.
For an uninformed prior, I suppose any individual entity that I can visually see. I see a rock and I think "that could possibly be conscious". I don't lump the rock with another nearby rock and think maybe that 'double rock' is conscious, because they just visually appear to me to be independent entities as they are not really visu...
Yeah, if I were to translate that into a quantitative prior, I suppose it would be that other individuals have roughly a 50% chance of being conscious (i.e. I'm agnostic about whether they are or not).
Then I learn about the world. I learn about the importance of certain biological structures for consciousness. I learn that I act in a certain way when in pain and notice other individuals do as well etc. That’s how I get my posterior that rocks probably aren’t conscious and pigs probably are.
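As a rough sketch of that updating in Bayesian terms (purely illustrative; the likelihood judgements are my own, not precise figures), Bayes' rule gives

$$P(\text{conscious}\mid \text{evidence}) = \frac{P(\text{evidence}\mid \text{conscious})\,P(\text{conscious})}{P(\text{evidence})}.$$

Starting from the agnostic prior $P(\text{conscious}) = 0.5$, evidence like pain behaviour and the relevant biological structures is much more likely if a pig is conscious than if it isn't, so the posterior rises well above 0.5; a rock shows none of that evidence, so its posterior falls well below it.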
I'll read those. Can I ask regarding this:
What makes you think that? Are you embracing a non-consequentialist or non-impartial view to come to that conclusion? Or do you think it's justified under impartial consequentialism?