The massive error bars around how animal well-being/suffering compares to that of humans mean this is an unreliable approach to reducing suffering.
Global development is a prerequisite for a lot of animal welfare work. People struggling to survive don't have time to care about the wellbeing of their food.
Aside from the impossibility of quantifying fetal suffering with any certainty, and the social and political intractability of this idea: potassium chloride is often injected directly into the fetal heart, not the veins, so the comparison to lethal injection or animal euthanasia may be wrong.
Doesn't pass the sniff test for me. Two concerns:
If any of these think tanks had good evidence that their strategy reliably affected economic development, the strategy would quickly be widely adopted and promoted by the thousands of economic development researchers and organisations striving to find such a strategy. Economic development is not a neglected or underfunded field.
Development economics is a full-fledged academic field. Very intelligent people have been working very hard for many years to find ways to improve economic development. It's unlikely that outsiders on an internet forum will spot neglected solutions.
Would be ecstatic to be proven wrong. In the meantime this sort of post makes the community look arrogant and out of touch.
Very intelligent people have been working very hard for many years to find ways to improve economic development. It's unlikely that outsiders on an internet forum will spot neglected solutions.
This post is a list of projects that very intelligent people have been working very hard on for years that you could fund.
The error bars on Rethink Priorities' welfare ranges are huge. They tell us very little, and calculations based on them will tell you very little.
I think that without narrower error bars to back you up, a post suggesting "welfare can be created more efficiently via small non-human animals" is probably net negative: it contributes to the EA community looking crazy without the positive impact of a well-supported argument.
Hi Henry! While the 90% confidence intervals for the RP welfare ranges are indeed wide, this is because they’re coming from a mixture of several theories/models of welfare. The uncertainty within a given theory/model of welfare is much lower, and you might have more or less credence in any individual model.
Additionally, if we exclude the neuron count model, the welfare ranges from the mixture of all the other models have narrower distributions.
Here’s a document that explains the different theories/models used: https://docs.google.com/document/d/1xUvMKRkEOJ...
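A toy sketch of that mixture point (all numbers here are invented for illustration, not RP's actual models): each individual model can be fairly confident on its own, yet mixing several of them produces a wide overall interval.

```python
import random

random.seed(0)

# Three hypothetical welfare-range models (invented numbers, not RP's):
# each one is fairly narrow on its own.
models = [
    lambda: random.gauss(0.01, 0.005),  # a low, neuron-count-style model
    lambda: random.gauss(0.30, 0.05),   # a mid-range model
    lambda: random.gauss(0.80, 0.05),   # a high-range model
]

# Mixing them with equal credence yields a much wider overall distribution.
samples = sorted(max(m(), 0.0) for m in random.choices(models, k=100_000))
lo = samples[int(0.05 * len(samples))]  # 5th percentile of the mixture
hi = samples[int(0.95 * len(samples))]  # 95th percentile of the mixture
print(f"mixture 90% interval: [{lo:.3f}, {hi:.3f}]")
```

Each model alone has a 90% interval well under 0.2 wide, but the mixture spans nearly the whole [0, 1] range, so a wide overall interval needn't mean each underlying theory is uninformative.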
I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens, or stopping climate change, should we all just switch to working on AI so it can solve the problems for us?
I don't think so, because:
a. I think it's important to hedge our bets and try a range of things in case AI is many decades away or doesn't work out, and
b. having lots more people working on AI won't necessarily make it arrive faster or go better (there are already lots of people working on it).
This seems to rest heavily on Rethink Priorities' Welfare Estimates. While their expected value for the "welfare range" of chickens is 0.332 that of humans, their 90% confidence interval for that number spans 0.002 to 0.869, which is so wide that we can't make much use of it.
There seems to be a tendency in EA to reach for expected values when just admitting "I have no idea" would be more honest and truthful.
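To make the worry concrete, here's a back-of-the-envelope sketch using the figures quoted above (EV 0.332, 90% CI 0.002–0.869); the broiler count is my own rough assumption, not a number from the post.

```python
# RP-quoted figures for chickens' welfare range relative to humans:
ev, lo, hi = 0.332, 0.002, 0.869

# Rough assumption for illustration only: ~9 billion broilers alive at a time.
chickens = 9e9

# Implied stakes (human-equivalent welfare units) at each end of the interval:
stakes_lo = lo * chickens
stakes_hi = hi * chickens
print(f"low end : {stakes_lo:.2e}")
print(f"high end: {stakes_hi:.2e}")
print(f"spread  : {stakes_hi / stakes_lo:.0f}x")
```

A spread of over two orders of magnitude in implied stakes is the gap between "rounding error" and "dominant moral priority", which is why the expected value alone carries so little decision-relevant information.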
To be fair to the OP (edit: I meant original poster), they make their uncertainty, and the conditionals it entails, really clear throughout. I don't think it's fair to say they're not being honest and truthful.
Most suffering in the world happens in farms.
You state this like it's a fact, but it's heavily dependent on how you compare animal and human suffering; I don't think this is a given. Formal attempts to compare animal and human suffering, like Rethink Priorities' Animal Welfare Estimates, have enormous error bars.
Worth being cautious in a world where ~10% of people live on <$2 a day.
"Only prolongs existence"
Preventing malaria stops people from suffering from the sickness, prevents grief from the death of that person (often a child), and boosts economies by decreasing sick days and reducing the burden on health systems.
The "terrible trifecta" of trouble getting started, keeping focused, and finishing up projects seems universally relatable. I don't know many people who would say they don't struggle with each of these things. Drawing the line between normal and pathological human experience is very difficult, which is why the DSM-5 criteria are quite specific (and not perfect).
It might be useful to also interview people without ADHD, to differentiate pathological ADHD symptoms from normal, universal human experiences.
The risks of overdiagnosis include:
The step that's missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.
Your talk of "plans" and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don't think the AI crowd has done enough to demonstrate how this could happen.
If you drop a naked human in amongst some wolves I don't think the human will do very well despite its different goals and enormous intellectual advantage. Similarly, I don't see how a fledgling sentient AGI on OpenAI servers can take over ...
I'm a doctor and I think there's a lot of underappreciated value in medicine including:
Clout: Society grants an inappropriate amount of respect to doctors, regardless of whether they're skilled or not, junior or senior. If you have a medical degree people respect you, listen to you, take you more seriously.
Hidden societal knowledge: Not many people get to see as broad a cross-section of society as you see studying medicine. You meet people at their very best and worst, you meet incredibly knowledgeable people and people that never learnt to read, people wh...
I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).
You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?
I don't think that follows. This seems like what the average hunter-gatherer would have thought when made to imagine our modern commercial airlines or microprocessor industries: how could you ach...
"estimate... will not change much in response to new information" seems like the definition of certainty.
It seems very optimistic to think that by doing enough calculations and data analysis we can overcome the butterfly effect. Even your example of the correlation between population and economic growth is difficult to predict (e.g. concentrating wealth by reducing family size might have positive effects on economic growth).
I disagree with the assumption that those +1000/-1000 longterm effects can be known with any certainty, no matter how many resources you spend on studying them.
The world is a chaotic system. Trying to predict where the storm will land as the butterfly flaps its wings is unreasonable. Also, some of the measures you're trying to account for (e.g. the utility of a wild animal's life) are probably not even measurable. The combination of these two difficulties makes me very dubious about the value of trying to do things like factor in long-term mosquito wellbeing to bednet effectiveness calculations, or trying to account for the far-future risks/benefits of population growth when assessing the value of vitamin supplementation.
I think attempting to account for every factor is a dead end when those factors themselves have huge uncertainty around them.
e.g.:
I think when analyses ignore these considerations it's not because they're being lazy; it's an acknowledgment that it's only worth working with factors we have some certainty about, like that vitamin deficiencies and malaria are almost certainly bad.
A couple of problems I have with this analysis:
Looking at preventative health as a cost-effective global health measure is great! Haven't read this report in full but some problems stick out to me at a glance:
1. I don't think hypertension is neglected at all. Some of the world's most commonly prescribed drugs are for hypertension (lisinopril, amlodipine, and metoprolol are nos. 3, 4, and 5 per Google). I also don't think salt reduction is a neglected treatment: almost every person presenting to a doctor with hypertension will be advised to reduce their salt intake.
2. It doesn't seem very effective:
...sodium inta
Extremely cringe article.
The argument that AI will inevitably kill us has never been well-formed and he doesn't propose a good argument for it here. No-one has proposed a reasonable scenario by which immediate, unpreventable AI doom will happen (the protein nanofactories-by-mail idea underestimates the difficulty of simulating quantum effects on protein behaviour).
A human dropped into a den of lions won't immediately become its leader just because the human is more intelligent.
The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.
To me it seems like your premise is wrong. A wellbeing-focused perspective explicitly highlights the fact that the Sentinelese and modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life, and about the grounds for your own beliefs about what is important.
You seem to have a very strong opinion that something like technological progress is intrinsically valuable. Living in a more technically advanced society is "inherently better" and, thus, every...
No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.
I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences -> diamondoid nanobots scenario
Consider entering your post in this competition: https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize
I agree that revealed preferences and survey responses can differ. Unless WELLBYs take account of revealed preferences, they'll fail to predict what people actually want.
"ingestion of said natural sources does not seem to include the side effects from their synthesized forms"
Can you provide a source for this?
I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.
I think shifting focus from tractable, measurable issues like global health and development to issues that - while critical - are impossible to reliably affect, might be really bad.
I don't think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas.
The main reason is that it's unsurprising for a movement to take a while to properly get going. I haven't researched this, but it seems plausible that movements typically start with a period of growing awareness and a growing number of people working on the problem (a period I think we are still in) before achieving really concrete wins. The longtermist movement is a new one with mostly young ...
Thanks for this. It's important to give to rescue and relief efforts when disasters happen in addition to giving to development efforts in the good times so that communities are less vulnerable to disasters.
The information you've provided here is really valuable. Thank you. It will inform how I donate.
Hi Henry, thanks so much for your kindness and support, and we’re glad you’ve found this post valuable.
Recognizing the disagreements with your comment, we would like to ask that this particular forum post not be used as a place to debate the general effectiveness of disaster relief (via votes and/or comments). We would ask those engaging with this post to please be mindful that there may be readers directly affected by the earthquake, and of the sensitivity of the subject, particularly at ...
I don't like this post and I don't think it should pinned to the forum front page.
A few reasons:
The general message of "go and spread this message, this is the way to do it" is too self-assured and unquestioning. It appears cultish, and it's off-putting to have this as the first thing forum visitors see.
The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it's not clear what messages you think should be being spread. The only two I could see are "relate it to Skynet" and "even if AI look
Personally, an argument I would find more compelling is that the OP doesn't answer comments, which lowers the value of discussion and makes it less interesting for a public forum. Also, there is already a newsletter for Cold Takes that people can subscribe to.
These don't seem very compelling to me.
I disagree-voted.
I think pure open dialogue is often good for communities. You will find evidence for this if you look at almost any social movement, the FTX fiasco, and immoral mazes.
Most long pieces of independent research that I see come from Open Philanthropy, and I see far more EAs deferring to Open Phil's opinion on a variety of subjects than LessWrongers. Examples from you would be helpful.
It was originally EAs who used such explicit expected value calculations during GiveWell periods, and I don't think I've ever seen an EV calculation don...
I strong downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.
Can you say why you feel that longtermism suffers from less cluelessness than what you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves' talk, but it doesn't seem to address this. She refers to "reducing the chance of premature human extinction" but doesn't say how.
Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised.
Saying "further research would be good" is easy because it is always true. Doing that research, or waiting for it to be done, is not always practical. I think you are being extremely unreasonable if, before helping someone dying of malaria, you ask for research to be done on:
I have a general disdain for criticizing arguments as ivory-tower thinking without engaging with the content itself. It is an ineffective way of communicating and leaves room for a lot of non-central fallacy. The same ivory-tower thinking you identify was also quite important in promoting moral progress through careful reflection. I don't think considering animals as deserving moral attention is inherently an insulting position. Perhaps a better way of approaching this question would be to actually consider whether or not this trade-off is ...
IPA and J-PAL are underrated. They've had a hand in producing the evidence for many of GiveWell's recommendations. They seem to be significantly better at cause discovery than the Effective Altruism community.
The use of expected value doesn't seem useful here. Your confidence intervals are huge (the 95% confidence interval for pigs' suffering capacity relative to humans is 0.005 to 1.031). Because the implications are so different across that spectrum (varying from basically "make the cages even smaller, who cares" at 0.005 to "I will push my nan down the stairs to save a pig" at 1.031), it really doesn't feel like I can draw any conclusions from this.
Fair enough, Henry. We have limited faith in the models too. But as we said:
Re. 3, I prefer giving now. I think there's a logic to giving later in that money can accrue interest and you can set yourself up to donate more later, but doing good accrues its own interest: helping someone out of poverty today is better than helping them 10 years from now, as it gives them an extra 10 years of better life and 10 years to pay it forward to their community.
A few things that stand out to me that seem dodgy and make me doubt this analysis:
One of the studies you included with the strongest effect (Araya et al. 2003 in Chile, with an effect of 0.9 Cohen's d) uses antidepressants as part of the intervention. Why did you include this? How many other studies included non-psychotherapy interventions?
Some of the studies deal with quite specific groups of people eg. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups to psychother...
Hi Henry,
I addressed the variance in the primacy of psychotherapy in the studies in response to Nick's comment, so I'll respond to your other issues.
Some of the studies deal with quite specific groups of people eg. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups to psychotherapy in the general population seems unreasonable.
I agree this would be a problem if we only had evidence from one quite specific group. But when we have evidence from multip...
Most of these seem intractable and many have lots of people working on them already.
The benefit of bed nets and vitamin A supplementation is that they are proven solutions to neglected problems.
"Subjecting countless animals to a lifetime of suffering" probably describes the life of the average bird in the Amazon (struggling to find food and shelter, avoid predators, and protect its young) or the average fish/shrimp in the ocean.
If you argue that introducing animals to other planets will cause net suffering then it seems to follow that we should eliminate natural ecosystems here on earth
If you argue that introducing animals to other planets will cause net suffering then it seems to follow that we should eliminate natural ecosystems here on earth
Do you intend this as an endorsement, a reductio ad absurdum, or a neutral statement?
I personally strongly suspect that many (most?) wild animals alive on Earth today live lives of net suffering. Even so, there are a bunch of reasons not to try to "eliminate natural ecosystems" right now, including instrumental reliance on those ecosystems, avoidance of drastic & irreversible action before we u...
I'm not sure eliminate is the right way to put it. Reducing net primary productivity (NPP) in legally acceptable ways (e.g. converting lawns into gravel) could end up being cost-effective, but eliminate seems too strong here.
Doing NPP reduction in less acceptable ways could make a lot of people angry, which seems bad for advocacy to reduce wild animal suffering. As Brian Tomasik pointed out somewhere, most of expected future wild animal suffering wouldn't take place on Earth, so getting societal support to prevent terraforming seems more important.
If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.
I think this was a terrible idea.
I think you've overestimated the value of a dedicated conference centre. The important ideas in EA so far haven't come from conversations over tea and scones at conference centres but are either common sense ("do the most good", "the future matters") or have come from dedicated field trials and RCTs.
I also think you've underestimated the damage this will do to the EA brand. The hummus and baguettes signal an earnestness. Abbey signals scam.
I'm confident that this will be remembered as one of CEA's worst decisions.
Strong agree. I think the EA community far overestimates its ability to predictably affect the future, particularly the far future.
Opportunities that development economists have missed?
The general ideas that Hauke suggests in the appendix are things like liberalisation, freeing trade, more open migration. They're ideas that have been fiercely studied and debated before. Organisations like the World Trade Organisation and The World Bank are built around these ideas. The difficulty in testing and implementing these ideas is part of what drove the rise of the randomistas.
I think the "~4 person-years" idea is delusional and arrogant.
This is very inspiring. I think you're making an incredibly positive impact on the world, not just through charity but also by inspiring those around you. Brilliant!
Good portrait of the problem. The solution isn't obvious to me.
I'm very skeptical of the suggestions from the Halstead and Hillebrandt post. It seems unlikely that a "~4 person-year research effort" could discover the key to economic growth in developing countries when the entire field of development economics has been trying to solve this problem for decades.
I agree with the general premise of earning to give through entrepreneurship.
I've never been very convinced by the talent-constraint concept. With the right wage you can hire talent. I think the push from earning to give has been a mistake.
Great!
I think that the allocation of government aid doesn't get enough attention from effective altruists. Government aid budgets are an enormous pool of money and often don't seem to be spent in an evidence-based way. Huge potential for positive change here.
It seems like every now and again someone suggests cardiovascular disease as a potential high-impact cause area on the EA forum. The problem is tractability. It's really hard to convince people to eat better, exercise more and stop smoking. Doctors spend a lot of time trying to do this and billions have been spent on public health campaigns trying to convince people to do this. The medications that treat cholesterol, hypertension, and diabetes are among the most commonly prescribed in the world already.
You've identified a serious problem, but I don't see a cost-effective solution.
Sounds very difficult when deadly drugs like fentanyl, midazolam and propofol can easily be injected through an intravenous line. You can't get an IV line on a baby in utero; I think that's why injection into the heart is done in that case.