All of Henry Howard's Comments + Replies

Sounds very difficult when deadly drugs like fentanyl, midazolam and propofol can easily be injected through an intravenous line. You can't get an IV line into a baby in utero; I think that's why injection into the heart is used in that case.

  1. The massive error bars around how animal well-being/suffering compares to that of humans means it's an unreliable approach to reducing suffering.

  2. Global development is a prerequisite for a lot of animal welfare work. People struggling to survive don't have time to care about the wellbeing of their food.

Aside from the impossibility of quantifying fetal suffering with any certainty, and the social and political intractability of this idea: potassium chloride is often injected directly into the fetal heart, not the veins, so the comparison to lethal injection or animal euthanasia might be wrong.

5
Larks
10d
Would it be possible to analogously execute adults by injection into the heart if this was a more humane method?
2
Ariel Simnegar
10d
Thanks for that info! I didn't know that.

Doesn't pass the sniff test for me. Two concerns:

  1. Every vegetarian I've met or heard of is vegetarian because of either a) animal welfare, b) climate change, or c) cultural tradition. It seems very unlikely that any of these factors could be strongly genetic.
  2. They're determining genetic heritability by comparing identical twin pairs with non-identical twin pairs (i.e. if the identical twins are more similar in their preferences than non-identical twins, they assume that there's more of a genetic component). I imagine that there could be lots of confound
... (read more)

If any of these think tanks had good evidence that their strategy reliably affected economic development, the strategy would quickly be widely adopted and promoted by the thousands of economic development researchers and organisations striving to find such a strategy. Economic development is not a neglected or underfunded field.

2
freedomandutility
2mo
There is high-quality evidence supporting some of these orgs, but for the think-tank types, giving to them would be part of a more hits-based giving approach.  Also, I think many people would say that economic development in LMICs in particular is neglected and underfunded. Stefan Dercon's work (ex-chief economist of Britain's aid agency and development economics professor) challenged my previous assumption that LMIC governments are already optimising for broad-based economic growth.

Development economics is a full-fledged academic field. Very intelligent people have been working very hard on finding ways to improve economic development for many years. It's unlikely that outsiders on an internet forum will see neglected solutions.

Would be ecstatic to be proven wrong. In the meantime this sort of post makes the community look arrogant and out of touch.

Hi, development economist here. None of these organizations are EA organizations.

Very intelligent people have been working very hard on finding ways to improve economic development for many years. It's unlikely that outsiders on an internet forum will see neglected solutions.

This post is a list of projects that very intelligent people have been working very hard on for years that you could fund.

 

The error bars on Rethink Priorities' welfare ranges are huge. They tell us very little, and calculations based on them will tell you just as little.

I think without some narrower error bars to back you up, making a post suggesting "welfare can be created more efficiently via small non-human animals" is probably net negative, because it has the negative impact of contributing to the EA community looking crazy without the positive impact of a well-supported argument.

Hi Henry! While the 90% confidence intervals for the RP welfare ranges are indeed wide, this is because they’re coming from a mixture of several theories/models of welfare. The uncertainty within a given theory/model of welfare is much lower, and you might have more or less credence in any individual model.

Additionally, if we exclude the neuron count model, the welfare ranges from the mixture of all the other models have narrower distributions.

Here’s a document that explains the different theories/models used: https://docs.google.com/document/d/1xUvMKRkEOJ... (read more)

2
Vasco Grilo
9mo
Hi Henry, To be honest, that is a quite funny meme! I have now added the 5th and 95th percentiles. Thanks for the nudge! I think the post is still beneficial, because I am not endorsing taking any specific actions to create welfare via small non-human animals. However, I think you have a good point, and I agree the post could plausibly be harmful (although my best guess is that it is beneficial!). I would only disagree with views strongly asserting that the post is harmful. PS: I upvoted your comment.

I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens, or stopping climate change, should we all just switch to working on AI so it can solve the problems for us?
I don't think so, because:

a. I think it's important to hedge bets and try out a range of things in case AI is many decades away or it doesn't work out

and

b. having lots more people working on AI won't necessarily make it come faster or better (already lots of people working on it).

This seems to rest heavily on Rethink Priorities' Welfare Estimates. While their expected value for the "welfare range" of chickens is 0.332 that of humans, their 90% confidence for that number spans 0.002 to 0.869, which is so wide that we can't make much use of it.
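To make the width of that interval concrete, here's a rough sketch (not RP's actual model; the welfare-range numbers are the ones quoted above, while the per-dollar cost-effectiveness figures are purely hypothetical) of how the conclusion flips across the confidence interval:

```python
# Illustrative only: propagate the RP chicken welfare-range point
# estimate and 90% CI endpoints (quoted above) through a toy
# cost-effectiveness comparison. The per-dollar figures are invented.

welfare_range = {"5th pct": 0.002, "expected": 0.332, "95th pct": 0.869}

chickens_helped_per_dollar = 10.0  # hypothetical
humans_helped_per_dollar = 1.0     # hypothetical baseline

for label, w in welfare_range.items():
    # Welfare-adjusted impact of the chicken intervention,
    # relative to the human baseline.
    ratio = chickens_helped_per_dollar * w / humans_helped_per_dollar
    print(f"{label}: {ratio:.2f}x the human baseline")
```

Under these toy numbers the implied ratio runs from 0.02x to 8.69x the human baseline, so the same intervention looks anywhere from far worse to far better than helping humans, depending on where in the interval the true value sits.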

There seems to be a tendency in EA to use expected values when just admitting "I have no idea" would be more honest and truthful.

I mean, to be fair to OP (edit: I meant original poster), they make their uncertainty, and the conditionals it entails, really clear throughout. I don't think it's fair to say they're not being honest and truthful.

9
Vasco Grilo
1y
Hi Henry, Thanks for engaging! Note that, using RP's 5th percentile welfare range instead of the median one, corporate campaigns for broiler welfare are still 10.3 (= 1.71*10^3*0.002/0.332) times as effective. However, there is also large uncertainty in how bad the lives of broilers and humans are relative to their median welfare ranges. This means the true 5th percentile will tend to be lower than the 10.3 I just calculated. I guess the uncertainty stemming from the median welfare range is similar to that from the mean experience relative to the median welfare range, so I think there is less than a 10 % chance that corporate campaigns for broiler welfare are less effective than the lowest cost to save a life among GW's top charities. I suppose RP will look into building on their moral weight project. I am also concerned about acting as if expected values are resilient, i.e. assuming they will not easily change in the future in response to new information. On the other hand, large uncertainty in the welfare range of chickens does not necessarily imply the median welfare range lacks resilience. My understanding is that RP's research tried to integrate most of the available evidence, which means narrowing the interval of possible values may be difficult.

Most suffering in the world happens in farms.

 

You state this like it's a fact but it's heavily dependent on how you compare animal and human suffering. I don't think this is a given. Formal attempts to compare animal and human suffering like Rethink Priorities' Animal Welfare Estimates have enormous error bars.

Worth being cautious in a world where ~10% of people live on <$2 a day.

It kills ~350,000 people a year. The fatality rate isn't as important as the total deaths.

"Only prolongs existence"


Preventing malaria stops people from suffering from the sickness, prevents grief from the death of that person (often a child), and boosts economies by decreasing sick days and reducing the burden on health systems.

2
Imma
1y
Remember that most malaria cases are not fatal.

The "terrible trifecta" of trouble getting started, keeping focused, and finishing up projects seems universally relatable. I don't know many people who would say they don't have trouble with each of these things. Drawing this line between normal and pathological human experiences is very difficult, which is why the DSM-5 criteria are quite specific (and not perfect).

It might be useful to also interview people without ADHD, to differentiate pathological ADHD symptoms from normal, universal human experiences.

The risks of overdiagnosis include:

  • People can devel
... (read more)
9
lynettebye
1y
I wish we didn't need to treat ADHD like a disease, and instead people could just say "yes, I struggle more along these dimensions than the average person." Unfortunately, the medical community treats ADHD as a disease and has drawn arbitrary, frustratingly vague guidelines around it. If someone wants to access medication, they need to accept that label. My best understanding is that ADHD symptoms are roughly normally distributed in the population. I would be thrilled if the medical community followed an informed consent model where patients could decide for themselves if they needed medication, following proper advisement of the risks and costs. Barring that, it would be great if they established clearer thresholds for what counts as impairment significant enough to be worth medicating, instead of the current system. I find the DSM-5 criteria aggravatingly vague and non-specific. Like "Six or more symptoms of inattention for children up to age 16 years, or five or more for adolescents age 17 years and older and adults; symptoms of inattention have been present for at least 6 months." I.e. adults who say "often" or "very often" more than 5 times on a questionnaire get diagnosed with ADHD. How often is "often"? You know, often!
4
David Johnston
1y
I’d love to hear from people who don’t “have adhd”. I have a diagnosis myself but I have trouble believing I’m all that unusual. I tried medication for a while, but I didn’t find it that helpful with regard to the bottom line outcome of getting things done, and I felt uncomfortable with the idea of taking stimulants regularly for many years. I’d certainly benefit from being more able to finish projects, though!
9
Amber Dawn
1y
I think it is worth mentioning that overdiagnosis/incorrect self-diagnosis can have costs in the way you describe, but at the same time, when you read the stories, I think there is a difference between that and the general human condition. Like, based on the stories here I don't think I have adhd: I have trouble getting work done sometimes but my barriers seem very different to this. 

The step that's missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.

Your talk of "plans" and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don't think the AI crowd has done enough to demonstrate how this could happen.

If you drop a naked human in amongst some wolves I don't think the human will do very well despite its different goals and enormous intellectual advantage. Similarly, I don't see how a fledgling sentient AGI on OpenAI servers can take over ... (read more)

1
Ian Turner
1y
Is your question basically how an AGI would gain power in the beginning in order to get to a point where it could execute on a plan to annihilate humans? I would argue that:

  • Capitalists would quite readily give the AGI all the power it wants, in order to stay competitive and drive profits.
  • Some number of people would deliberately help the AGI gain power just to "see what happens" or specifically to hurt humanity. Think ChaosGPT, or consider the story of David Charles Hahn.
  • Some number of lonely, depressed, or desperate people could be persuaded over social media to carry out actions in the real world.

Considering these channels, I'd say that a sufficiently intelligent AGI with as much access to the real world as ChatGPT has now would have all the power needed to increase its power to the point of being able to annihilate humans.

I'm a doctor and I think there's a lot of underappreciated value in medicine including:

Clout: Society grants an inappropriate amount of respect to doctors, regardless of whether they're skilled or not, junior or senior. If you have a medical degree people respect you, listen to you, take you more seriously.

Hidden societal knowledge: Not many people get to see as broad a cross-section of society as you see studying medicine. You meet people at their very best and worst, you meet incredibly knowledgeable people and people that never learnt to read, people wh... (read more)

1
jackchang110
1y
Hello Ben (what if you could give 20% of your income? Would that be twice as impactful?) 1. Thanks for answering; there are fewer people in EA working in the biology field. 2. ETG is really something we can consider. According to Toby Ord's podcast on 80,000 Hours, talent gaps matter more than funding gaps: most EA companies would rather get a great worker than a $100,000 donation (but areas like animal welfare may be different, as their funding gaps are bigger). You should also consider careers like biology professor; if you're a good one, you're effectively winning research funds for important EA topics like malaria research, too. 3. Yes, of course medicine gives you social skills, and medical knowledge can be used in medical research. I don't know medicine, but I doubt: (i) If you're working in a non-medical field (such as animal welfare or lab-grown meat), do you need that detailed clinical medical knowledge? (ii) Wouldn't working as a researcher (bioinformatics engineer) be better for building your career capital than being a doctor? The two areas require different experiences. 4. As per my article, is medicine a narrower subject? CS is more useful than biology, because every company needs CS employees, but maybe not biology. And medicine is only human biology. You don't need medicine to work on AI risks or climate change. Sorry if my comment showed disrespect for anyone who is an expert in medicine.

I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).

You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?

I don't think that follows. This seems like what the average hunter-gatherer would have thought when made to imagine our modern commercial airlines or microprocessor industries: how could you ach... (read more)

3
alexherwix
1y
I think the point is not that it is inconceivable that progress can continue with humans still alive, but rather the game-theoretic dilemma that whatever we humans want to do is unlikely to be exactly what some super-powerful advanced AI would want to do. And because the advanced AI does not need us or depend on us, we simply lose and become ingredients for whatever that advanced AI is up to. Your example with humanity fails because humans have always been, and continue to be, a social species dependent on each other. An unaligned advanced AI would not be. A more appropriate example would be to look at the relationship between humans and insects. I don't know if you've noticed, but a lot of them are dying out right now because we simply don't care about or depend on them. The point with advanced AI is that, because it is potentially even more removed from us than we are from insects, and also much more capable of achieving its goals, this whole competitive process we all engage in is going to become much more competitive and faster when advanced AIs start playing the game. I don't want to be the bearer of bad news, but I think it is not that easy to reject this analysis... it seems pretty simple and solid. I would love to know if there is some flaw in the reasoning. It would help me sleep better at night!

"estimate... will not change much in response to new information" seems like the definition of certainty.

It seems very optimistic to think that by doing enough calculations and data analysis we can overcome the butterfly effect. Even your example of the correlation between population and economic growth is difficult to predict (e.g. concentrating wealth by reducing family size might have positive effects on economic growth).

I disagree with the assumption that those +1000/-1000 longterm effects can be known with any certainty, no matter how many resources you spend on studying them.

The world is a chaotic system. Trying to predict where the storm will land as the butterfly flaps its wings is unreasonable. Also, some of the measures you're trying to account for (e.g. the utility of a wild animal's life) are probably not even measurable. The combination of these two difficulties makes me very dubious about the value of trying to do things like factor in long-term mosquito wellbeing to bednet effectiveness calculations, or trying to account for the far-future risks/benefits of population growth when assessing the value of vitamin supplementation.

5
Vasco Grilo
1y
Thanks for following up! I agree there will always be lots of uncertainty, even after spending tons of resources investigating the longterm effects. However, we do not need to be certain about the longterm effects. We only have to study them enough to ensure our best estimate of their expected value is resilient, i.e. that it will not change much in response to new information. If people at Open Philanthropy and Rethink Priorities spent 10 kh researching the animal and longterm effects of GiveWell's top charities, are you confident their best estimate for the expected animal and longterm effects would be negligible in comparison with the expected nearterm human effects? I am quite open to this possibility, but I do not understand how it is possible to be confident either way, given very little research has been done so far on animal and longterm effects. A butterfly flapping its wings can cause a storm, but it can just as well prevent a storm. These are cases of simple cluelessness in which there is evidential symmetry, so they are not problematic. The animal and longterm effects of saving lives are not symmetric in that way. For example, we can predict that humans work and eat, so increasing population will tend to grow the economy and food production. For intuitions that measuring wild animal welfare is not impossible, you can check research from Wild Animal Initiative (one of ACE's top charities, so they are presumably doing something valuable), and Welfare Footprint Project's research on assessing wild animal welfare.

I think attempting to account for every factor is a dead end when those factors themselves have huge uncertainty around them.

e.g.:

  • There's huge uncertainty around whether increasing human population is inherently good or bad.
  • There's huge uncertainty around when a wild animal's life is worth living.
  • There's huge uncertainty about how any given intervention now will positively or negatively affect the far future. 

I think when analyses ignore these considerations it's not because they're being lazy; it's simply an acknowledgment that it's only worth working with factors we have some certainty about, like that vitamin deficiencies and malaria are almost certainly bad.

3
Vasco Grilo
1y
Thanks for engaging, Henry! Let me try to illustrate how I think about this with an example. Imagine the following:

  • Nearterm effects on humans are equal to 1 in expectation.
  • This estimate is very resilient, i.e. it will not change much in response to new evidence.
  • Other effects (on animals and in the longterm) are -1 k with 50 % likelihood, and 1 k with 50 % likelihood, so they are equal to 0 in expectation.
  • These estimates are not resilient, and, in response to new evidence, there is a 50 % chance the other effects will be negative in expectation, and a 50 % chance they will be positive in expectation.
  • However, it is very unlikely that the other effects will in expectation be between -1 and 1, i.e. they will most likely dominate the expected nearterm effects.

What do you think is a better description of our situation?

  • The expected overall effect is 1 (= 1 + 0) in expectation. This is positive, so the intervention is robustly good.
  • The overall effect is -999 (= 1 - 1 k) with 50 % likelihood, and 1,001 (= 1 + 1 k) with 50 % likelihood. This means the expected value is positive. However, given the lack of resilience of the other effects, we have little idea whether it will continue to be positive, or turn out negative in response to new evidence. So we should not act as if the intervention is robustly good. Instead, it would be good to investigate the other effects further, especially because we have not even tried very hard to do that in the past.

A couple of problems I have with this analysis:

  1. Excluding everything except the longtermist donations seems irrational. There is a lot of uncertainty around whether longtermist goals are even tractable, let alone whether the current longtermist charities are making or will make any useful progress (your link to 80,000 Hours' 18 Most Pressing Problems is broken, but their pressing areas seem to include AI safety, preventing nuclear war, preventing great power conflict, improving governance, each of which have huge question marks around them when it comes to
... (read more)
2
Vasco Grilo
1y
Hi Henry, Nice points! Fixed, thanks. I actually think the uncertainty of longtermist interventions is much larger than that of neartermist ones, in the sense that the difference between a very good and very bad outcome is larger for longtermist interventions. However, given this large uncertainty, uncovering crucial considerations is very much at the forefront of longtermist analyses, and there is often a focus on trying to ensure the expected value is positive. So I believe the uncertainty around the sign of the expected value of longtermist interventions is lower. Good point. I have added a point about this to the last bullet of the summary:

Looking at preventative health as a cost-effective global health measure is great! Haven't read this report in full but some problems stick out to me at a glance:

1. I don't think hypertension is neglected at all. Some of the world's most commonly prescribed drugs are for hypertension (lisinopril, amlodipine, and metoprolol are nos. 3, 4, and 5 per Google). I also don't think salt reduction is a neglected treatment: almost every person presenting to a doctor with hypertension will be advised to reduce their salt intake.

2. It doesn't seem very effective:

sodium inta

... (read more)
1
Joel Tan
1y
Hi Henry, (1) It's true that hypertension is less neglected in the rich world, but: (a) Even in the rich world we incur a cost from hypertension even needing to be treated in the first place (i.e. a health burden, given that there's always a time gap between identification and effective treatment, plus the economic burden of the drugs and general treatment support). (b) Also, the blunt fact of the matter is that developing countries are poor. This has two upshots - one being that they lack the basic infrastructure to deliver drugs effectively (e.g. one expert kept emphasizing how people in Africa have to walk great distances and wait a long while to get pills); the other is that EA funding would basically have to fund this as a permanent thing (like malaria nets), which is counterfactually extremely costly. (2) The falls in BP have significant impact at the population level! Hence the CEA pencilling out to suggest a very cost-effective intervention. It's true of a lot of potential causes/interventions, to be fair - whereby we reduce some small risk by 0.0X% but if you have 10^Y people it can still be cost effective at scale. (3) Basically citing from the report, "a meta-analysis suggests that food can be significantly reduced in sodium without significantly affecting consumer acceptability", and as the GCAH factsheet says, "gradual (over a few months) but substantial reductions in sodium of processed foods can be made without altering the perceived taste of food", which makes sense given that our taste buds adjust to salt (and sugar) levels and get more or less sensitive accordingly. That said, I fundamentally agree that it's going to be politically difficult, far more so than other regulatory stuff like mandatory food reformulation - we see something similar for climate change, where people hate carbon taxes but are fine with quotas even though they practically end up costing consumers the same thing. Overall, this goes into the assessment that sod
1
jimrandomh
1y
It's much worse than that; in hotter climates, salt isn't a luxury, it's basic sustenance. Gandhi wasn't being figurative when he said "Next to air and water, salt is perhaps the greatest necessity of life."

Extremely cringe article.

The argument that AI will inevitably kill us has never been well-formed, and he doesn't propose a good argument for it here. No one has proposed a reasonable scenario by which immediate, unpreventable AI doom will happen (the protein nanofactories-by-mail idea underestimates the difficulty of simulating quantum effects on protein behaviour).

A human dropped into a den of lions won't immediately become its leader just because the human is more intelligent.

The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.

To me it seems like you have a wrong premise. A wellbeing focused perspective is explicitly highlighting the fact that Sentinelese and the modern Londoners may have similar levels of wellbeing. That's the point! This perspective aims to get you thinking about what is really valuable in life and what the grounds for your own beliefs about what is important are. 

You seem to have a very strong opinion that something like technological progress is intrinsically valuable. Living in a more technically advanced society is "inherently better" and, thus, every... (read more)

No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.

8
Charlie_Guthmann
1y
Depending on what you mean by mistake, I don't think those implications are absurd at all. The agricultural revolution wasn't a decision humanity made; it's game theory. More resources, more babies, and your ideas survive. I'm not even saying that modernization was a mistake - and by the way, we could be less happy and I would still not necessarily say it was a mistake (again, depending on what you mean by mistake). It's just that I think you are anthropomorphizing cultural natural selection as a well-thought-out decision with the intention of maximizing current utility.

I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences -> diamondoid nanobots scenario

Consider entering your post in this competition: https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize

I agree that revealed preferences and survey responses can differ. Unless WELLBYs take account of revealed preferences, they'll fail to predict what people actually want.

"ingestion of said natural sources does not seem to include the side effects from their synthesized forms"

Can you provide a source for this?

I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.

I think shifting focus from tractable, measurable issues like global health and development to issues that - while critical - are impossible to reliably affect, might be really bad.

I don't think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas. 

The main reason is that it might be unsurprising for a movement to take a while to properly get going. I haven't researched this, but it seems unsurprising to me that movements may typically start with a period of increasing awareness / the number of people working in the movement (a period I think we are currently still in), before achieving really concrete wins. The longtermist movement is a new one with mostly young ... (read more)

Thanks for this. It's important to give to rescue and relief efforts when disasters happen, in addition to giving to development efforts in the good times, so that communities are less vulnerable to disasters.

The information you've provided here is really valuable. Thank you. It will inform how I donate.

Hi Henry, thanks so much for your kindness and support, and we’re glad you’ve found this post valuable.

Recognizing the disagreements with your comment, we would, importantly, like to express that we would appreciate it if this particular forum post were not used as a place to argue generally for or against the effectiveness of disaster relief (via votes and/or comments). We would like to ask those engaging with this post to please be mindful that there may be readers directly affected by the earthquake, and of the sensitivity of the subject, particularly at ... (read more)

4[anonymous]1y
Hard for me to see why this was widely disagreed with when I read it.

I don't like this post and I don't think it should pinned to the forum front page.

A few reasons:

  1. The general message of: "go and spread this message, this is the way to do it" is too self-assured, and unquestioning. It appears cultish. It's off-putting to have this as the first thing that forum visitors will see.

  2. The thesis of the post is that a useful thing for everyone to do is to spread a message about AI safety, but it's not clear what messages you think should be spread. The only two I could see are "relate it to Skynet" and "even if AI look

... (read more)
6
Holden Karnofsky
1y
Just noting that many of the “this concept is properly explained elsewhere” links are also accompanied by expandable boxes that you can click to expand for the gist. I do think that understanding where I’m coming from in this piece requires a bunch of background, but I’ve tried to make it as easy on readers as I could, e.g. explaining each concept in brief and providing a link if the brief explanation isn’t clear enough or doesn’t address particular objections.

Personally, an argument I would find more compelling is to note that the OP doesn't answer comments, making the value of discussion lower and the post less interesting for a public forum. Also, there is already a newsletter for Cold Takes that people can subscribe to.

These don't seem very compelling to me.

  1. This argument proves too much. The same could be said of "go and donate your money, this (list of charities we think are most effective) is the way to do it".
  2. My takeaway was that messages which could be spread include: "we should worry about conflict between misaligned AI and all humans", "AIs could behave deceptively, so evidence of safety might be misleading", "AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems", "alignment research is
... (read more)
4
Lauren Maria
1y
I agree. I’m curious what the process is for deciding what gets pinned to the front page. Does anyone know?
9
freedomandutility
1y
I like the framing "bad ideas are being obscured in a tower of readings that gatekeep the critics away" and I think EA is guilty of this sometimes in other areas too.
2
Yellow (Daryl)
1y
Excellent.
3
[anonymous]
1y
Excellent.
2
ChanaMessinger
1y
I like that you were open about your gut feeling and thinking that something is cringe. I generally don't think that's a good reason to do or not do things, but it might track important things, and you fleshed yours out.
9
Un Wobbly Panda
1y
I think some statements or ideas here might be overly divisive or a little simplistic. For counterpoints, if you look at respected communities of, say, medical professionals (surgeons), as well as top athletes/military/lawyers, they effectively do all of the things you criticize. All of these communities have complex systems of beliefs, use jargon, maintain in-groups, and have imperfect levels of intellectual honesty. Often, decisions and judgements are made by senior leaders opaquely, so that a formal "expected value calculation" would be transparent in comparison. Despite all this, these groups are respected, trusted and often very effective in their domains. Similarly, in EA, in order to make progress, authority needs to be used. EA can't avoid internal authority, as proven by EA's history and other movements. Instead, we have to do this with intention: there need to be some senior people, who are correct and virtuous, who need to be trusted. The problem is that LessWrong has in the past monopolized implicit positions of authority, using a particular style of discourse and rhetoric which masks what it is doing. As it does this, the fact that it is a distinct entity, actively seeking resources from EA, is mostly ignored. Getting to the object level on LessWrong: what could be great is a true focus on virtue/"rationality"/self-improvement. In theory, LessWrong and rationality are fantastic, and there should be more of this. The problem is that without true ability, bad implementation and co-option occur. For example, one person, with narcissistic personality disorder, overbearingly dominated discourse on LessWrong and appropriated EA identity elsewhere. Only recently, when it has become egregiously clear that this person has negative value, has the community done much to counter him. This is both a moral mistake and a strong smell that something is wrong. The fact that this person and their (well-telegraphed) issues persisted casts doubt on the LessW

I disagree-voted.

I think pure open dialogue is often good for communities. You will find evidence for this if you look at almost any social movement, the FTX fiasco, and immoral mazes.

Most long pieces of independent research that I see are made by open-phil, and I see far more EAs deferring to open-phil's opinion on a variety of subjects than Lesswrongers. Examples that come to mind from you would be helpful.

It was originally EAs who used such explicit expected value calculations during Givewell periods, and I don't think I've ever seen an EV calculation don... (read more)

I strong downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.

Can you say why you feel that longtermism suffers from less cluelessness than what you argue the GiveWell charities do? The main limitation of longtermism is that affecting the future is riddled with cluelessness.
You mention Hilary Greaves' talk, but it doesn't seem to address this. She refers to "reducing the chance of premature human extinction" but doesn't say how.

2
Vasco Grilo
1y
Hi Henry, Thanks for engaging! Assuming most of the expected value of the interventions of GiveWell's top charities is in the future (due to effects on the population size), we are clueless about their total cost-effectiveness. This limitation also applies to longtermist interventions. However, if the goal is maximising longterm cost-effectiveness (because that is where most of the value is), explicitly focussing on the longterm effects will tend to be better than explicitly focussing on nearterm effects. This is informed by the heuristic that it is easier to achieve something when we are trying to achieve it. So longtermist interventions will tend to be more effective. It would also be surprising and suspicious convergence if the best interventions to save lives in the present were also the best from a longtermist perspective. The post from Alex HT I linked in the Summary has more details.

Putting a hold on helping people in poverty because of concern about insect rights is insulting to people who live in poverty and epitomises ivory-tower thinking that gets the Effective Altruism community so heavily criticised.

Saying "further research would be good" is easy because it is always true. Doing that research or waiting for it to be done is not always practical. I think you are being extremely unreasonable if, before helping someone die of malaria you ask for research to be done on:

  • the long term impacts of bednets on population growth
  • the effects
... (read more)
3
Vasco Grilo
1y
Thanks for commenting, Henry. I do feel you are pointing to something valuable. FWIW, I am confused about the implications of my analysis too. Somewhat relatedly, I liked this post from Michelle Hutchinson.

I have a general disdain for criticizing arguments as ivory-tower thinking without engaging with the content itself. I think it is an ineffective way of communicating which leaves room for quite a lot of non-central fallacies. The same ivory-tower thinking you identified was also quite important in promoting moral progress through careful reflection. I don't think considering animals as deserving moral attention is naturally an insulting position. Perhaps a better way of approaching this question would be to actually consider whether or not this trade-off is ... (read more)

IPA and J-PAL are underrated. They've had a hand in producing the evidence for many of GiveWell's recommendations. They seem to be significantly better at cause discovery than the Effective Altruism community.

The use of expected value doesn't seem useful here. Your confidence intervals are huge (the 95% confidence interval for pig suffering capacity relative to humans is between 0.005 and 1.031). Because the implications are so different across that spectrum (varying from basically "make the cages even smaller, who cares" at 0.005 to "I will push my nan down the stairs to save a pig" at 1.031) it really doesn't feel like I can draw any conclusions from this.

Fair enough, Henry. We have limited faith in the models too. But as we said:

  1. The numbers are placeholders.
  2. Our actual views are summarized in the key takeaways and again toward the end (e.g., within an order of magnitude of humans for vertebrates--0.1 or above--which certainly does make a practical difference).
  3. This work builds on everything else we've done and is not, all on its own, the complete case for relatively animal-friendly welfare range estimates.

Re. 3, I prefer giving now. I think there's a logic to giving later in that money can accrue interest and you can set yourself up to donate more later, but doing good accrues its own interest: helping someone out of poverty today is better than helping them 10 years from now, as it gives them an extra 10 years of better life and 10 years to pay it forward to their community.

A few things that stand out to me that seem dodgy and make me doubt this analysis:

One of the studies you included with the strongest effect (Araya et al. 2003 in Chile, with an effect of 0.9 Cohen's d) uses antidepressants as part of the intervention. Why did you include this? How many other studies included non-psychotherapy interventions?

Some of the studies deal with quite specific groups of people eg. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups  to psychother... (read more)

Hi Henry, 

I addressed the variance in the primacy of psychotherapy in the studies in response to Nick's comment, so I'll respond to your other issues. 

Some of the studies deal with quite specific groups of people eg. survivors of violence, pregnant women, HIV-affected women with young children. Generalising from psychotherapy's effects in these groups  to psychotherapy in the general population seems unreasonable. 

I agree this would be a problem if we only had evidence from one quite specific group. But when we have evidence from multip... (read more)

6
Barry Grimes
1y
Hi Henry. Thanks for your feedback! I'll let Joel respond to the substantive comments but just wanted to note that I've changed the "Appendix D" references to "Appendix C". Thanks very much for letting us know about that.  I'm not sure why Appendix B has hyperlinks for some studies but not for others. I'll check with Joel about that and add links to all the papers as soon as I can. In future, I plan to convert some of our data tables into embedded AirTables so that readers can reorder by different columns if they wish.

Most of these seem intractable and many have lots of people working on them already.

The benefit of bed nets and vitamin A supplementation is that they are proven solutions to neglected problems.

3
freedomandutility
1y
Agree that it would be difficult to generate comparably high certainty evidence for most of these cause areas. However, I think interventions in these areas could still have high expected value and perform well on the ITN framework so could still be worth pursuing, the way pandemic preparedness, AI safety and broader approaches and political approaches to international development and farmed animal welfare are pursued by EA at the moment. Interested to hear which of these causes you feel are not neglected at the moment. I’d say you’re probably right for 19, 20 and 25.

"Subjecting countless animals to a lifetime of suffering" probably describe the life of the average bird in the amazon (struggling to find food, shelter, avoid predators, protect its children) or the average fish/shrimp in the ocean.

If you argue that introducing animals to other planets will cause net suffering, then it seems to follow that we should eliminate natural ecosystems here on Earth.

1
Darren_Tindall
1y
Animals already exist on Earth independently of humans. The difference with introducing life on Mars is that humans would be taking the decision and expending the resources to do so.

If you argue that introducing animals to other planets will cause net suffering, then it seems to follow that we should eliminate natural ecosystems here on Earth.

Do you intend this as an endorsement, a reductio ad absurdum, or a neutral statement?

I personally strongly suspect that many (most?) wild animals alive on Earth today live lives of net suffering. Even so, there are a bunch of reasons not to try to "eliminate natural ecosystems" right now, including instrumental reliance on those ecosystems, avoidance of drastic & irreversible action before we u... (read more)

1
Vgvt
1y
That would have negative consequences for the people who already exist today and rely on Earth's biosphere; the same cannot be said for these frivolous space colonization ventures.

I'm not sure eliminate is the right way to put it. Reducing net primary productivity (NPP) in legally acceptable ways (e.g. converting lawns into gravel) could end up being cost-effective, but eliminate seems too strong here.

Doing NPP reduction in less acceptable ways could make a lot of people angry, which seems bad for advocacy to reduce wild animal suffering. As Brian Tomasik pointed out somewhere, most of expected future wild animal suffering wouldn't take place on Earth, so getting societal support to prevent terraforming seems more important.

If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.

I think this was a terrible idea.

I think you've overestimated the value of a dedicated conference centre. The important ideas in EA so far haven't come from conversations over tea and scones at conference centres but are either common sense ("do the most good", "the future matters") or have come from dedicated field trials and RCTs.

I also think you've underestimated the damage this will do to the EA brand. The hummus and baguettes signal earnestness. The abbey signals scam.

I'm confident that this will be remembered as one of CEA's worst decisions.

9
Danny Donabedi
1y
It's sad you're getting downvoted. A manor and 25 acres of nothingness add nearly nothing to EA when some other space, for instance the hall of a large parish or church, even an abandoned one, could have been rented out or purchased on an as-needed basis instead, for a fraction of the cost, when conferences or workshops are needed. Imagine the extent of scrutiny the manor's purchase would have faced in early EA. It wouldn't be pretty.

Strong agree. I think the EA community far overestimates its ability to predictably affect the future, particularly the far future.

Opportunities that development economists have missed?

The general ideas that Hauke suggests in the appendix are things like liberalisation, freeing trade, more open migration. They're ideas that have been fiercely studied and debated before. Organisations like the World Trade Organisation and The World Bank are built around these ideas. The difficulty in testing and implementing these ideas is part of what drove the rise of the randomistas.

I think the "~4 person-years" idea is delusional and arrogant.

1
Mo Putera
1y
Donation opportunities, yes. I'm not sure if donation opportunities in particular are something development economists look for; I'm not familiar with the literature. I broadly agree with the substance of your comment; I just admittedly find the tone off-puttingly abrasive ("delusional and arrogant" doesn't seem charitable), so I'll respectfully bow out of this exchange.
3
DavidNash
1y
I think the tricky part is finding where smaller donors can donate, similar to GiveWell. Those organisations have suggestions for large sums of money but there is a gap for advice for individuals that want to give to global development and are okay with it not just being RCT evidence.

This is very inspiring. I think you're making an incredibly positive impact on the world, not just through charity but also by inspiring those around you. Brilliant!

Really cool idea. I'll be watching eagerly.

Good portrait of the problem. The solution isn't obvious to me.

I'm very skeptical of the suggestions from the Halstead and Hillebrandt post. It seems unlikely that a "~4 person-year research effort" could discover the key to economic growth in developing countries when the entire field of development economics has been trying to solve this problem for decades.

3
Mo Putera
1y
Halstead and Hillebrandt didn't claim that a 4-person year research effort could discover the key to economic growth. Their claim is simply about finding good donation opportunities: My sense is such an effort might start from Hillebrandt's appendices, in particular appendix 4. The output of such an effort might look like one of Founders Pledge's reports (example; Halstead is a coauthor). 

I agree with the general premise of earning to give through entrepreneurship.

I've never been very convinced by the talent-constraint concept. With the right wage you can hire talent. I think the push away from earning to give has been a mistake.

Great!

I think that the allocation of government aid doesn't get enough attention from effective altruists. Government aid budgets are an enormous pool of money and often don't seem to be spent in an evidence-based way. Huge potential for positive change here.

It seems like every now and again someone suggests cardiovascular disease as a potential high-impact cause area on the EA forum. The problem is tractability. It's really hard to convince people to eat better, exercise more and stop smoking. Doctors spend a lot of time trying to do this and billions have been spent on public health campaigns trying to convince people to do this. The medications that treat cholesterol, hypertension, and diabetes are among the most commonly prescribed in the world already.

You've identified a serious problem, but I don't see a cost-effective solution.

3
Eli_
1y
I agree that lifestyle changes are hard to make, but I would like to push back in two ways:

  • Currently, it isn't standard practice to measure apoB and base treatment on that statistic. Doing so would result in doctors prescribing medication to people who are at risk but are currently unaware of it.
  • Furthermore, there is the option of early treatment with medication, which currently isn't deployed. As I described in the article, you can have a lifetime risk of 39-70% while also having a 10-year risk <10%, which means you won't get treatment, while it seems plausible that these people would benefit a lot from it.

To summarize it bluntly, it seems that the world would benefit from prescribing more cholesterol-lowering medication. Advocacy for doing this would be the cost-effective solution. Having said that, I didn't start writing this article with EA in mind, so I haven't done an intensive cost/benefit analysis.