Henry Howard


I'm begging you to just get a normal job and give to effective charities.

Doctor in Australia giving 10% forever



The error bars on Rethink Priorities' welfare ranges are huge. They tell us very little, and calculations based on them will tell you very little.

I think that without some narrower error bars to back you up, a post suggesting "welfare can be created more efficiently via small non-human animals" is probably net negative: it contributes to the EA community looking crazy without the offsetting positive impact of a well-supported argument.

I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens or stopping climate change, should we all just switch to working on AI so it can solve those problems for us?
I don't think so, because:

a. I think it's important to hedge bets and try out a range of things in case AI is many decades away or it doesn't work out


b. having lots more people working on AI won't necessarily make it come faster or better (there are already lots of people working on it).

This seems to rest heavily on Rethink Priorities' Welfare Estimates. While their expected value for the "welfare range" of chickens is 0.332 that of humans, their 90% confidence interval for that number spans 0.002 to 0.869, which is so wide that we can't make much use of it.
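To give a sense of just how wide that interval is, here's a minimal sketch. The 0.002–0.869 bounds and the 0.332 point estimate are from Rethink Priorities as quoted above; the lognormal shape I fit to those bounds is my own assumption for illustration, not theirs.

```python
import math
import random

LO, HI = 0.002, 0.869  # reported 90% CI bounds for chicken welfare range

# Fit a lognormal whose 5th/95th percentiles land on LO and HI:
# if ln(x) ~ Normal(mu, sigma), the 5th percentile is exp(mu - 1.645*sigma)
# and the 95th is exp(mu + 1.645*sigma).
mu = (math.log(LO) + math.log(HI)) / 2
sigma = (math.log(HI) - math.log(LO)) / (2 * 1.645)

random.seed(0)
samples = sorted(math.exp(random.gauss(mu, sigma)) for _ in range(100_000))

p5 = samples[len(samples) * 5 // 100]
p95 = samples[len(samples) * 95 // 100]
print(f"5th pct ~{p5:.3f}, 95th pct ~{p95:.3f}, spread ~{p95 / p5:.0f}x")
```

The 95th percentile is over 400 times the 5th, so any cost-effectiveness comparison built on this number inherits a spread of more than two orders of magnitude: enough to flip the conclusion either way depending on where in the interval the truth sits.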

There seems to be a tendency in EA to reach for expected values when just admitting "I have no idea" would be more honest.

Most suffering in the world happens in farms.


You state this like it's a fact but it's heavily dependent on how you compare animal and human suffering. I don't think this is a given. Formal attempts to compare animal and human suffering like Rethink Priorities' Animal Welfare Estimates have enormous error bars.

Worth being cautious in a world where ~10% of people live on <$2 a day.

It kills ~350,000 people a year. The fatality rate isn't as important as the total deaths.

"Only prolongs existence"

Preventing malaria stops people from suffering from the sickness, prevents grief from the death of that person (often a child), and boosts economies by decreasing sick days and reducing the burden on health systems.

The "terrible trifecta" of trouble getting started, keeping focused, and finishing up projects seems universally relatable. I don't know many people who would say they don't struggle with each of these things. Drawing the line between normal and pathological human experience is very difficult, which is why the DSM-5 criteria are quite specific (and not perfect).

It might be useful to also interview people without ADHD, to differentiate pathological ADHD symptoms from normal, universal human experiences.

The risks of overdiagnosis include:

  • People can develop unhealthy cognitive patterns around seeing themselves as having a "disease" when they're actually just struggling with the standard human condition
  • They might receive harmful interventions that they don't need
  • It adds unnecessary burden to health systems.

The step that's missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.

Your talk of "plans" and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don't think the AI crowd has done enough to demonstrate how this could happen.

If you drop a naked human in amongst some wolves I don't think the human will do very well, despite its different goals and enormous intellectual advantage. Similarly, I don't see how a fledgling sentient AGI on OpenAI servers could take over enough infrastructure to pose a serious threat. I've not seen a convincing theory for how this would happen. Mail-order nanobots seem unrealistic (the quantum effects in protein chemistry are too hard to simulate); the AI talking itself out of its box seems far-fetched (the main evidence seems to be some chat games Yudkowsky played a few times?); and a gradual takeover via its voluntary uptake into more and more of our lives seems slow enough to stop.

I'm a doctor and I think there's a lot of underappreciated value in medicine including:

Clout: Society grants an inappropriate amount of respect to doctors, regardless of whether they're skilled or not, junior or senior. If you have a medical degree people respect you, listen to you, take you more seriously.

Hidden societal knowledge: Not many people get to see as broad a cross-section of society as you see studying medicine. You meet people at their very best and worst, incredibly knowledgeable people and people who never learnt to read, people who have lived incredible lives and people who have been through trauma you couldn't imagine. You gain an understanding of how broad the spectrum of human experience is. It's humbling and grounding.

Social skills: Medicine is a crash course on how not to be cripplingly socially awkward (not everyone passes with flying colours). You become better at relating to people, making them feel comfortable, talking about difficult topics, navigating conflict. These are all highly transferable skills.

Latent medical knowledge: There's a real freedom in being comfortable knowing when and when not to go to the hospital. Some people go to the Emergency Department every time they have a stomach ache, just in case. Learning medicine means you have a general idea about what problems are actually worth worrying about.

Job security: You can be pretty sure you'll always have a job no matter what (until GPT-6 arrives, but that applies to anything).

Opens doors: Studying med doesn't mean you need to be a doctor. You can use the insider knowledge of the medical field in med tech (not many doctors can code, useful combo), or to work in medical research (make some malaria vaccines) or global health.


I don't feel like my work as a doctor is directly very impactful (I mostly do hospital paperwork), but I gave 50% of my income in my first year and have given 10% since. In this way you can still have a lot of positive impact.

I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).

You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?

I don't think that follows. This seems like what the average hunter-gatherer would have thought when made to imagine our modern commercial airlines or microprocessor industries: how could you achieve something requiring so much research, so many resources and so much coordination without enslaving huge swathes of society and killing anyone that gets in the way? And wouldn't the knowledge to do these things cause terrible new dangers?

Luckily the hunter-gatherer is wrong: the path here has led up a slope of gradually increasing quality of life (though some disagree).
