Henry Howard🔸

1494 karma · Joined · Melbourne VIC, Australia
henryach.com

Bio

Strong advocate of just having a normal job and giving to effective charities.

Doctor in Australia giving 10% forever

Comments
244

(NOTE: Coming at this from a place of: a. ignorance of what the AI Safety community actually does and b. not wanting to take the ego hit of admitting that I have been wrong about my long-held skepticism of AI Safety)

I think it was and is fair to be skeptical of the shift to AI Safety in EA on the basis that it's not that tractable, and that there's not clear evidence that the AI Safety movement has had a positive effect on the trajectory of AI.

"But it brought the ideas into the mainstream"

I think the AI Safety community will be tempted to think they've normalised in the zeitgeist ideas about superintelligent AIs and the philosophical questions and risks that arise from them, but 2001: A Space Odyssey came out in 1968, Terminator in 1984, The Matrix in 1999, etc. The ideas of superintelligent AIs and the existential risks they pose are diffused through modern culture, and it's possible that The Pope and The UN would have made the same statements about them given the recent progress of LLMs regardless of the AI Safety movement.

Are there many ideas in If Anyone Builds It, Everyone Dies that weren't broadly covered in Terminator/The Matrix/2001: A Space Odyssey/Dune etc.?

"But the work they've done has set us on the right path"

I haven't seen strong evidence for the direct work of the AI Safety movement reducing existential risks from AI:

  • Amanda Askell's involvement with shaping the character of Claude sounds good. Has it made much difference or is it just putting a nice and brittle mask on the beast?
  • AI Safety organisations like MIRI and Redwood Research have been operating for 25 and 5 years respectively. As an outsider I couldn't point to any particular breakthrough they've made in AI alignment. Redwood seems to do some kinda interesting work on measuring rogue behaviour and creating checks. I dunno. Seems like any organisation trying to make a reliable AI product would be heavily incentivised to do this stuff regardless.
  • In Australia, Good Ancestors has probably contributed in some way to the government's decision to potentially open an AI Safety Institute here. The statements the government puts out about it seem to mostly emphasise deepfake porn and the threat to people's jobs rather than existential risks, which makes me think that this decision might have just happened anyway regardless of the AI Safety movement.
  • Interpretability research seems far from being able to understand more than a few components at a time. And also the companies making AI would likely have been incentivised to do this work regardless of the AI Safety movement because customers don't want a black box.


From the outside it seems there's a good argument that the AI situation would have evolved pretty similarly regardless of EA/AI Safety input.

From that position, it's easy to believe that if EA had just stuck to Earning To Give and malaria nets and decaging chickens then the impact would have been greater, both directly and because the movement might not have lost as much momentum when AI Safety alienated people.

I agree that the depth of the evidence conversations doesn't lend itself to amateur discussion on the forum and I also feel like there's not much I have to add to the GHD discussions here because of that.

Don't think it's fair to say it's not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.

“direct altruistic focus strategically so as to be of positive utility”

Vague and evasive. Say what you mean. If you want to keep poor people poor until some new technology comes out, you should say that. If you don't think further development will ever be justified, you should say that (so that your contention can be discarded as absurd and impractical).

“From the sumatriptan RCT: 3% were pain-free at 10 minutes after placebo.”

This is an irrational comparison. You’re comparing your best case scenario anecdote to the results of an RCT.

It’s possible that one of those 3% of people would have an anecdote for sumatriptan as convincing as yours: causing rapid resolution of their headache. That anecdote would not be representative.

I’m not saying you’re wrong about psychedelics and cluster headache. I desperately hope you’re right and there is an easy fix. But anecdote leads people astray constantly and we have to treat it with a high degree of suspicion.

“The effect size is incredible and the percentage of people for whom it's effective for is very large” - What’s the source for this?

Impressive anecdotes, but we see a lot of those in medicine. Trial or it didn’t happen.

Because development has been the human project for the last 10,000 years, and if we accept that it has been and continues to be a mistake then the conclusion is... what? Anarcho-primitivism/regressing to pre-industrial hunter-gatherer life/Return to Monke. That doesn't seem very practical.

Fair, I really mean pessimism rather than nihilism. On what basis can you reject philosophical pessimism - a self-consistent and valid belief that is seemingly impossible to prove/disprove - other than that it is just not pragmatic or constructive at all?

None of that suggested work seems very clarifying.

The welfare ranges are extremely broad for the animals they do cover, and that's with questionable assumptions. I don't see how extending these to microbes would clarify anything.

Doing "more research" on the day-to-day experience of nematodes and how they respond to noxious stimuli or calculating their neural energy consumption as a proxy for their ability to suffer also doesn't seem clarifying. Imagine you knew all this information about nematodes. Still the fundamental question will remain how their "suffering" or "joy" compares to ours and how morally important it is. A lot of animal ethics is driven by our ability to relate to animals ("I can relate somewhat to a chicken and I wouldn't want to be a chicken in a cage") but this falls apart by the time we get to nematodes, so you have to rely solely on your numbers, which will be extremely uncertain.

I remain very puzzled how you ever see us getting low enough error bars on the joy/suffering of microscopic worms that we could make decisions based on it.

How would you get the "Further human economic development" "necessary to build the knowledge and resources" to build a better world without supporting the development of developing countries?

Are you talking about a top-heavy approach where we keep poor countries poor until fake/cultured meat is cheap enough to supplant farmed animals?
