peterhartree

3256 karma · Joined Dec 2014 · Working (6-15 years) · Reykjavik, Iceland
twitter.com/peterhartree

Bio

Now: TYPE III AUDIO; Independent study.

Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc. Read my CV.

Also: Inbox When Ready; Radio Bostrom; The Valmy; Comment Helper for Google Docs.

Comments: 240
Topic contributions: 4

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".

Have you visited the 80,000 Hours website recently?

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.

A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).

(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)

As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.

I’m glad you shared the J.S. Mill quote.

…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).

To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.

My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.

(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)

The completion rate at BlueDot Impact averaged out at about 75%

How do you define completion?

I think so. I'll put a note about this at the top of the post.

Perhaps it’s just the case that the process of moral reflection tends to cause convergence among minds from a range of starting points, via something like social logic plus shared evolutionary underpinnings.

Yes. And there are many cases where evolution has indeed converged on solutions to other problems[1].

  1. ^

    Some examples (copy-pasted from Claude 3 Opus; they pass my eyeball fact-check):

    1. Wings: Birds, bats, and insects have all independently evolved wings for flight, despite having very different ancestry.
    2. Eyes: Complex camera-like eyes have evolved independently in vertebrates (like humans) and cephalopods (like octopuses and squids).
    3. Echolocation: Both bats and toothed whales (like dolphins) have evolved the ability to use echolocation for navigation and hunting, despite being unrelated mammals.
    5. Spines: Both porcupines (mammals) and hedgehogs (also mammals, but not closely related to porcupines) have evolved sharp defensive spines.
    5. Fins: Sharks (cartilaginous fish) and dolphins (mammals) have independently evolved similar fin shapes and placement for efficient swimming.
    6. Succulence: Cacti (native to the Americas) and euphorbs (native to Africa) have independently evolved similar water-storing, fleshy stems to adapt to arid environments.
    7. Flippers: Penguins (birds), seals, and sea lions (mammals) have all evolved flipper-like limbs for swimming, despite having different ancestries.
    8. Ant-eating adaptations: Anteaters (mammals), pangolins (mammals), and numbats (marsupials) have independently evolved long snouts, sticky tongues, and strong claws for eating ants and termites.

My own attraction to a bucket approach (in the sense of (1) above) is motivated by a combination of:

(a) rejecting the demand for commensurability across buckets;

(b) making a bet on plausible deontic constraints, e.g. a duty to prioritise members of the community of which you are a part;

(c) avoiding impractical zig-zagging when best-guess assumptions change.

Insofar as I'm more into philosophical pragmatism than foundationalism, I'm more inclined to see a messy collection of reasons like these as philosophically adequate.

I think there are two things to justify here:

  1. The commitment to a GHW bucket, where that commitment involves "we want to allocate roughly X% of our resources to this".

  2. The particular interventions we fund within the GHW resource bucket.

I think the justification for (1) is going to look very different to the justification for (2).

I'm not sure which one you're addressing; it sounds more like (2) than (1).

Would you be up for spelling out the problem of "lacks adequate philosophical foundations"?

What criteria need to be satisfied for the foundations to be adequate, to your mind?

Do they e.g. include consequentialism and a strong form of impartiality?

I hesitate to post things like this, because “short, practical advice” posts aren't something I often see on the Forum.

I'm not sure if this is the kind of thing that's worth encouraging as a top-level post.

In general I would like to read more posts like this from EA Forum users, but perhaps not as part of the front page.

Thanks for this. I'd be keen to see a longer list of the interesting for-profits in this space.

Biobot Analytics (wastewater monitoring) is the only for-profit on the 80,000 Hours job board list.
