All of peterhartree's Comments + Replies

On short stories, two notable examples: Narrative Ark by Richard Ngo and the regular "Tech Tales" section at the end of Jack Clark's Import AI newsletter.

I read the paper, then asked Claude 3 to summarise. I endorse the following summary as accurate:

The key argument in this paper is that Buddhism is not necessarily committed to pessimistic views about the value of unawakened human lives. Specifically, the author argues against two possible pessimistic Buddhist positions:

  1. The Pessimistic Assumption - the view that any mental state characterized by dukkha (dissatisfaction, unease, suffering) is on balance bad. 
  2. The Pessimistic Conclusion - the view that over the course of an unawakened life, dukkha will al
... (read more)
1
Calvin_Baker
21d
Wow, this is good - go Claude 3! 

Thanks for this. 

I take the central claim to be:

Even if an experience contains some element of dukkha, it can still be good overall if its positive features outweigh the negative ones. The mere presence of dukkha does not make an experience bad on balance.

I agree with this, and also agree that it's often overlooked.

The completion rate at BlueDot Impact averaged out at about 75%

How do you define completion?

1
Jamie B
1mo
What I had in mind was "shows up to all 8 discussion groups for the taught part of the course". I also didn't check this figure, so that was from memory. True, there are lots of ways to define it (e.g. finishing the readings, completing the project, etc.).

I think so. I'll put a note about this at the top of the post.

Perhaps it’s just the case that the process of moral reflection tends to cause convergence among minds from a range of starting points, via something like social logic plus shared evolutionary underpinnings.

Yes. And there are many cases where evolution has indeed converged on solutions to other problems[1].

  1. ^

    Some examples:

    (Copy-pasted from Claude 3 Opus. They pass my eyeball fact-check.)

    1. Wings: Birds, bats, and insects have all independently evolved wings for flight, despite having very different ancestry.
    2. Eyes: Complex camera-like eyes have evolved inde
... (read more)

My own attraction to a bucket approach (in the sense of (1) above) is motivated by a combination of:

(a) rejecting the demand for commensurability across buckets;

(b) making a bet on plausible deontic constraints, e.g. a duty to prioritise members of the community of which you are a part;

(c) avoiding impractical zig-zagging when best-guess assumptions change.

Insofar as I'm more into philosophical pragmatism than foundationalism, I'm more inclined to see a messy collection of reasons like these as philosophically adequate.

I think there are two things to justify here:

  1. The commitment to a GHW bucket, where that commitment involves "we want to allocate roughly X% of our resources to this".

  2. The particular interventions we fund within the GHW resource bucket.

I think the justification for (1) is going to look very different to the justification for (2).

I'm not sure which one you're addressing; it sounds more like (2) than (1).

2
Richard Y Chappell
1mo
I'm more interested in (1), but how we justify that could have implications for (2).
2
peterhartree
1mo
My own attraction to a bucket approach (in the sense of (1) above) is motivated by a combination of: (a) rejecting the demand for commensurability across buckets; (b) making a bet on plausible deontic constraints, e.g. a duty to prioritise members of the community of which you are a part; (c) avoiding impractical zig-zagging when best-guess assumptions change. Insofar as I'm more into philosophical pragmatism than foundationalism, I'm more inclined to see a messy collection of reasons like these as philosophically adequate.

Would you be up for spelling out the problem of "lacks adequate philosophical foundations"?

What criteria need to be satisfied for the foundations to be adequate, to your mind?

Do they e.g. include consequentialism and a strong form of impartiality?

3
peterhartree
1mo
I think there are two things to justify here: 1. The commitment to a GHW bucket, where that commitment involves "we want to allocate roughly X% of our resources to this". 2. The particular interventions we fund within the GHW resource bucket. I think the justification for (1) is going to look very different to the justification for (2). I'm not sure which one you're addressing; it sounds more like (2) than (1).

I hesitate to post things like this, because “short, practical advice” posts aren't something I often see on the Forum.

I'm not sure if this is the kind of thing that's worth encouraging as a top-level post.

In general I would like to read more posts like this from EA Forum users, but perhaps not on the front page.

3
peterhartree
7mo
Some examples of the kinds of thing I might share, were there an obvious place to do so:

  * How to start a blog in 5 seconds for $0
  * Unfortunately, PDF copies of many academic books & scientific papers can be easily found on the Anna's Archive website
  * Use Airalo to get mobile data in any country in less than 5 minutes

Thanks for this. I'd be keen to see a longer list of the interesting for-profits in this space.

Biobot Analytics (wastewater monitoring) is the only for-profit on the 80,000 Hours job board list.

3
Alex D
7mo
This is a pretty good overview: https://www.decodingbio.com/p/decoding-biosecurity-and-biodefense. I know the space reasonably well and am happy to connect and discuss with anyone interested!

As an employer, I find it particularly helpful to look at small solo projects. With a solo project it's easy to tell how you contributed: you did everything.

Hi Lia. I think the RSS links above are correct.

To confirm, the RSS links are as follows:

Does this help?

Back in March I asked GPT-4:

I recently graduated from medical school but I'm worried that being a doctor in the rich world is not very impactful, because there are already lots of doctors and I can't help that many people every year by providing primary care. Would 80,000 Hours say that this is a legitimate concern? What might they suggest for someone in my position?

The answer (one shot):

Yes, 80,000 Hours might consider your concern as legitimate. While being a doctor is a valuable and impactful profession, it's true that in the rich world, there are alrea

... (read more)
4
Jamie_Harris
9mo
Yeah, seems fair; asking LLMs to model specific orgs or people might achieve a similar effect without needing the contextual info, if there's much info about those orgs or people in the training data and you don't need it to represent specific ideas or info highlighted in a course's core materials.
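To make the above concrete, here is a minimal sketch of this kind of one-shot query, assuming the current OpenAI Python SDK. The system prompt asking the model to answer in the style of 80,000 Hours is purely illustrative, not anyone's actual setup.

```python
# A hedged sketch of a one-shot "model this org" query (illustrative, not anyone's actual setup).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "I recently graduated from medical school but I'm worried that being a doctor "
    "in the rich world is not very impactful, because there are already lots of doctors "
    "and I can't help that many people every year by providing primary care. "
    "Would 80,000 Hours say that this is a legitimate concern? "
    "What might they suggest for someone in my position?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Ask the model to answer in a specific org's style, per the suggestion above.
        {"role": "system", "content": "Answer as a careers advisor drawing on 80,000 Hours' published advice."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```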

Congratulations on making it to 5 years, and thank you for the work you've done so far.

Could you give us a sense of the inputs that led to these outputs? In particular, I'd be interested to know:

  1. Total expenditure.
  2. Total years of staff labor (full-time equivalent).
8
abrahamrowe
9mo
Thanks for the question! Across its lifetime, RP has spent around $13,976,000. In terms of FTE-years, RP staff have completed around 95 to 100, and we've funded external collaborators for another 55 to 60, so I'd estimate that in total the input was something like 150 to 160 FTE-years of work.

The application form is short and includes the following questions:

What are the most important technical projects the UK government needs to do within the next few months to advance AI safety?

What do you want to be doing to drive those projects forward?

4
Stefan_Schubert
10mo
The transcript can also be found at this link.

Thanks Michael! This was a strange oversight on our part—now fixed.

Great question. If authors include image captions we read them, but I think we're skipping the image alt texts at the moment. We actually wrote the code to read alt texts but I think we forgot to ship it in this first release. This was a mistake on our part—we'll fix it this week or next.
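To illustrate the distinction, here's a minimal sketch (not our actual pipeline) of pulling figure captions, and optionally alt texts, out of a post's HTML, assuming BeautifulSoup:

```python
# A hedged sketch: collect image-related text for narration from post HTML.
# Captions (<figcaption>) are always read; alt texts only if the flag is enabled.
from bs4 import BeautifulSoup

def image_text_for_narration(html: str, include_alt_texts: bool = False) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    snippets = [fc.get_text(strip=True) for fc in soup.find_all("figcaption")]
    if include_alt_texts:
        snippets += [img["alt"].strip() for img in soup.find_all("img") if img.get("alt")]
    return [s for s in snippets if s]
```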

Thanks. It's a nice idea. At some point we might enable authors (or listeners!) to select their favourite voices. This would increase our costs quite a lot (see my reply to Nathan), so I doubt we'll do this before the end of 2023, unless we find evidence of strong demand.

Thanks! We looked into randomising between a couple voices a while ago. To my surprise, we found that all the voice models on our text-to-speech service (Microsoft Azure) perform somewhat differently. This means our quality assurance costs would go up quite a lot if we start using several voices.

I'd also guess that once listeners become familiar with a particular voice, their comprehension improves and they're able to listen faster. I have some anecdotal evidence of this, but I'm pretty unsure how big of a deal it is.
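For what it's worth, a minimal sketch of what per-article voice randomisation might look like, assuming the azure-cognitiveservices-speech SDK. The voice shortlist is illustrative, not our actual configuration; the point is that each voice added is another model whose quirks need separate quality assurance.

```python
# A hedged sketch: narrate an article with one voice drawn at random from a shortlist.
import random
import azure.cognitiveservices.speech as speechsdk

VOICES = ["en-US-JennyNeural", "en-US-GuyNeural", "en-GB-RyanNeural"]  # illustrative shortlist

def narrate(text: str, key: str, region: str, out_path: str) -> None:
    config = speechsdk.SpeechConfig(subscription=key, region=region)
    config.speech_synthesis_voice_name = random.choice(VOICES)  # one voice per article
    audio = speechsdk.audio.AudioOutputConfig(filename=out_path)
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=config, audio_config=audio)
    result = synthesizer.speak_text_async(text).get()
    if result.reason != speechsdk.ResultReason.SynthesizingAudioCompleted:
        raise RuntimeError(f"Synthesis failed: {result.reason}")
```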

Thanks! I would have liked to do this, but in our quick tests the UK female voice models provided by our text-to-speech service (Microsoft Azure) were quite buggy. We frequently experiment with the latest voice models on Azure and other platforms, so I expect we'll find a good UK female option in the coming months.

Would you prefer a female narrator?

Sample (Sara, US).

Agree vote if "yes", disagree vote if "no".

8
BrownHairedEevee
11mo
Let each author decide?
2
Angelina Li
11mo
Maybe include female / UK as another reference point, so we're not comparing across two dimensions at once?
4
Nathan Young
11mo
Randomise? Or different narrators for different topics?

Would you prefer a US English narrator?

Sample (Eric, US).

Agree vote if "yes", disagree vote if "no".

For anyone who is considering the course: TYPE III AUDIO is making audio narrations for the Alignment and Governance courses. The series is due to launch later this month, but more than 50 episodes are already available.

Moderators should delete this post IMO.

(I made these webpages a couple days after the FTX collapse. Buying domains is cheaper than therapy…)

Thoughts on “maximisation is perilous”:

(1) We could put more emphasis on the idea of “two-thirds utilitarianism”.

(2) I expect we could come up with a better name for two-thirds utilitarianism and a snappier way of describing the key thought. Deep pragmatism might work.

5
peterhartree
1y
(I made these webpages a couple days after the FTX collapse. Buying domains is cheaper than therapy…)

Thank you (again) for this.

I think this message should be emphasized much more in many EA and LT contexts, e.g. introductory materials on effectivealtruism.org and 80000hours.org.

As your paper points out: longtermist axiology probably changes the ranking between x-risk and catastrophic risk interventions in some cases. But there's lots of convergence, and in practice your ranked list of interventions won't change much (even if the diff between them does... after you adjust for cluelessness, Pascal's mugging, etc.).

Some worry that if you're a fan of longterm... (read more)

The message I take away is that there's potentially a big difference between these two questions:

  1. Which government policies should one advocate for?
  2. For an impartial individual, what are the best causes and interventions to work on?

Most of effective altruism, including 80,000 Hours, has focused on the second question.

This paper makes a good case for an answer to the first, but doesn't tell us much about the second.

If you only value the lives of the present generation, it's not at all obvious that marginal investment in reducing catastrophic risk beats funding Giv... (read more)

Thanks for the post.

I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).

These amounts are small.

Let's say the value of your time is $500 / hour.

I'm not sure it was worth taking the time to think this through so carefully.

To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much more important than offsets.

Agree.

By publicly making a commitment to offset a particular harm, you're establishing a basis

... (read more)

Let's say the value of your time is $500 / hour.

I'm not sure it was worth taking the time to think this through so carefully.

But:

  1. J is thinking this through and posting it to give insight to others, not just for his own case.

  2. If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question.

+1 to Geoffrey here.

I still think of EA as a youth movement, though this label is gradually fading as the "founding cohort" matures.

It's a trope that the youth are sometimes too quick to dismiss the wiser counsel of their elders.

I've witnessed many cases where, to my mind, people were (admirably) looking for good explicit arguments that they could easily understand, but (regrettably) forgetting that things like inferential distance sometimes make it hard to understand the views of people who are wiser or more expert than they are.

I'm sure I've made this mistake... (read more)

Peter -- nice point about inferential distance. This can lead to misunderstandings from both directions:

Youth can hear elders make an argument that sounds overly opaque, technical, and unfamiliar to them, given the big inferential distance involved (although it would sound utterly clear & persuasive to the elder's professional colleagues), and dismiss it as incoherent.

Elders can see youth ignoring their arguments (which seem utterly clear & persuasive to them), get exasperated that they've invested decades learning about something only to be dismis... (read more)

The CLTR Future Proof report has influenced UK government policy at the highest levels.

E.g. the UK National AI Strategy ends with a section on AGI risk and says that the Office for AI should pay attention to this.

If you think the UN matters, then this seems good:

On September 10th 2021, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks.

https://forum.effectivealtruism.org/posts/Fwu2SLKeM5h5v95ww/... (read more)

What matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief.

This is a good expression of the crux.

For many people—including many philosophers—it seems odd to think that questions of justification have nothing to do with us and our origins.

This is why the question of "what are we doing, when we do philosophy?" is so important.

The pragmatist-naturalist perspective says something like:

We are clever beasts on an unremarkable planet orbiting an unremarkabl

... (read more)

I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft more often in areas I don't know so much about.

This is what Cowen was doing with his original remark.

2
Linch
1y
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of "existential risk" (which I think is a semantics game, but others disagree)?

From Moral Tribes:

Deep pragmatism seeks common ground. Not where we think it ought to be, but where it actually is.

With a little perspective, we can reflect and reach agreements with our heads, despite the irreconcilable differences in our hearts.

We all want to be happy. None of us wants to suffer. And our concern for happiness and suffering lies behind nearly everything else that we value, though to see this requires some reflection.

We can take this kernel of personal value and turn it into a moral value by adding the essence of the Golden Rule: your happ

... (read more)

Joshua Greene's book, Moral Tribes, presents a compelling EDA (evolutionary debunking argument). He doesn't bother directly arguing against the philosophical objections to EDAs.

In general I think Moral Tribes is a must-read for those who are interested in evolutionary psychology, moral philosophy and especially utilitarianism.

Among other things, Greene argues that utilitarianism needs a rebrand. His suggestion: deep pragmatism.

3
peterhartree
1y
From Moral Tribes: Deep pragmatism is utilitarianism in the spirit of Jeremy Bentham. Bentham is often misread as a narrow-minded moral realist. But he is best read as a political pragmatist: not seeking a metaphysical principle, but rather a practical principle—something most of us can agree on—upon which to build a stable polity.

EDAs are a problem for non-naturalistic moral realists in the British tradition (e.g. Sidgwick, Parfit). Some people think they're a problem for naturalistic moral realists too.

I've read ~10 philosophy papers that try to defend non-naturalistic moral realism against EDAs.

More than half of these defences have the following structure:

(P1) Metaethical claim about moral truth.

(P2) EDAs are incompatible with (P1).

(P3) Conclusion: EDAs are false.

A typical metaethical claim for (P1):

(P1*) The normative and the descriptive are fundamentally different (bangs

... (read more)

Peter - yep, that's also my impression so far, that philosophers seem compelled to reject evo debunking arguments because EDAs would render much of moral philosophy's game (trying to systematize & reconcile moral intuitions) both incoherent and irrelevant. So they seem to be scrambling for ad hoc reasons to reject EDAs by any means necessary... and end up promoting spurious arguments.

But, I could be wrong, and there might be some more compelling, principled, and less reactive critiques of EDAs out there.

1
JBentham
1y
On the contrary, non-naturalistic moral realists such as Derek Parfit and Peter Singer note that evolutionary debunking arguments tend to strengthen (some forms of) non-natural moral realism. On an evolutionary account, external reasons for belief and action would seem to be redundant (pure impulse would suffice), yet Parfit and Singer argue for their existence.
2
Peter
1y
Nice, yeah they did mention these. Good to have the links. 

Thanks for sharing.

Tyler Cowen is another figure who takes utilitarianism and longtermism seriously, but not too seriously. See:

Good idea. Are you planning to make this happen? What steps will you take next?

My quick thought is:

  1. Email Tyler. List a few people you think are worth reaching out to.

  2. If Tyler is keen, help him do the work.

It'd probably be worth releasing it in English as well, for an Anglosphere audience.

Tyler released Stubborn Attachments on Medium a year or two before it was published by Stripe Press. He could do the same for this book, with some big caveats at the start, along the lines of those he made in the podcast.

If you don't plan to do something like (1) and ... (read more)

1
Matt Brooks
1y
I don't really have the time, skills, or contacts to make this happen; if you want to pick up the torch I would gladly pass it to you. Tyler seems keen, although worried about censors: https://twitter.com/tylercowen/status/1614402492518785025 It seems from the podcast that he wanted to release the book only in Chinese (maybe especially at this point, given the decline in the West's willingness to work with China), but I'm not sure; maybe the book would help Westerners understand China's culture as much as it would help Chinese readers understand the West. A lot of EAs concerned about great-power war would probably buy the book to get better insight. If I had to guess, I don't think he needs help finding a translator, turning the book into an audiobook, or any other particular singular task; I think it's bigger than that. If we could get someone with contacts and backing, like OpenPhil, to say to Tyler "We will pay all costs to publish the book and assign a project manager to do all of the annoying bits for you", it seems harder for him to turn down. But I'm just guessing. Happy to chat more, if you'd like.

I see "clearly expressing anger" and "posting when angry" as quite different things.

I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.

Let's distinguish different stages of anger:

The "hot" kind—when one is not really thinking straight, prone to exaggeration and uncharitable interpretations, etc.

The "cool" kind—where one can think roughly as clearly about the topic as any other.

We could think of "hot" and "cool" anger as a spectrum.

Most people experience hot anger from time to time. But I think EA figures—esp... (read more)

2
RobBensinger
1y
I'm not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they're feeling some version of "hot anger", as opposed to literally never doing this. The way you defined "cool vs. hot" here is that it's about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn't post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic status disclaimer or NVC-style language.) But you also call these "different stages of anger", which suggests a temporal interpretation: hot anger comes first, followed by cool. And the use of the words "hot" and "cool", to my ear, also suggests something about the character of the feeling itself. I feel comfortable suggesting that EAs self-censor under the "thinking straight?" interpretation. But if you're feeling really intense emotion and it's very close in time to the triggering event, but you think you're nonetheless thinking straight — or you think you can add appropriate caveats and context so people can correct for the ways in which you're not thinking straight — then I'm a lot more wary about adding a strong "don't say what's on your mind" norm here.

Thank you for this.

Strong contender for "top 10 EA Forum posts of 2023, according to Peter Hartree".

A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms.

[...]

Why more democratic decision making would be better has gone largely unargued. To the extent it has been, "conflicts of interest" and "insularity" seem like marginal problems compared to basically having a deep understanding of the most important questions for the future/global health and wellbeing.

Agree... (read more)

This post is much too long and we're all going to have trouble following the comments.

It would be much better to split this up and post it as a series. Maybe do that, and replace this post with links to the series?

Yup, we're going to split it into a sequence (I think it should be mentioned in the preamble?)

Context: I've worked in various roles at 80,000 Hours since 2014, and continue to support the team in a fairly minimal advisory role.

Views my own.

I agree that the heavy use of a poorly defined concept of "value alignment" has some major costs.

I've been moderately on the receiving end of this one. I think it's due to some combination of:

  1. I take Nietzsche seriously (as Derek Parfit did).
  2. I have a strong intellectual immune system. This means it took me several years to get enthusiastically on board with utilitarianism, longtermism and AI safety as focus areas.
... (read more)