I read the paper, then asked Claude 3 to summarise. I endorse the following summary as accurate:
...The key argument in this paper is that Buddhism is not necessarily committed to pessimistic views about the value of unawakened human lives. Specifically, the author argues against two possible pessimistic Buddhist positions:
- The Pessimistic Assumption - the view that any mental state characterized by dukkha (dissatisfaction, unease, suffering) is on balance bad.
- The Pessimistic Conclusion - the view that over the course of an unawakened life, dukkha will al...
Thanks for this.
I take the central claim to be:
Even if an experience contains some element of dukkha, it can still be good overall if its positive features outweigh the negative ones. The mere presence of dukkha does not make an experience bad on balance.
I agree with this, and also agree that it's often overlooked.
Perhaps it’s just the case that the process of moral reflection tends to cause convergence among minds from a range of starting points, via something like social logic plus shared evolutionary underpinnings.
Yes. And there are many cases where evolution has indeed converged on solutions to other problems[1].
Some examples:
(Copy-pasted from Claude 3 Opus. They pass my eyeball fact-check.)
My own attraction to a bucket approach (in the sense of (1) above) is motivated by a combination of:
(a) rejecting the demand for commensurability across buckets.
(b) making a bet on plausible deontic constraints, e.g. a duty to prioritise members of the community of which you are a part.
(c) avoiding impractical zig-zagging when best-guess assumptions change.
Insofar as I'm more into philosophical pragmatism than foundationalism, I'm more inclined to see a messy collection of reasons like these as philosophically adequate.
I think there are two things to justify here:
(1) The commitment to a GHW bucket, where that commitment involves "we want to allocate roughly X% of our resources to this".
(2) The particular interventions we fund within the GHW resource bucket.
I think the justification for (1) is going to look very different to the justification for (2).
I'm not sure which one you're addressing; it sounds more like (2) than (1).
Would you be up for spelling out the problem of "lacks adequate philosophical foundations"?
What criteria need to be satisfied for the foundations to be adequate, to your mind?
Do they e.g. include consequentialism and a strong form of impartiality?
Some examples of the kinds of thing I might share, were there an obvious place to do so:
I hesitate to post things like this, because “short, practical advice” posts aren't something I often see on the Forum.
I'm not sure if this is the kind of thing that's worth encouraging as a top-level post.
In general I would like to read more posts like this from EA Forum users, but perhaps not as part of the front page.
Thanks for this. I'd be keen to see a longer list of the interesting for-profits in this space.
Biobot Analytics (wastewater monitoring) are the only for-profit on the 80,000 Hours job board list.
As an employer, I find it particularly helpful to look at small solo projects. With a solo project it's easy to tell how you contributed: you did everything.
Hi Lia. I think the RSS links above are correct.
To confirm, the RSS links are as follows:
Does this help?
Back in March I asked GPT-4:
I recently graduated from medical school but I'm worried that being a doctor in the rich world is not very impactful, because there are already lots of doctors and I can't help that many people every year by providing primary care. Would 80,000 Hours say that this is a legitimate concern? What might they suggest for someone in my position?
The answer (one shot):
...Yes, 80,000 Hours might consider your concern as legitimate. While being a doctor is a valuable and impactful profession, it's true that in the rich world, there are already...
Congratulations on making it to 5 years, and thank you for the work you've done so far.
Could you give us a sense of the inputs that led to these outputs? In particular, I'd be interested to know:
The application form is short and includes the following questions:
What are the most important technical projects the UK government needs to do within the next few months to advance AI safety?
What do you want to be doing to drive those projects forward?
Ideally this link post would point to the episode page on Dwarkesh's website, which includes a transcript.
Great question. If authors include image captions we read them, but I think we're skipping the image alt texts at the moment. We actually wrote the code to read alt texts but I think we forgot to ship it in this first release. This was a mistake on our part—we'll fix it this week or next.
Thanks. It's a nice idea. At some point we might enable authors (or listeners!) to select their favourite voices. This would increase our costs quite a lot (see my reply to Nathan) so I doubt we'll do this before end 2023, unless we find evidence of strong demand.
Thanks! We looked into randomising between a couple voices a while ago. To my surprise, we found that all the voice models on our text-to-speech service (Microsoft Azure) perform somewhat differently. This means our quality assurance costs would go up quite a lot if we start using several voices.
I'd also guess that once listeners become familiar with a particular voice, their comprehension improves and they're able to listen faster. I have some anecdotal evidence of this, but I'm pretty unsure how big of a deal it is.
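For anyone curious what voice randomisation might look like in practice, here's a minimal sketch using Azure's Python Speech SDK. The voice names, key and region are placeholder assumptions for illustration, not the configuration TYPE III AUDIO actually uses:

```python
import random
import azure.cognitiveservices.speech as speechsdk

# Hypothetical shortlist -- a real deployment would check these names against
# Azure's current voice catalogue and QA each one separately.
VOICES = ["en-US-JennyNeural", "en-US-GuyNeural"]

def narrate_episode(text: str, output_path: str) -> None:
    """Synthesise one episode to a WAV file, picking a narrator at random."""
    speech_config = speechsdk.SpeechConfig(
        subscription="YOUR_SPEECH_KEY",  # placeholder credentials
        region="YOUR_REGION",
    )
    # Per-episode voice selection: this one line is where the randomisation happens.
    speech_config.speech_synthesis_voice_name = random.choice(VOICES)
    audio_config = speechsdk.audio.AudioOutputConfig(filename=output_path)
    synthesizer = speechsdk.SpeechSynthesizer(
        speech_config=speech_config, audio_config=audio_config
    )
    result = synthesizer.speak_text_async(text).get()
    if result.reason != speechsdk.ResultReason.SynthesizingAudioCompleted:
        raise RuntimeError(f"Synthesis failed: {result.reason}")
```

The quality-assurance point above is about the VOICES list: each extra entry is another model whose pronunciation quirks need checking before release.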
Thanks! I would have liked to do this, but in our quick tests the UK female voice models provided by our text-to-speech service (Microsoft Azure) were quite buggy. We frequently experiment with the latest voice models on Azure and other platforms, so I expect we'll find a good UK female option in the coming months.
Would you prefer a US English narrator?
Agree vote if "yes", disagree vote if "no".
For anyone who is considering the course: TYPE III AUDIO is making audio narrations for the Alignment and Governance courses. The series is due to launch later this month, but some 50+ episodes are already available.
(I made these webpages a couple days after the FTX collapse. Buying domains is cheaper than therapy…)
Thoughts on “maximisation is perilous”:
(1) We could put more emphasis on the idea of “two-thirds utilitarianism”.
(2) I expect we could come up with a better name for two-thirds utilitarianism and a snappier way of describing the key thought. Deep pragmatism might work.
Thank you (again) for this.
I think this message should be emphasised much more in many EA and LT contexts, e.g. introductory materials on effectivealtruism.org and 80000hours.org.
As your paper points out: longtermist axiology probably changes the ranking between x-risk and catastrophic risk interventions in some cases. But there's lots of convergence, and in practice your ranked list of interventions won't change much (even if the diff between them does... after you adjust for cluelessness, Pascal's mugging, etc).
Some worry that if you're a fan of longterm...
The message I take is that there's potentially a big difference between these two questions:
Most of effective altruism, including 80,000 Hours, has focused on the second question.
This paper makes a good case for an answer to the first, but doesn't tell us much about the second.
If you only value the lives of the present generation, it's not-at-all obvious that marginal investment in reducing catastrophic risk beats funding Giv...
Thanks for the post.
I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).
These amounts are small.
Let's say the value of your time is $500 / hour.
I'm not sure it was worth taking the time to think this through so carefully.
To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much more important than offsets.
Agree.
...By publicly making a commitment to offset a particular harm, you're establishing a basis...
Let's say the value of your time is $500 / hour.
I'm not sure it was worth taking the time to think this through so carefully.
But:
J is thinking this through and posting it to give insight to others, not just for his own case.
If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question
+1 to Geoffrey here.
I still think of EA as a youth movement, though this label is gradually fading as the "founding cohort" matures.
It's a trope that the youth are sometimes too quick to dismiss the wiser counsel of their elders.
I've witnessed many cases where, to my mind, people were (admirably) looking for good explicit arguments that they could easily understand, but (regrettably) forgetting that things like inferential distance sometimes make it hard to understand the views of people who are wiser or more expert than oneself.
I'm sure I've made this mistake...
Peter -- nice point about inferential distance. This can lead to misunderstandings from both directions:
Youth can hear elders make an argument that sounds overly opaque, technical, and unfamiliar to them, given the big inferential distance involved (although it would sound utterly clear & persuasive to the elder's professional colleagues), and dismiss it as incoherent.
Elders can see youth ignoring their arguments (which seem utterly clear & persuasive to them), get exasperated that they've invested decades learning about something only to be dismis...
The CLTR Future Proof report has influenced UK government policy at the highest levels.
E.g. the UK "National AI Strategy" ends with a section on AGI risk, and says that the Office for AI should pay attention to this.
If you think the UN matters, then this seems good:
On September 10th 2021, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks.
https://forum.effectivealtruism.org/posts/Fwu2SLKeM5h5v95ww/...
What matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief.
This is a good expression of the crux.
For many people—including many philosophers—it seems odd to think that questions of justification have nothing to do with us and our origins.
This is why the question of "what are we doing, when we do philosophy?" is so important.
The pragmatist-naturalist perspective says something like:
...We are clever beasts on an unremarkable planet orbiting an unremarkable...
I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft more often in areas I don't know so much about.
This is what Cowen was doing with his original remark.
From Moral Tribes:
...Deep pragmatism seeks common ground. Not where we think it ought to be, but where it actually is.
With a little perspective, we can reflect and reach agreements with our heads, despite the irreconcilable differences in our hearts.
We all want to be happy. None of us wants to suffer. And our concern for happiness and suffering lies behind nearly everything else that we value, though to see this requires some reflection.
We can take this kernel of personal value and turn it into a moral value by adding the essence of the Golden Rule: your happiness...
Joshua Greene's book, Moral Tribes, presents a compelling EDA (evolutionary debunking argument). He doesn't bother directly arguing against the philosophical objections to EDAs.
In general I think Moral Tribes is a must-read for those who are interested in evolutionary psychology, moral philosophy and especially utilitarianism.
Among other things, Greene argues that utilitarianism needs a rebrand. His suggestion: deep pragmatism.
EDAs are a problem for non-naturalistic moral realists in the British tradition (e.g. Sidgwick, Parfit). Some people think they're a problem for naturalistic moral realists too.
I've read ~10 philosophy papers that try to defend non-naturalistic moral realism against EDAs.
More than half of these defences have the following structure:
(P1) Metaethical claim about moral truth.
(P2) EDAs are incompatible with (P1).
(C) EDAs are false.
A typical metaethical claim for (P1):
...(P1*) The normative and the descriptive are fundamentally different (bangs...
Peter - yep, that's also my impression so far, that philosophers seem compelled to reject evo debunking arguments because EDAs would render much of moral philosophy's game (trying to systematize & reconcile moral intuitions) both incoherent and irrelevant. So they seem to be scrambling for ad hoc reasons to reject EDAs by any means necessary... and end up promoting spurious arguments.
But, I could be wrong, and there might be some more compelling, principled, and less reactive critiques of EDAs out there.
The conversation touches on a couple of blog posts by Holden, especially:
Thanks for sharing.
Tyler Cowen is another figure who takes utilitarianism and longtermism seriously, but not too seriously. See:
Good idea. Are you planning to make this happen? What steps will you take next?
My quick thought is:
Email Tyler. List a few people you think are worth reaching out to.
If Tyler is keen, help him do the work.
It'd probably be worth releasing it in English as well, for an Anglosphere audience.
Tyler released Stubborn Attachments on Medium a year or two before it was published by Stripe Press. He could do the same for this book, with some big caveats at the start, along the lines of those he made in the podcast.
If you don't plan to do something like (1) and ...
I see "clearly expressing anger" and "posting when angry" as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let's distinguish different stages of anger:
The "hot" kind—when one is not really thinking straight, prone to exaggeration and uncharitable interpretations, etc.
The "cool" kind—where one can think roughly as clearly about the topic as any other.
We could think of "hot" and "cool" anger as the two ends of a spectrum.
Most people experience hot anger from time to time. But I think EA figures—esp...
Thank you for this.
Strong contender for "top 10 EA Forum posts of 2023, according to Peter Hartree".
A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms.
[...]
Why more democratic decision making would be better has gone largely unargued. To the extent it has been, "conflicts of interest" and "insularity" seem like marginal problems compared to basically having a deep understanding of the most important questions for the future/global health and wellbeing.
Agree...
This post is much too long and we're all going to have trouble following the comments.
It would be much better to split this up and post as a series. Maybe do that, and replace this post with links to the series?
Context: I've worked in various roles at 80,000 Hours since 2014, and continue to support the team in a fairly minimal advisory role.
Views my own.
I agree that the heavy use of a poorly defined concept of "value alignment" has some major costs.
I've been moderately on the receiving end of this one. I think it's due to some combination of:
On short stories, two notable examples: Narrative Ark by Richard Ngo and the regular "Tech Tales" section at the end of Jack Clark's Import AI newsletter.