Thanks for this post! I've been meaning to write something similar, and am glad you have :-)
I agree with your claim that most observers like us (who believe they are at the hinge of history) are in (short-lived) simulations. Brian Tomasik discusses how this consideration marginally increases the value of interventions with short-term effects.
In particular, if you think the simulations won't include other moral patients simulated to a high resolution (e.g. Tomasik suggests this may be the case for wild animals in remote places), you would instrumentally care less about ...
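To make the first claim a bit more concrete (all numbers below are made up for illustration): if each basement-reality civilization that reaches the hinge runs many short simulations of that period, almost all hinge-of-history observers end up being simulated.

```python
# Toy numbers, purely illustrative: what fraction of hinge-of-history observers are simulated?
basement_hinge_observers = 10_000_000_000   # assumed population of the 'real' hinge period
simulations_per_civ = 100_000               # assumed number of short simulations of that period
observers_per_simulation = 1_000_000        # assumed size of each (small, short-lived) simulation

simulated = simulations_per_civ * observers_per_simulation
fraction_simulated = simulated / (simulated + basement_hinge_observers)
print(fraction_simulated)   # ~0.91 with these made-up numbers; tends to 1 as simulations multiply
```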
This tool is impressive, thanks! I like the framing you use of safety as a race against capabilities, though I don't really know what it would look like to have "solved" AGI safety 20 years before AGI. I also appreciate all the assumptions being listed at the end of the page.
Some minor notes:
Thanks for this post! I used to do some voluntary university community building, and some of your insights definitely ring true to me, particularly the Alice example - I'm worried that I might have been the sort of facilitator who doesn't return to the assumptions in the fellowships I've facilitated.
A small note:
Well, the most obvious place to look is the most recent Leaders Forum, which gives the following talent gaps (in order):
This EA Leaders Forum was nearly 3 years ago, and so talent gaps have possibly changed. There was a Meta Coordination Forum last year run ...
This definitely sounds like a better approach than mine, thanks for sharing! This will be useful for me in any future projects.
Thanks for your questions and comments! I really appreciate someone reading through in such detail :-)
- What is the highest probability of encountering aliens in the next 1000 years according to reasonable choices one could make in your model?
SIA (with no simulations) gives the nearest and most numerous aliens.
My bullish prior (which a priori has 80% credence in us not being alone), combined with SIA and the assumption that grabby aliens are hiding, gives a median of ~ chance of a grabby civilization reaching us in the next 1000 years.
I do...
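To give a flavour of why SIA behaves this way (a toy sketch only, not the actual model): SIA weights each hypothesis by how many observers it contains, so hypotheses with denser (and hence nearer) civilizations get boosted.

```python
import numpy as np

# Toy SIA update over hypotheses about how densely civilizations arise.
# Numbers are made up for illustration; the real model is much richer.
densities = np.array([1e-12, 1e-9, 1e-6])   # civilizations per unit volume under each hypothesis
prior = np.array([0.5, 0.3, 0.2])           # some prior over the three hypotheses

# SIA: multiply the prior by the number of observers each hypothesis predicts
# (proportional to density here), then renormalise.
posterior = prior * densities
posterior /= posterior.sum()

print(posterior)   # nearly all mass lands on the densest hypothesis
```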
Great to see this work!
Thanks!
Re the SIA Doomsday argument, I think that is self-undermining for reasons I've argued elsewhere.
I agree. When I model the existence of simulations like us, SIA does not imply doom (as seen in the marginalised posteriors for in the appendix here).
Further, in the simulation case, SIA would prefer human civilization to be atypically likely to become a grabby civilization (this does not happen in my model, as I suppose all civs have the same transition chance to become grabby).
...

Re the habitability of pl...
Thanks, glad to hear it!
I wrote it in Google Docs, primarily for the ease of getting comments. I then copied it into the EA Forum editor and spent a few hours fixing the formatting - all the maths had to be rewritten, all footnotes added back in, tables fixed, image captions added - which was a bit of a hassle.
I sadly don't have any neat tricks. I tried this Google Docs tool to convert to Markdown but it didn't work well.
The EA Forum editor now has the ability to share drafts and allow comments and collaborative editing, which I think I'll try ...
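One route I haven't properly tested, which might cut down the manual fixing (assuming pandoc is installed and the Doc is exported as .docx; the filenames are placeholders):

```python
import subprocess

# Convert an exported Google Doc (.docx) to GitHub-flavoured Markdown with pandoc,
# pulling embedded images out into a local folder. Untested for EA Forum specifics.
subprocess.run(
    [
        "pandoc", "draft.docx",        # placeholder name for the exported Doc
        "-f", "docx",
        "-t", "gfm",
        "-o", "draft.md",
        "--extract-media=./media",     # images end up in ./media and are linked from the Markdown
    ],
    check=True,
)
```

Maths and footnotes would probably still need some manual attention.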
This looks great, thanks for creating it! I could see it becoming a great 'default' place for EAs to meet for coworking or social things.
+1
I think Hanson et al. mention something like this too.
Thanks! I've considered it but have not decided whether I will. I'm unsure whether the decision relevant parts (which I see as most important) or weirder stuff (like simulations) would need to be cut.
Thanks for this post! I hadn't heard of Dysonian SETI before.
I'm wondering what your thoughts are on how one would promote Dysonian SETI? On the margin, is this just scaling back existing 'active' SETI? Beyond attempts at xenoarchaeology in our solar system (which I think are practically certain to not turn up anything), I'm wondering what else is in this space.
A side note: this idea reminds me of the plot of the Mass Effect games!
The link to your post isn't working for me
A diagram to show possible definitions of existential risks (x-risks) and suffering risks (s-risks)
The (expected) value & disvalue of the entire world’s past and future can be placed on the below axes (assuming both are finite).
By these definitions:
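As a very rough formalisation of the idea (my shorthand rather than the diagram itself, with V for the world's total value and D for its total disvalue, both assumed finite):

```latex
% Sketch only, assumed shorthand: V = total (expected) value, D = total (expected) disvalue.
% x-risk: risk of an outcome whose net value falls far short of what was attainable:
\mathbb{E}[V - D \mid \text{outcome}] \ll \max_{\text{attainable futures}} \mathbb{E}[V - D]
% s-risk: risk of an outcome containing disvalue on an astronomical scale:
\mathbb{E}[D \mid \text{outcome}] \gg \mathbb{E}[D \mid \text{history so far}]
```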
I find the framing of "experience slices" definitely pushes my intuitions in the same direction.
One question I like to think about is whether I'd choose to gain either
(a) a neutral experience
or
(b) flipping a coin and reliving all the positive experience slices of my life if heads, and reliving all the negative ones if tails
My life feels highly net positive, but I'd almost certainly not take option (b). I'd guess there's likely some risk-aversion intuition being snuck in here too, though.
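A toy way to see the risk-aversion point (all numbers made up): even when the positive slices outweigh the negative ones in total, a utility function that penalises bad outcomes more steeply makes the coin flip look bad.

```python
# Toy numbers, purely illustrative: suppose my positive slices sum to +100
# and my negative slices to -60, so life is net positive (+40).
positive_total, negative_total = 100.0, -60.0

# Option (a): a neutral experience, worth 0.
value_a = 0.0

# Option (b): fair coin; heads -> relive the positive slices, tails -> relive the negative ones.
expected_b = 0.5 * positive_total + 0.5 * negative_total   # = +20 > 0

# A risk-averse chooser might instead compare (concave) utilities of the outcomes,
# e.g. one that penalises losses more steeply than it rewards gains.
def utility(x, loss_aversion=3.0):
    return x if x >= 0 else loss_aversion * x

expected_utility_b = 0.5 * utility(positive_total) + 0.5 * utility(negative_total)
print(expected_b, expected_utility_b)   # +20.0 vs -40.0: (b) flips from attractive to not
```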
Thanks for the post!
I'd recommend Daniel Kestenholz's energy log post for a system and template for tracking energy throughout the day.
Not 128kb (Slack resized it for me) but this worked for me
Both links to Catalyst are broken (I think they're missing https://)
I really liked this post, and it made me think! Here are some stray thoughts which I'm not super confident in:
The blogger gwern has many posts on self-experiments here.
Thanks for such a detailed and insightful response Gregory.
Your archetypal classical utilitarian is also committed to the OC, as a 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.
Thanks for pointing this out. I think I realised this extra bullet biting after making the post.
...

There's also the worry in a pairwise comp...
Suppose you think only suffering counts* (absolute negative utilitarianism); then the 'negative totalism' population axiology seems pretty reasonable to me.
The axiology does entail the 'Omela Conclusion' (OC), an analogue of the Repugnant Conclusion (RC), which states that for any state of affairs there is a better state in which a single life is hellish and everyone else's life is free from suffering. As a form of totalism, the axiology does not lead to an analogue of the sadistic conclusion and is non-anti-egalitarian.
The OC (supposing absolute negati...
Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusion's out there). Trying to weigh these up comparatively is fraught.
In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to...
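A toy numerical version of that aggregation point (made-up numbers): one person's suffering gets far worse, very many others' suffering gets slightly better, and the total improves on both classical and negative totalist counting.

```python
# Toy illustration: one person's suffering increases a lot, many others' suffering
# decreases a little, and the aggregate still improves.
baseline = [-1.0] * 1_000_001 + [0.0]        # everyone has a small amount of suffering
omelas   = [0.0] * 1_000_001 + [-1_000.0]    # all but one are suffering-free; one life is hellish

def total_welfare(world):
    return sum(world)

print(total_welfare(baseline))   # -1_000_001
print(total_welfare(omelas))     # -1_000
# Both classical and negative totalism rank the 'Omelas' world higher here,
# since aggregation applies to negative numbers on both views.
```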
Hello! I'm a maths master's student at Cambridge and have been involved with student groups for the last few years. I've been lurking on the forum for a long time and want to become more active. Hopefully this is the first comment of many!
I agree with what you say, though would note:
(1) maybe doom should be disambiguated between "the short-lived simulation that I am in is turned off"-doom (which I can't really observe) and "the basement reality Earth I am in is turned into paperclips by an unaligned AGI"-type doom.
(2) conditioning on me being in at least one short-lived simulation, if the multiverse is sufficiently large and the simulation containing me is sufficiently 'lawful' then I may also expect there to be basement reality copies of me too. In this case, doom is implied for (what I would guess is) most exact copies of me.
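To make (2) concrete with made-up numbers: if each basement-reality copy of me comes with many short-lived simulated copies, then some form of doom is implied for the overwhelming majority of my exact copies.

```python
# Illustrative only: count exact copies of 'me' across basement realities and simulations.
basement_copies = 10                  # assumed copies of me in basement reality across the multiverse
sims_per_basement_copy = 1_000        # assumed short-lived simulations containing a copy of me

simulated_copies = basement_copies * sims_per_basement_copy
total_copies = basement_copies + simulated_copies

# Simulated copies face 'simulation switched off' doom; on these numbers that is
# ~99.9% of all copies, so doom of some kind holds for most exact copies of me.
print(simulated_copies / total_copies)   # 10000/10010 ≈ 0.999
```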