All of Tristan Cook's Comments + Replies

Replicating and extending the grabby aliens model

I agree with what you say, though would note

(1) maybe doom should be disambiguated between  "the short-lived simulation that I am in is turned off"-doom (which I can't really observe) and "the basement reality Earth I am in is turned into paperclips by an unaligned AGI"-type doom.

(2) conditioning on me being in at least one short-lived simulation, if the multiverse is sufficiently large and the simulation containing me is sufficiently 'lawful' then I may also expect there to be basement reality copies of me too. In this case,  doom is implied for (what I would guess is) most exact copies of me.

1Lukas_Finnveden5d
Yup, I agree the disambiguation is good. In aliens-context, it's even useful to disambiguate those types of doom from "Intelligence never leaves the basement reality Earth I am on"-doom. Since paperclippers probably would become grabby.
Is Our Universe A Newcomb’s Paradox Simulation?

Thanks for this post! I've been meaning to write something similar, and am glad you have :-)

I agree with your claim that most observers like us (who believe they are at the hinge of history) are in (short-lived) simulations. Brian Tomasik discusses how this marginally makes one value interventions with short-term effects. 

In particular, if you think the simulations won't include other moral patients simulated to a high resolution (e.g. Tomasik suggests this may be the case for wild animals in remote places), you would instrumentally care less about ... (read more)

2PaulCousens8d
Maybe it is 2100 or some other time in the future, and AI has already become super intelligent and eradicated or enslaved us since we failed to sufficiently adopt the values and thinking of longtermism. They might be running a simulation of us at this critical period of history to see what would have led to counterfactual histories in which we adopted longtermism and thus protected ourselves from them. They would use these simulations to be better prepared for humans that might be evolving or have evolved in distant parts of the universe that they haven't accessed yet. Or maybe they still enslave a small or large portion of humanity, and are using the simulations to determine whether it is feasible or worthwhile to let us free again, or even whether it is safe for them to let the remaining human prisoners continue living. In this case, hedonism would be more miserable.
3Jordan Arel9d
Thank you for this reply! Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was maybe assuming something like longtermists being simulated accurately and everything else having much lower resolution, such as only being philosophical zombies, though as I articulate this I'm not sure that would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.

And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I'm right then there's no one else to hear what I'm saying anyway! Plus the problem with solipsism is that to some degree everyone must evaluate it for themselves, since the case for it may vary quite a bit for different individuals depending on who in the universe you find yourself as.

Perhaps you are right about AI creating simulations. I'm not sure they would be as likely to create as many, but they may still create a lot. This is something I would have to think about more.

I think the argument with aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will come to the conclusion that they are in a simulation, and so anything that has the level of intelligence required to become spacefaring would also be intelligent enough to realize it is probably in a simulation and so it's not worth it. Perhaps this could even apply to AI. It is, I admit, quite an extreme statement that no set of beings would ever come to the conclusion that they might not be in a simulation, or would not pursue longtermism on the off-chance that they are not in a simulation. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calcu
Fermi estimation of the impact you might have working on AI safety

This tool is impressive, thanks! I like the framing you use of safety as a race against capabilities, though I don't really know what it would look like to have "solved" AGI safety 20 years before AGI. I also appreciate all the assumptions being listed at the end of the page.

Some minor notes

  • the GitHub link in the webpage footer points to the wrong page
  • I think two of the prompts "How likely is it to work?" and "How much do you speed it up?" would be made clearer if "it" was replaced by AGI safety (if that is what it is referring to).
1frib10d
Thank you for the feedback. It's fixed now!
Bad Omens in Current Community Building

Thanks for this post! I used to do some voluntary university community building, and some of your insights definitely ring true to me, particularly the Alice example - I'm worried that I might have been the sort of facilitator to not return to the assumptions in fellowships I've facilitated.

A small note:

Well, the most obvious place to look is the most recent Leader Forum, which gives the following talent gaps (in order):

This EA Leaders Forum was nearly 3 years ago, and so talent gaps have possibly changed. There was a Meta Coordination Forum last year run ... (read more)

Replicating and extending the grabby aliens model

This definitely sounds like a better approach than mine, thanks for sharing! This will be useful for me in any future projects.

Replicating and extending the grabby aliens model

Thanks for your questions and comments! I really appreciate someone reading through in such detail :-)

  • What is the highest probability of encountering aliens in the next 1000 years according to reasonable choices one could make in your model?

SIA  (with no simulations) gives the nearest and most numerous aliens. 

My bullish prior (which a priori has 80% credence in us not being alone) with SIA and the assumption that grabby aliens are hiding gives a median of ~ chance of a grabby civilization reaching us in the next 1000 years.

I do... (read more)

2kokotajlod18d
Don't you mean 1-that?
Replicating and extending the grabby aliens model

Great to see this work!

Thanks!

 Re the SIA Doomsday argument, I think that  is self-undermining for reasons I've argued elsewhere.

I agree. When I model the existence of simulations like us, SIA does not imply doom (as seen in the marginalised posteriors for  in the appendix here). 

Further, in the simulation case, SIA would prefer human civilization to be atypically likely to become a grabby civilization (this does not happen in my model, as I suppose all civs have the same transition chance to become grabby).

Re the habitability of pl

... (read more)
4Lukas_Finnveden5d
It does imply doom for us, since we're almost certainly in a short-lived simulation. And if we condition on being outside of a simulation, SIA also implies doom for us, since it's more likely that we'll find ourselves outside of a simulation if there are more basement-level civilizations, which is facilitated by more of them being doomed. It just implies that there weren't necessarily a lot of doomed civilizations in the basement-level universe, many basement-level years ago, when our simulators were a young civilization.
2CarlShulman17d
That's my read too. Also agreed that the basic modeling element of catastrophes (w/ various anthropic accounts, etc.) is more important/robust than the combo with other anthropic assumptions.
Replicating and extending the grabby aliens model

Thanks, glad to hear it!

I wrote it in Google Docs, primarily for the ease of getting comments. I then copied it into the EA Forum editor and spent a few hours fixing the formatting - all the maths had to be rewritten, all footnotes added back in, tables fixed, image captions added - which was a bit of a hassle. 

I sadly don't have any neat tricks. I tried this Google Docs tool to convert to Markdown but it didn't work well.

The EA Forum editor now has the ability to share drafts and allow comments and collaborative editing, which I think I'll try ... (read more)

EA coworking/lounge space on gather.town

This looks great, thanks for creating it! I could see it becoming a great 'default' place for EAs to meet for coworking or social things.

1Arepo1mo
That would be awesome :)
Replicating and extending the grabby aliens model

Thanks! I've considered it but have not decided whether I will. I'm unsure whether the decision-relevant parts (which I see as most important) or weirder stuff (like simulations) would need to be cut. 

Ben.Hartley's Shortform

Thanks for this post! I hadn't heard of Dysonian SETI before.

I'm wondering what your thoughts are on how one would promote Dysonian SETI? On the margin is this just scaling back existing 'active' SETI? Beyond attempts at xenoarchaeology in our solar system (which I think are practically certain to not turn up anything) I'm wondering what else is in this space.

A side note: this idea reminds me of the plot of the Mass Effect games!

1Ben.Hartley2mo
Dysonian SETI in practice has two prongs:
  • Theoretical: This is the field that hypothesizes the existence of observable artifacts such as Dyson Spheres, Von Neumann Probes, or Green Stars. (Along with more mundane observable artifacts such as terrestrial infrastructure or satellites.)
  • Observational: This is the field that looks for evidence of these artifacts via bigger and better telescopes, essentially.
So ultimately bigger and better satellites are what's in this space.
“cocoons”: an idea to critique.

The link to your post isn't working for me

1Thomas2mo
Thanks for letting me know, Tristan. It was a Medium draft. Please try again.
Tristan Cook's Shortform

A diagram to show possible definitions of existential risks (x-risks) and suffering risks (s-risks)

The (expected) value & disvalue of the entire world’s past and future can be placed on the below axes (assuming both are finite).

By these  definitions:

  • Some x-risks are s-risks
  • Not all s-risks are x-risks
antimonyanthony's Shortform

I find the framing of "experience slices" definitely pushes my intuitions in the same direction.

One question I like to think about is whether I'd choose to gain either
(a) a neutral experience
or 
(b) flipping a coin and reliving all the positive experience slices of my life if heads, and reliving all the negative ones if tails

My life feels highly net positive but I'd almost certainly not take option (b). I'd guess there's likely a risk-aversion intuition also being snuck in here too, though.
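
A small worked example of how that risk aversion could do the work (the numbers and the utility function are made up for illustration): suppose the positive slices sum to $+100$ and the negative slices to $-60$, so the life is net positive.

Expected welfare of the coin flip: $\tfrac{1}{2}(+100) + \tfrac{1}{2}(-60) = +20 > 0$
With a concave valuation $u(x) = 1 - e^{-x/50}$: $\tfrac{1}{2}u(+100) + \tfrac{1}{2}u(-60) \approx \tfrac{1}{2}(0.86) + \tfrac{1}{2}(-2.32) \approx -0.73 < u(0) = 0$

So a sufficiently risk-averse agent declines option (b) in favour of the neutral experience (a), even though the gamble has positive expected welfare.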

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Thanks for the post!

I'd recommend Daniel Kestenholz's energy log post  for a system and template for tracking energy throughout the day. 

Practical ethics given moral uncertainty

From 1. "the same ballpark as murder" the Internet Archive has it saved here
The link in 3 "in the same ballpark as walking past a child drowning in a shallow pond" is also dead, but  is in the Internet archive here

Edit: the link in 2 is also archived here

Linch's Shortform

Not 128kb (Slack resized it for me) but this worked for me

2Linch1y
Thank you!
Retrospective on Catalyst, a 100-person biosecurity summit

Both links to Catalyst are broken (I think they're missing https://)

2Aaron Gertler1y
That was the issue -- just fixed the links.
Exploring a Logarithmic Tolerance of Suffering

I really liked this post, and it made me think! Here are some stray thoughts which I'm not super confident in:

  • Something similar to Linear Tolerance and No Significant Tolerance are called negative-leaning utilitarianism (or weak negative utilitarianism) and lexical-threshold negative utilitarianism respectively (see here or here)
  • It seems like logarithmic trade-offs are just linear tolerance where we've (exponentially) scaled all the original suffering values. I'm not sure if it's easier just to think the suffering values were already this
... (read more)
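
A minimal sketch of the rescaling in the second bullet, assuming (my guess, not necessarily the original post's exact definition) that logarithmic tolerance permits a trade-off when happiness $h$ satisfies $h \geq \log(1 + s)$ for suffering $s$:

Rescale: $\tilde{s} := \log(1 + s)$, equivalently $s = e^{\tilde{s}} - 1$
Then: $h \geq \log(1 + s) \iff h \geq \tilde{s}$

which is just the linear-tolerance criterion applied to the (exponentially related) rescaled suffering values.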
1David Reber1y
Here I'm using x and y to denote amounts of suffering/happiness, whether constrained to one individual or spread among many (or even distributed among some non-individualistic sentience). Using exponentially-scaled linear tolerance seems equivalent mathematically. If anything, it highlights to me that how you define the measures for happiness and suffering is quite impactful, and needs to be carefully considered.
MaxG's Shortform

The blogger gwern has many posts on self-experiments  here.

1Max Görlitz1y
Cool! I knew gwern but wasn't aware of his experiments, thank you.
Tristan Cook's Shortform

Thanks for such a detailed and insightful response Gregory.

Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better as the classical one has to bite one extra bullet.

Thanks for pointing this out. I think I realised this extra bullet biting after making the post.
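
A small numeric illustration of that aggregation point (the numbers are made up): increase one person's suffering by $1000$ units while reducing each of $N$ other people's suffering by $\epsilon = 0.01$ units.

Total change in suffering: $1000 - N\epsilon$
For $N > 1000/\epsilon = 10^5$, e.g. $N = 10^6$: $1000 - 10^6 \times 0.01 = -9000 < 0$

So any view that simply sums suffering, classical or negative-totalist, ranks the trade as an improvement, which is the OC-style bullet both views have to bite.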

There's also the worry in a pairwise comp

... (read more)
Tristan Cook's Shortform

Suppose you think only suffering counts* (absolute negative utilitarianism); then the 'negative totalism' population axiology seems pretty reasonable to me.

The axiology does entail the 'Omelas Conclusion' (OC), an analogue of the Repugnant Conclusion (RC), which states that for any state of affairs there is a better state in which a single life is hellish and everyone else's life is free from suffering. As a form of totalism, the axiology does not lead to an analogue of the sadistic conclusion and is non-anti-egalitarian.

The OC (supposing absolute negati... (read more)

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusion's out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to... (read more)

Open and Welcome Thread: March 2021

Hello! I'm a maths master's student at Cambridge and have been involved with student groups for the last few years. I've been lurking on the forum for a long time and want to become more active. Hopefully this is the first comment of many!

4Ben_West1y
Welcome Tristan!
2OllieBase1y
Great to see you're still involved, Tristan! Looking forward to reading your thoughts :)
3Aaron Gertler1y
Welcome! If you ever have an idea for a post you'd like to write, I'd be glad to read over a draft [https://forum.effectivealtruism.org/posts/ZeXqBEvABvrdyvMzf/editing-available-for-ea-forum-drafts] . And be sure to check out the useful links page [https://forum.effectivealtruism.org/posts/fd3iQRkmCKCCL289u/new-start-here-useful-links] for other resources you might find useful.