All of Tristan Cook's Comments + Replies

Thanks again Phil for taking the time to read this through and for the in-depth feedback.

I hope to take some time to create a follow-up post, working in your suggestions and corrections as external updates (e.g. updating parameters for the lower total AI risk funding and shorter Metaculus timelines).

I don't know if the “only one big actor” simplification holds closely enough in the AI safety case for the "optimization" approach to be a better guide, but it may well be.

This is a fair point.

The initial motivator for the project was for AI s-risk funding, of which there'... (read more)

Strong agreement that a global moratorium would be great.

I'm unsure whether a global moratorium is the best thing to aim for, rather than a slowing of the race-like behaviour -- maybe a relevant similar case is whether to aim directly for the abolition of factory farms or just incremental improvements in welfare standards.

This post from last year - What an actually pessimistic containment strategy looks like -  has some good discussion on the topic of slowing down AGI research.

4
Greg_Colbourn
2mo
Loudly and publicly calling for a global moratorium should have the effect of slowing down race-like behaviour, even if it is ultimately unsuccessful. We can at least buy some more time, it's not all or nothing in that sense. And more time can be used to buy yet more time, etc. Factory farming is an interesting analogy, but the trade-off is different. You can think about whether abolitionism or welfarism has higher EV over the long term, but the stakes aren't literally the end of the world if factory farming continues to gain power for 5-15 more years (i.e. humanity won't end up in them). The linked post is great, thanks for the reminder of it (and good to see it so high up the All Time top LW posts now). Who wants to start the institution lc talks about at the end? Who wants to devote significant resources to working on convincing AGI capabilities researchers to stop?

Thanks for the transcript and sharing this. The coverage seems pretty good, and the airplane crash analogy seems pretty helpful for communicating  - I expect to use it in the future!

I agree. This lines up with models of optimal spending I worked on, which allowed for a post-fire-alarm "crunch time" in which one can spend a significant fraction of remaining capital.

I think "different timelines don't change the EV of different options very much" plus "personal fit considerations can change the EV of a PhD by a ton" does end up resulting in an argument for the PhD decision not depending much on timelines. I think that you're mostly disagreeing with the first claim, but I'm not entirely sure.

Yep, that's right that I'm disagreeing with the first claim.  I think one could argue the main claim either by:

  1. Regardless of your timelines, you (person considering doing a PhD) shouldn't take it too much into consideration
  2. I (a
... (read more)

I think you raise some good considerations but want to push back a little.

I agree with your arguments that
- we shouldn't use point estimates (of the median AGI date)
- we shouldn't fully defer to (say) Metaculus estimates.
- personal fit is important

But I don't think you've argued that "Whether you should do a PhD doesn't depend much on timelines."

Ideally, as a community, we could estimate the optimal number of people in the community who should do PhDs (factoring in their personal fit etc.) vs taking other paths.

I don't think this has been done, but since most ... (read more)

6
alex lawsen (previously alexrjl)
2mo
I think "different timelines don't change the EV of different options very much" plus "personal fit considerations can change the EV of a PhD by a ton" does end up resulting in an argument for the PhD decision not depending much on timelines. I think that you're mostly disagreeing with the first claim, but I'm not entirely sure. In terms of your point about optimal allocation, my guess is that we disagree to some extent about how much the optimal allocation has changed, but that the much more important disagreement is about whether some kind of centrally planned 'first decide what fraction of the community should be doing what' approach is a sensible way of allocating talent, where my take is that it usually isn't. I have a vague sense of this talent allocation question having been discussed a bunch, but don't have write-up that immediately comes to mind that I want to point to. I might write something about this at some point, but I'm afraid it's unlikely to be soon. I realise that I haven't argued for my talent allocation claim at all, which might be frustrating, but it seemed better to highlight the disagreement at all than ignore it, given that I didn't have the time to explain in detail.

Thanks for the post!

In this post, I'll argue that when counterfactual reasoning is applied the way Effective Altruist decisions and funding occurs in practice, there is a preventable anti-cooperative bias that is being created, and that this is making us as a movement less impactful than we could be.

One case I've previously thought about is that some naive forms of  patient philanthropy could be like this - trying to take credit for spending on the "best"  interventions.

I've polished an old draft and posted it as short-form with some discuss... (read more)

Some takes on patient philanthropy

Epistemic status: I’ve done work suggesting that AI risk funders should be spending at a higher rate, and I'm confident in this result. The other takes are less informed!

I discuss

  • Whether I think we should be spending less now
  • Useful definitions of patient philanthropy
  • Being specific about empirical beliefs that push for more patience  
  • When patient philanthropy is counterfactual
  • Opportunities for donation trading between patient and non-patient donors

Whether I think we should be spending less now

In principle I think th... (read more)

DM = digital mind

Archived version of the post (with no comments at the time of the archive). The post is also available on the Sentience Institute blog

I think you are mistaken on how Gift Aid / payroll giving works in the UK (your footnote 4), it only has an effect once you are a higher rate or additional rate taxpayer. I wrote some examples up here. As a basic rate taxpayer you don't get any benefit - only the charity does.

Thanks for the link to your post!  I'm a bit confused about where I'm mistaken. I wanted to claim that: 

(ignoring payroll giving or claiming money back from HMRC, as you discuss in your post) taking a salary cut (while at the 40% marginal tax rate) is more effici... (read more)

2
Rasool
3mo
Ah gotcha, re: the pay cut thing then yes 100%, not least because employers also pay national insurance of 13.8%! So your employer is paying 13.8%, then you are paying 40% income tax, and 2% employee national insurance. And gift aid / payroll giving is pretty good, but not that good!
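
A rough sketch of the arithmetic in this exchange (rates as quoted above: 40% income tax, 2% employee NI, 13.8% employer NI, and Gift Aid with higher-rate relief; it assumes the employer passes its NI saving on to the charity, which won't always hold):

```python
# Toy comparison: how much a charity receives per pound of net income the donor
# gives up, via (a) a gross salary sacrifice vs (b) a Gift Aid donation from net pay.
# Rates as discussed above; illustrative, not tax advice.

INCOME_TAX = 0.40          # higher-rate income tax
EMPLOYEE_NI = 0.02         # employee national insurance at this band
EMPLOYER_NI = 0.138        # employer national insurance
GIFT_AID_GROSS_UP = 0.25   # charity reclaims basic-rate tax: +25p per £1 donated
HIGHER_RATE_RELIEF = 0.25  # donor reclaims (40% - 20%) of the grossed-up donation

def salary_sacrifice(gross_cut):
    """Charity receipt and donor net-pay cost for a pay cut of `gross_cut`."""
    charity_gets = gross_cut * (1 + EMPLOYER_NI)  # if the employer passes on its NI saving
    donor_cost = gross_cut * (1 - INCOME_TAX - EMPLOYEE_NI)
    return charity_gets, donor_cost

def gift_aid(donation):
    """Charity receipt and donor net cost for a donation made from net pay."""
    charity_gets = donation * (1 + GIFT_AID_GROSS_UP)
    donor_cost = donation * (1 - HIGHER_RATE_RELIEF)
    return charity_gets, donor_cost

for name, (charity, cost) in [("salary sacrifice", salary_sacrifice(100)),
                              ("gift aid", gift_aid(100))]:
    print(f"{name}: charity gets {charity:.2f} per {cost:.2f} of net income "
          f"({charity / cost:.2f}x)")
# salary sacrifice ≈ 1.96x vs gift aid ≈ 1.67x, which is the sense in which
# "gift aid is pretty good, but not that good".
```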

How I think we should do anthropics

I think we should reason in terms of decisions  and not use anthropic updates or probabilities at all. This is what is argued for in Armstrong's Anthropic Decision Theory, which itself is a form of updateless decision theory.

In my mind, this resolves a lot of confusion around anthropic problems when they're reframed as decision problems. 

If I had to pick a traditional anthropic theory...

I'd pick, in this order,

  1. Minimal reference class SSA
  2. SIA
  3. Non-minimal reference class SSA

I choose this ordering because both minima... (read more)

Anthropics: my understanding/explanation and takes

In this first comment, I stick with the explanations. In sub-comments, I'll give my own takes.

Setup

We need the following ingredients:

  • A non-anthropic prior over worlds[1]
  • A set of all the observers in each world
  • A subset of those observers in each world that contain your exact current observer moment
    • Note it's possible for this subset to be empty - worlds in
... (read more)
4
Tristan Cook
3mo
How I think we should do anthropics

I think we should reason in terms of decisions and not use anthropic updates or probabilities at all. This is what is argued for in Armstrong's Anthropic Decision Theory [https://arxiv.org/abs/1110.6437], which itself is a form of updateless decision theory. In my mind, this resolves a lot of confusion around anthropic problems when they're reframed as decision problems.

If I had to pick a traditional anthropic theory...

I'd pick, in this order,

  1. Minimal reference class SSA
  2. SIA
  3. Non-minimal reference class SSA

I choose this ordering because both minimal reference class SSA and SIA can give the 'best' decisions (ex-ante optimal ones) in anthropic problems,[1] when paired with the right decision theory. Minimal reference class SSA needs pairing with an evidential-like decision theory, or one that supposes you are making choices for all your copies. SIA needs pairing with a causal-like decision theory (or one that does not suppose your actions give evidence for, or directly control, the actions of your copies). Since I prefer the former set of decision theories, I prefer minimal reference class SSA to SIA.

Non-minimal reference class SSA, meanwhile, cannot be paired with any (standard) decision theory to get ex-ante optimal decisions in anthropic problems. For more on this, I highly recommend Oesterheld & Conitzer's Can de se choice be ex ante reasonable in games of imperfect recall? [https://www.andrew.cmu.edu/user/coesterh/DeSeVsExAnte.pdf]

  1. ^ For example, the sleeping beauty problem or the absent-minded driver problem
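
A minimal sketch of the kind of check behind the footnote's claim, using the sleeping beauty problem with a toy bet (gain g if tails, lose l if heads, offered at every awakening; the bet and numbers are illustrative assumptions, not from the comment):

```python
# Sleeping beauty: fair coin, 1 awakening on heads, 2 on tails.
# At each awakening you may accept a bet that gains g if tails and loses l if heads.
# Compare the accept/reject threshold implied by each (credence, decision-theory)
# pairing with the ex-ante optimal policy (accept iff l < 2g).

def ex_ante_ev(g, l):
    # Planning before the coin flip: heads -> one losing bet, tails -> two winning bets.
    return 0.5 * (-l) + 0.5 * (2 * g)

def sia_causal_ev(g, l):
    # SIA credence P(tails | awake) = 2/3, counting only this awakening's payoff.
    return (2 / 3) * g + (1 / 3) * (-l)

def min_ssa_evidential_ev(g, l):
    # Minimal-reference-class SSA credence P(tails | awake) = 1/2,
    # treating your choice as made by both tails awakenings.
    return 0.5 * (2 * g) + 0.5 * (-l)

def mismatched_ev(g, l):
    # A mismatched pairing (credence 1/2, counting only this awakening): not ex-ante optimal.
    return 0.5 * g + 0.5 * (-l)

g = 1.0
for l in (1.5, 2.5):
    accepts = [f(g, l) > 0 for f in (ex_ante_ev, sia_causal_ev, min_ssa_evidential_ev, mismatched_ev)]
    print(f"l={l}: accept? {accepts}")
# l=1.5: the first three all accept (matching the ex-ante optimum), the mismatched pairing refuses;
# l=2.5: all refuse. The well-paired theories track the ex-ante optimal threshold l < 2g.
```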

My recommendations for small donors

I think there are benefits to thinking about where to give (fun, having engagement with the community, skill building, fuzzies)[1] but I think that most people shouldn’t think too much about it and - if they are deciding where to give - should do one of the following.

1 Give to the donor lottery

I primarily recommend giving through a donor lottery and then only thinking about where to give in the case you win. There are existing arguments for the donor lottery.

2 Deliberately funge with funders you trust

Alternatively I ... (read more)

2
Rasool
3mo
  • Some interesting discussion at taking a pay cut when working in something directly here [https://forum.effectivealtruism.org/posts/un3fHR8azykyCoCui/passing-up-pay].
  • I think you are mistaken on how Gift Aid / payroll giving works in the UK (your footnote 4), it only has an effect once you are a higher rate or additional rate taxpayer. I wrote some examples up here [https://forum.effectivealtruism.org/posts/KsgmLHwqRj7fZ9szo/uk-personal-finance-tips-and-info#Giving]. As a basic rate taxpayer you don't get any benefit - only the charity does.
  • My impression is that people within EA already defer too much in their donation choices and so should be spending more time thinking about how and where to give, what is being missed by Givewell/OP etc. Or defer some (large) proportion of their giving to EA causes but still have a small amount for personal choices.

Using  goal factoring on tasks with ugh fields

Summary: Goal factor ugh tasks (listing the reasons for completing the task) and then generate multiple tasks that achieve each subgoal.

Example: email

 I sometimes am slow to reply to email and develop an ugh-field around doing it. Goal factoring "reply to the email" into

  • complete sender's request
  • be polite to the sender (i.e. don't take ages to respond)

one can see that the first sub-goal may take some time (and maybe is the initial reason for not doing it straight away), while the second sub-goal is easy! One... (read more)

Do you have any updates / plan to publish anything about the Monte Carlo simulation approach you write about in footnote 3?

Thanks for the post! I thought it was interesting and thought-provoking, and I really enjoy posts like this one that get serious about building models. 

Thanks :-)

One thought I did have about the model is that (if I'm interpreting it right) it seems to assume a 100% probability of fast takeoff (from strong AGI to ASI/the world totally changing), which isn't necessarily consistent with what most forecasters are predicting. For example, the Metaculus forecast for years between GWP growth >25% and AGI assigns a ~25% probability that it will be at least

... (read more)

Thanks for putting it together! I'll give this a go in the next few weeks :-)

In the past I've enjoyed doing the YearCompass.

Thanks!

And thanks for the suggestion, I've created a version of the model using a Monte Carlo simulation here :-)

4
Vasco Grilo
6mo
Great, I will have a look!

This is a short follow-up to my post on the optimal timing of spending on AGI safety work which, given exact values for the future real interest rate, diminishing returns and other factors, calculated the optimal spending schedule for AI risk interventions.

This has also been added to the post’s appendix and assumes some familiarity with the post.

Here I consider the most robust spending policies, supposing uncertainty over nearly all parameters in the model.[1] Inputs that are not considered include: historic spending on research and influence, rather... (read more)
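
As a toy illustration of what evaluating spending policies under parameter uncertainty can look like (a made-up two-period stand-in with hypothetical distributions, not the model from the post):

```python
import random

# Toy stand-in: score a few candidate constant spending rates across Monte Carlo
# draws of uncertain parameters, and compare their average performance.
# The objective function and parameter ranges below are invented for illustration.

random.seed(0)

def utility(spend_rate, interest, returns_exponent):
    # Value of work funded now (diminishing returns) plus value of spending the
    # remaining capital after it has grown for one period.
    work_now = spend_rate ** returns_exponent
    work_later = ((1 - spend_rate) * (1 + interest)) ** returns_exponent
    return work_now + work_later

draws = [(random.uniform(0.0, 0.06),   # uncertain real interest rate
          random.uniform(0.3, 0.7))    # uncertain diminishing-returns exponent
         for _ in range(10_000)]

for rate in (0.02, 0.05, 0.10, 0.20):
    mean_u = sum(utility(rate, r, k) for r, k in draws) / len(draws)
    print(f"spend {rate:.0%} per period: mean utility {mean_u:.3f}")
```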

Previously the benefactor has been Carl Shulman (and I'd guess he is again, but this is pure speculation). From the 2019-2020 donor lottery page:

Carl Shulman will provide backstop funding for the lotteries from his discretionary funds held at the Centre for Effective Altruism.

The funds mentioned are likely these $5m from March 2018:

The Open Philanthropy Project awarded a grant of $5 million to the Centre for Effective Altruism USA (CEA) to create and seed a new discretionary fund that will be administered by Carl Shulman

Answer by Tristan Cook, Nov 24, 2022

I'll be giving to the EA Funds donor lottery (hoping it's announced soon :-D )

5
Lorenzo Buonanno
6mo
The link is now https://www.givingwhatwecan.org/donor-lottery and it has been announced right after this comment! https://forum.effectivealtruism.org/posts/GnJQaSaXRebZgrmg3/the-2022-giving-what-we-can-donor-lottery-is-now-open

This is great to hear! I'm personally more excited by quality-of-life improvement interventions rather than saving lives so really grateful for this work.

Echoing kokotajlod's question for GiveWell's recommendations, do you have a sense of whether your recommendations change with a very high discount rate (e.g. 10%)? Looking at the graph of GiveDirectly vs StrongMinds it looks like the vast majority of benefits are in the first ~4 years

Minor note: the link at the top of the page is broken (I think the 11/23 in the URL needs to be changed to 11/24)

5
MichaelPlant
6mo
Do you mean a pure time discount rate, or something else? I think a pure-time discount rate would actually boost the case for StrongMinds, right? Regarding cash vs therapy, the benefits from therapy happen more so at the start. Regarding saving lives vs improving lives, the benefit of a saved life presumably applies over the many extra years the person lives for.
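
A quick way to see the point about where the benefits fall in time (the benefit streams below are made up for illustration, not HLI's numbers): discount each year's benefit by 1/(1+r)^t and compare how much of the total survives at higher discount rates.

```python
# Illustrative only: a front-loaded stream (most benefit in the first ~4 years, as with
# the StrongMinds graph mentioned above) vs the same undiscounted total spread over
# 40 years (closer to the benefits of a saved life).

def present_value(benefits, r):
    return sum(b / (1 + r) ** t for t, b in enumerate(benefits))

front_loaded = [10, 8, 5, 2] + [0] * 36   # totals 25, mostly in the first 4 years
spread_out = [0.625] * 40                 # also totals 25, spread over 40 years

for r in (0.0, 0.04, 0.10):
    print(f"r={r:.0%}: front-loaded PV {present_value(front_loaded, r):.1f}, "
          f"spread-out PV {present_value(spread_out, r):.1f}")
# At r=0 both are worth 25. At r=4% the spread-out stream has already lost about half
# its value; at r=10% it has lost roughly three quarters, while the front-loaded stream
# keeps over 90% - so a high pure time discount rate favours the front-loaded intervention.
```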

When LessWrong posts are crossposted to the EA Forum, there is a link in EA Forum comments section:

This link just goes to the top of the LessWrong version of the post and not to the comments. I think either the text should be changed or the link should go to the comments section.

In this recent post from Oscar Delaney they got the following result (sadly it doesn't go up to 10%, and in the linked spreadsheet the numbers seem hardcoded):

[Chart: cost-effectiveness vs discount rate]

Top three are Helen Keller International (0.122), Sightsavers (0.095), AMF (0.062).

(minor point that might help other confused people)

I had to google CMO  (which I found to mean Chief Marketing Officer) and also thought that BOAS might be an acronym - but found on your website

BOAS means good in Portuguese, clearly explaining what we do in only four letters! 

1
Vincent van der Holst
6mo
Thanks Tristan! I've added the full meaning in the text to avoid confusion and also added BOAS means good. 

Increasing/decreasing one's AGI timelines increases/decreases the importance[1] of non-AGI existential risks because there is more/less time for them to occur[2] (see the numerical sketch after the footnotes).

Further, as time passes and we get closer to AGI, the importance of non-AI x-risk decreases relative to AI x-risk. This is a particular case of the above claim.

  1. ^ but not necessarily tractability & neglectedness
  2. ^ If we think that nuclear/bio/climate/other work becomes irrelevant post-AGI, which seems very plausible to me
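
A minimal numerical version of this claim (the annual risk figure is a placeholder, not an estimate): if non-AI work only matters for the years before AGI, the cumulative risk it can address is 1 - (1 - p)^T for annual risk p and T years until AGI.

```python
# Placeholder annual non-AI existential risk; only years before AGI are counted,
# per the assumption in footnote 2 that this work is irrelevant post-AGI.
p_annual = 0.001

for years_to_agi in (10, 30, 80):
    cumulative = 1 - (1 - p_annual) ** years_to_agi
    print(f"AGI in {years_to_agi} years: cumulative pre-AGI non-AI risk = {cumulative:.2%}")
# Roughly 1.0%, 3.0% and 7.7%: shorter timelines leave less cumulative non-AI risk
# to address, which is the sense in which importance falls as timelines shorten.
```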

These seem neat! I'd recommend posting them to the EA Forum - maybe just as a shortform - as well as on your website so people can discuss the thoughts you've added (or maybe even posting the thoughts on your shortform with a link to your summary).

For a while I ran a podcast discussion meeting at my local group and I think summaries like this would have been super useful to send to people who didn't want to / have time to listen. As a bonus - though maybe too much effort - one could generate discussion prompts based on the episode.

1
Trish
7mo
That's a good idea! I'll try posting the Will MacAskill interview later since that one is relatively recent. 

This looks exciting!

The application form link doesn't currently work. 

1
Anne Nganga
8mo
Hi Tristan, thanks for noting that. We're working to add a working link. It should be up in no time.

I highly recommend Nick Bostrom's  working paper Base Camp for Mt. Ethics.

Some excerpts on the idea of the cosmic host that I liked most:

34. At the highest level might be some normative structure established by what we may term the cosmic host. This refers to the entity or set of entities whose preferences and concordats dominate at the largest scale, i.e. that of the cosmos (by which I mean to include the multiverse and whatever else is contained in the totality of existence). It might conceivably consist of, for example, galactic civilizations, simu

... (read more)

I've been building a model to calculate the optimal spending schedule on AGI safety and am looking for volunteers to run user experience testing.
 

Let me know via DM on the forum or email if you're interested  :-) 

 The only requirements are (1) to be happy to call & share your screen for ~20 to ~60 minutes while you use the model  (a Colab notebook which runs in your browser) and (2) some interest in AI safety strategy (but certainly no expertise necessary)

 

I was also not sure how the strong votes worked, but found a description from four years ago here. I'm not sure if the system is still up to date.

Normal votes (one click) will be worth

  • 3 points – if you have 25,000 karma or more
  • 2 points – if you have 1,000 karma
  • 1 point  – if you have 0 karma

Strong Votes (click and hold) will be worth

  • 16 points (maximum) – if you have 500,000 karma
  • 15 points – 250,000
  • 14 points – 175,000
  • 13 points – 100,000
  • 12 points – 75,000
  • 11 points – 50,000
  • 10 points – 25,000
  • 9 points  – 10,000
  • 8 points  – 5,000
  • 7 points  – 2,500
  • 6 points
... (read more)
2
Lukas Finnveden
9mo
I think that's right other than that weak upvotes never become worth 3 points anymore (although this doesn't matter on the EA forum, given that no one has 25,000 karma), based on this lesswrong github file [https://github.com/ForumMagnum/ForumMagnum/blob/devel/packages/lesswrong/lib/voting/voteTypes.ts] linked from the LW FAQ [https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq#What_s_the_mapping_between_users__karma_and_voting_power_].
Kirsten
9mo

That seems accurate to me, my normal upvote is +2 and my strong upvote is +8.

Thanks for writing this! I think you're right that if you buy the Doomsday argument (or assumptions that lead to it) then we should update against worlds with 10^50 future humans and towards worlds with Doom-soon.

However, you write

My take is that the Doomsday Argument is ... but it follows from the assumptions outlined

which I don't think is true. For example, your assumptions seem equally compatible with the self-indication assumption (SIA) that doesn't predict Doom-soon.[1]

I think a lot of confusions in anthropics go away when we convert probability quest... (read more)

4
iporphyry
9mo
Thanks for the links!  They were interesting and I'm happy that philosophers, including ones close to EA, are trying to grapple with these questions.  I was confused by SIA, and found that I agree with Bostrom's critique of it [https://ora.ox.ac.uk/objects/uuid:88896e88-88b8-4bb3-97db-2f4da16ee5f9/files/m5fe31a1cce4eec8107d9a2236ecafbfa] much more than with the argument itself. The changes to the prior it proposes seem ad hoc, and I don't understand how to motivate them. Let me know if you know how to motivate them (without a posteriori arguments that they - essentially by definition - cancel the update terms in the DA). It also seems to me to quickly lead to infinite expectations if taken at face value, unless there is a way to consistently avoid this issue by avoiding some kind of upper bound on population? Anthropic decision theory seems more interesting to me, though I haven't had a chance to try to understand it yet. I'll take a look at the paper you linked when I get a chance. 
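
A compact way to see the 'cancellation' point in this exchange (the two hypotheses, the 50/50 prior and the birth rank are toy assumptions): SSA reweights each world by 1/N, while SIA first reweights by the number of observers N, so the two factors cancel and the Doomsday shift disappears.

```python
# Toy Doomsday calculation: two hypotheses about the total number of humans ever,
# a 50/50 prior, and a birth rank of ~100 billion.
#   SSA: P(world | my rank) ∝ prior(world) * 1/N      (I'm a random sample of the N humans)
#   SIA: P(world | I exist at this rank) ∝ prior(world) * N * 1/N = prior(world)

rank = 1e11
worlds = {"doom soon (N = 2e11)": 2e11, "big future (N = 2e14)": 2e14}
prior = {name: 0.5 for name in worlds}
assert all(rank <= n for n in worlds.values())  # our rank is consistent with both worlds

def normalise(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

ssa = normalise({name: prior[name] / n for name, n in worlds.items()})
sia = normalise({name: prior[name] * n / n for name, n in worlds.items()})

print("SSA posterior:", {k: round(v, 4) for k, v in ssa.items()})  # ~0.999 on doom soon
print("SIA posterior:", {k: round(v, 4) for k, v in sia.items()})  # back to the 50/50 prior
```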

Thanks for your response Robin.

I stand by the claim that both (updating on the time remaining) and (considering our typicality among all civilizations) are errors in anthropic reasoning, but agree there are non-time-remaining reasons to expect  (e.g. by looking at steps on the evolution to intelligent life and reasoning about their difficulties). I think my ignorance-based prior on  was naive for not considering this.

I will address the issue of the compatibility of high  and high  by look... (read more)

Answer by Tristan Cook, Aug 18, 2022

I recently wrote about how AGI timelines change the relative value of 'slow'-acting neartermist interventions compared to 'fast'-acting ones.

It seems to me that EAs in other cause areas mostly ignore this, though I haven't looked into this too hard. 

My (very rough) understanding of Open Philanthropy's worldview diversification approach is that the Global Health and Wellbeing focus area team operates on (potentially) different values and epistemic approaches from the Longtermism focus area team. The epistemic a... (read more)

Thanks for running the survey, I'm looking forward to seeing results!

I've filled out the form but find some of the potential arguments problematic. It could be worth seeing how persuasive others find these arguments, but I would be hesitant to promote arguments that don't seem robust. In general, I think more disjunctive arguments work well.

For example, (being somewhat nitpicky):

Everyone you know and love would suffer and die tragically.  

Some existential catastrophes could happen painlessly and quickly.

We would destroy the universe's only chance a

... (read more)
2
Wim
10mo
Thanks a lot for your thoughtful feedback! I share the hesitancy around promoting arguments that don’t seem robust. To keep the form short, I only tried to communicate the thrust of the arguments. There are stronger and more detailed versions of most of them, which I plan on using.

In the cases you pointed to: Some existential risks could definitely happen rather painlessly. But some could also happen painfully, so while the argument is perhaps not all encompassing, I think it still stands. Nevertheless, I’ll change it to something more like “you and everyone you know and love will die prematurely.” Other intelligent life is definitely a possibility, but even if it’s a reality, I think we can still consider ourselves cosmically significant. I’ll use a less objectionable version of this argument like “... destroy what could be the universe’s only chance…”

I got the co-benefits argument from this paper [https://doi.org/10.1016/j.futures.2015.03.001], which lists a bunch of co-benefits of GCR work, one of which I could swap the “better healthcare infrastructure bit.” I’ll try to get a few more opinions on this.

In any case, thanks again for your comment—I hadn’t considered some of the objections you pointed out!

I definitely agree people should be thinking about this! I wrote about something similar last week :-)

1
Samin
10mo
Awesome! I didn't consider the spending speed here. It highlights another important part of the analysis one should make when considering neartermist donations conditional on the short timelines. Dependent on humanity solving alignment, you not only want to spend the money before a superintelligence appears but also might maximize the impact by, e.g., delaying deaths until then.

  • Is there a better word than 'sustenance' for outcomes where humanity does not suffer a global catastrophe?

There is some discussion here about such a term
 

1
Conor Barnes
10mo
This isn't exactly what I'm looking for (though I do think that concept needs a word).    The way I'm conceptualizing it right now is that there are three non-existential outcomes: 1. Catastrophe 2. Sustenance  / Survival 3. Flourishing  If you look at Toby Ord's prediction, he includes a number for flourishing, which is great. There isn't a matching prediction in the Ragnarok series, so I've squeezed 2 and 3 together as a "non-catastrophe" category.  

Surely most neartermist funders think that the probability that we get transformative AGI this century is low enough that it doesn't have a big impact on calculations like the ones you describe?

I agree with Thomas Kwa on this

There are a couple views by which neartermism is still worthwhile even if there's a large chance (like 50%) that we get AGI soon -- ...

I think neartermist causes are worthwhile in their own right, but think some interventions are less exciting when (in my mind) most of the benefits are on track to come after AGI. 

The idea that a n

... (read more)
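
One way to make 'most of the benefits come after AGI' concrete (the survival curve and benefit streams are made-up illustrations): weight each year's benefit by the probability that transformative AGI has not yet arrived and compare a slow-acting with a fast-acting intervention.

```python
# Illustrative: a 'slow' intervention whose benefits arrive evenly over 40 years vs a
# 'fast' one delivering the same undiscounted total within 5 years, both weighted by a
# made-up constant annual probability of transformative AGI.

p_agi_per_year = 0.03  # placeholder, not a forecast

def agi_weighted_value(benefits):
    return sum(b * (1 - p_agi_per_year) ** t for t, b in enumerate(benefits))

slow = [1.0] * 40   # total 40, spread over 40 years
fast = [8.0] * 5    # total 40, delivered in the first 5 years

print("slow intervention:", round(agi_weighted_value(slow), 1))   # ≈23.5 of 40
print("fast intervention:", round(agi_weighted_value(fast), 1))   # ≈37.7 of 40
# The slower the benefits arrive, the more of them land after AGI and get cut off,
# which is why shorter timelines make slow-acting interventions less exciting here.
```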

Thanks for writing the post :-)

I think I'm a little confused: going by the title, I expected the post to say something like "even if you think AI risk by year Y is X% or greater, you maybe shouldn't change your life plans too much", but instead you're saying "AI risk might be lower than you think, and at a low level it doesn't affect your plans much" and then giving some good considerations for potentially lower AI x-risk.

You can react to images without text. But you need to tap on the side of the image, since tapping on the image itself maximizes it

Thanks, this is useful to know!

2x speed on voice messages

Just tested, and Signal in fact has this feature.

I'd also add in Telegram's favour

  • Web-based client (https://web.telegram.org/), whereas Signal requires an installed app for some (frustrating) reason

and in Signal's favour

  • Any emoji reaction available (in Telegram you have to pay for extra reacts) [this point leads me to worry Telegram will become more out-to-get-me over time]
  • Less weird behaviour (e.g. in Telegram, I can't react to images that are sent without text & in some old group chats I can't react to anything)

 

(I am neith... (read more)

2
Pablo
10mo
You can react to images without text. But you need to tap on the side of the image, since tapping on the image itself maximizes it. (I agree this behavior is somewhat unintuitive.) The absent reactions in old chats is because admins have the option of allowing or disallowing reactions, and since the group chats were created before reactions were introduced, Telegram doesn't assume admins agreed to allow them; instead, they have to be enabled manually.

This LessWrong post had some good discussion about some of the same ideas :-)

An advanced civilization from outer space could easily colonize our planet and enslave us as Columbus enslaved the Indigenous tribes of the Americas

I think this is unlikely, since my guess is that (if civilization continues on Earth) we'll reach technological maturity much sooner than we expect to meet aliens (I consider the time until we meet aliens here).

Thanks for putting this together!

The list of people on the google form and the list in this post don't match (e.g. Seren Kell is on the post but not on the form and vice versa for David Manheim and Zachary Robinson)

2
Clifford
1y
Thanks for pointing this out - I’ve updated this now. Apologies to anyone who wasn't able to complete the form.

I'd add another benefit that I've not seen in the other answers: deciding on the curriculum and facilitating yourself get you to engage (critically) with a lot of EA material. Especially for the former, you have to think about the EA idea-space and work out a path through it all for fellows.

I helped create a fellowship curriculum (mostly a hybrid of two existing curricula iirc) before there were virtual programs, and this definitely got me more involved with EA. Of course, there may be a trade-off in quality.

I agree with what you say, though would note

(1) maybe doom should be disambiguated between "the short-lived simulation that I am in is turned off"-doom (which I can't really observe) and "the basement reality Earth I am in is turned into paperclips by an unaligned AGI"-type doom.

(2) conditioning on me being in at least one short-lived simulation, if the multiverse is sufficiently large and the simulation containing me is sufficiently 'lawful' then I may also expect there to be basement reality copies of me too. In this case,  doom is implied for (what I would guess is) most exact copies of me.

3
Lukas Finnveden
1y
Yup, I agree the disambiguation is good. In aliens-context, it's even useful to disambiguate those types of doom from "Intelligence never leaves the basement reality Earth I am on"-doom. Since paperclippers probably would become grabby.

Thanks for this post! I've been meaning to write something similar, and am glad you have :-)

I agree with your claim that most observers like us (who believe they are at the hinge of history) are in (short-lived) simulations. Brian Tomasik discusses how this marginally makes one value interventions with short-term effects. 

In particular, if you think the simulations won't include other moral patients simulated to a high resolution (e.g. Tomasik suggests this may be the case for wild animals in remote places), you would instrumentally care less about ... (read more)

2
PaulCousens
1y
Maybe it is 2100 or some other time in the future, and AI has already become super intelligent and eradicated or enslaved us since we failed to sufficiently adopt the values and thinking of longtermism. They might be running a simulation of us at this critical period of history to see what would have lead to counterfactual histories in which we adopted longtermism and thus protected ourselves from them. They would use these simulations to be better prepared for humans that might be evolving or have evolved in distant parts of the universe that they haven't accessed yet. Or maybe they still enslave a small or large portion of humanity, and are using the simulations to determine whether it is feasible or worthwhile to let us free again, or even whether it is safe for them to let the remaining human prisoners continue living. In this case, hedonism would be more miserable.
3
Jordan Arel
1y
Thank you for this reply! Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was maybe assuming something like that longtermists are simulated accurately and that everything else has much lower resolution such as only being philosophical zombies, though as I articulate this I’m not sure that would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.

And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I’m right then there’s no one else to hear what I’m saying anyways! Plus the problem with solipsism is that to some degree everyone must evaluate it for themselves, since the case for it may vary quite a bit for different individuals depending on who in the universe you find yourself as.

Perhaps you are right about AI creating simulations. I’m not sure they would be as likely to create as many, but they may still create a lot. This is something I would have to think about more.

I think the argument with aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will come to the conclusion that they are in a simulation, and so any thing that has the level of intelligence required to become spacefaring would also be intelligent enough to realize it is probably in a simulation and so it’s not worth it. Perhaps this could even apply to AI. It is, I admit, quite an extreme statement that no set of beings would ever come to the conclusion that they might not be in a simulation, or would not pursue longtermism on the off-chance that they are not in a simulation. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calcu

This tool is impressive, thanks! I like the framing you use of safety as a race against capabilities, though I don't really know what it would look like to have "solved" AGI safety 20 years before AGI. I also appreciate all the assumptions being listed at the end of the page.

Some minor notes

  • the GitHub link in the webpage footer points to the wrong page
  • I think two of the prompts "How likely is it to work?" and "How much do you speed it up?" would be made clearer if "it" was replaced by AGI safety (if that is what it is referring to).
1
frib
1y
Thank you for the feedback. It's fixed now!

Thanks for this post! I used to do some voluntary university community building, and some of your insights definitely ring true to me, particularly the Alice example - I'm worried that I might have been the sort of facilitator to not return to the assumptions in fellowships I've facilitated.

A small note:

Well, the most obvious place to look is the most recent Leader Forum, which gives the following talent gaps (in order):

This EA Leaders Forum was nearly 3 years ago, and so talent gaps have possibly changed. There was a Meta Coordination Forum last year run ... (read more)

This definitely sounds like a better approach than mine, thanks for sharing! This will be useful for me for any future projects

Thanks for your questions and comments! I really appreciate someone reading through in such detail :-)

  • What is the highest probability of encountering aliens in the next 1000 years according to reasonable choices once could make in your model?

SIA  (with no simulations) gives the nearest and most numerous aliens. 

My bullish prior (which a priori has 80% credence in us not being alone) with SIA and the assumption that grabby aliens are hiding gives a median of ~ chance of a grabby civilization reaching us in the next 1000 years.

I do... (read more)

2
kokotajlod
1y
Don't you mean 1-that?