Nikola

College Branch President @ Harvard EA
Pursuing an undergraduate degree

Comments (12)

Check out this post. My views have shifted slightly since then (the numbers stay roughly the same), towards:
 

  • If Earth-based life is the only intelligent life that will ever emerge, then humans and other Earth life going extinct makes the EV of the future basically 0, aside from non-human Earth-based life optimizing the universe, which would probably be worth less than 10% of the EV of a future where humans don't go extinct, because:
    1. Humans being dead updates us towards other Earth life eventually going extinct too
    2. Many things have to go right for a species to evolve pro-social tendencies in the way humans did, meaning it might not happen again before the Earth becomes uninhabitable
  • This implies we should worry much more, per unit of probability, about X-risks to all Earth life (misaligned AI, nanotech) than about X-risks to just humanity, because all of Earth life dying would mean the universe is permanently sterilized of value, while some other species picking up the torch would preserve some possibility of universe optimization, especially in worlds where CEV is very consistent across Earth life

 

  • If Earth-based life is not the only intelligent life that will ever emerge, then the stakes become much lower, because we'll only get our allotted bubble anyway, meaning that
    1. If humans go extinct, then some alien species will eventually grab our part of space
    2. Then the EV of the universe (that we can affect) is roughly bounded by how big our bubble is (even including trade, because the most sensible portion of a trade deal is proportional to bubble size), which is probably on the scale of tens of thousands to billions of light-years(?) wide, bounding our portion of the universe to probably less than 1% of the non-alien scenario (see the rough sketch after this list)
  • This implies that we should care roughly equally about human-bounded and Earth-bounded X-risks per unit of probability, as there probably wouldn't be time for another Earth species to pick up the torch between when humans go extinct and when Earth makes contact with aliens (at which point it's game over)
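As a very rough illustration of that "<1%" bound, here is a sketch under my own assumptions (not numbers from the comment): compare the volume of a colonization bubble of a given radius against the volume of the whole affectable universe, taking the latter's radius to be very roughly 16 billion light-years.

```python
# Order-of-magnitude sketch of how small "our allotted bubble" could be
# relative to the whole affectable universe. The ~16 Gly radius for the
# affectable universe and the sample bubble radii are my assumptions,
# not figures from the comment.

AFFECTABLE_RADIUS_GLY = 16.0  # billions of light-years, very approximate

def volume_fraction(bubble_radius_gly: float) -> float:
    """Fraction of the affectable universe's volume inside a bubble of the given radius."""
    return (bubble_radius_gly / AFFECTABLE_RADIUS_GLY) ** 3

for radius_gly in [0.00005, 0.5, 1.0, 2.0]:  # ~50k ly up to 2 billion ly
    print(f"bubble radius {radius_gly:>8} Gly -> {volume_fraction(radius_gly):.2e} of affectable volume")

# Even a 2-billion-light-year bubble comes out at roughly 0.2% of the
# affectable volume, one way of motivating the "<1% of the non-alien
# scenario" bound (ignoring trade and light-cone subtleties).
```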

Nice to see new people in the Balkans! I'd be down to chat sometime about how EA Croatia started off :)

Building on the space theme, I like Earthrise: it has very hopeful vibes, but also points to the famous photo that highlights the fragility and preciousness of Earth-based life.

Thank you for writing this. I've been repeating this point to many people and now I can point them to this post.

For context: for college-aged people in the US, the two most likely causes of death in a given year are suicide and vehicle accidents, each at around 1 in 6000. Estimates of the annual probability of global nuclear war are comparable to both of these. Given an AGI timeline of 50% by 2045, it's quite hard to distribute that 50% over ~20 years and assign much less than 1 in 6000 to the next 365 days. This means that even right now, in 2022, existential risks are high on the list of the most probable causes of death for college-aged people (assuming P(death|AGI) is >0.1 in the next few years).
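Here is a minimal back-of-the-envelope version of that comparison. The constant per-year hazard rate is a simplifying assumption of mine, and I use the 0.1 lower bound for P(death|AGI) from the parenthetical above, so treat the output as an order-of-magnitude illustration only.

```python
# Back-of-the-envelope: how does an AGI-related annual death risk compare
# to the ~1 in 6000 annual risk from suicide or vehicle accidents?
# Assumption: a constant per-year hazard rate that reaches 50% cumulative
# probability by 2045, and P(death | AGI) = 0.1 (the stated lower bound).

years_until_2045 = 2045 - 2022           # ~23 years
p_agi_by_2045 = 0.5                      # 50% timeline from the comment
p_death_given_agi = 0.1                  # lower bound from the comment

# Constant hazard: (1 - p_yearly) ** years = 1 - 0.5
p_agi_this_year = 1 - (1 - p_agi_by_2045) ** (1 / years_until_2045)
p_death_this_year = p_agi_this_year * p_death_given_agi

baseline = 1 / 6000                      # suicide / vehicle accidents
print(f"P(AGI this year)        ~ {p_agi_this_year:.3%}")
print(f"P(death via AGI, 1 yr)  ~ {p_death_this_year:.3%}  (1 in {1 / p_death_this_year:.0f})")
print(f"Baseline (1 in 6000)    ~ {baseline:.3%}")

# Even under these simple assumptions the AGI figure comes out well above
# 1 in 6000, which is the point being made above.
```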

One project I've been thinking about is making (or having someone else make) a medical infographic that takes existential risks seriously and ranks them accurately as some of the highest-probability causes of death (per year) for college-aged people. I'm worried about this seeming too preachy/weird to people who don't buy the estimates, though.

Strongly agree. Fostering a culture of open-mindedness (love the example from Robi) and an expectation of updating from more experienced EAs seems good. In the updating case, I think making sure that everyone knows what "updating" means is a priority (it sounds pretty weird otherwise). Maybe we should talk about introductory Bayesian probability in fellowships and retreats.

Great post, Joshua! I mostly second all of these points.

I'd add another hot take:

The returns of both fellowships and retreats mostly track one variable: the time participants spend in small (e.g. one-on-one) interactions with highly engaged EAs. Retreats are good mostly because they're a very efficient way to pack a lot of this interaction into a short period of time. More on this here.

I agree. To clarify, my claim assumes infinite patience.

Skipping time to determine net positivity of experience

[inspired by a conversation with Robi Rahman]

Imagine that it’s possible to skip certain periods of time in your life. All this means is that you don’t experience them, but you come out of them having the same memories as if you had experienced them.

Now imagine that, after you live whatever life you would have lived, there’s a further guaranteed 5000 years of very good life that you’ll live, which is undoubtedly net positive. My claim is that any moments in your life you’d prefer to “skip” are moments in which your life is net negative.

I wonder how many moments you'd skip?

I think it's relevant that, for some veg*ns, it would take more energy (emotional energy/willpower) not to be veg*n. For instance, having seen some documentaries, I am repulsed by the idea of eating meat due to the sheer emotional force of participating in the atrocities I saw. Maybe this is an indicator that I should spend more time trying to align my emotions with my ethical beliefs (which would, without the strong emotional force, point towards me eating animal products to save energy), but I'm not sure that's worth the effort.

Maybe this implies that we shouldn't recommend documentaries on animal farming to EAs, because they would create an emotional bias against eating animal products? But I'm pretty sure seeing those documentaries expanded my moral circle in a very good way.

Thanks, you're completely right; that does sound negative. I've changed the title to "Helping newcomers be more objective with career choice", which probably conveys what we're trying to get across better.
