This is a special post for quick takes by JordanStone. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there are some important questions to consider with regard to space colonisation and s-risks:
1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (Probably near 100% if digital sentience is possible… it only takes one.)
2. Is it possible to create a governance structure that would prevent anyone in an entire galactic civilisation from creating digital sentience capable of suffering? (This sounds really hard, especially given the huge distances and potential time delays in messaging… no idea.)
3. Where is the point of no return, the domino that, once knocked over, inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (Somewhere around a self-sustaining civilisation on Mars, I think.)
If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2. Would appreciate some thoughts.
Stuart Armstrong and Anders Sandberg's article on expanding rapidly throughout the galaxy, and Charlie Stross' blog post about griefers, influenced this quick take.
Interesting ideas! I've read your post Interstellar travel will probably doom the long-term future with enthusiasm and have had similar concerns for some years now. Regarding your questions, here are my thoughts:
Probability of s-risk
I agree that in a sufficiently large space civilization (that isn't controlled by your Governance Structure), the probability of s-risk is almost 100% (but not just from digital minds).
Let's unpack this:
Our galaxy has roughly 200 billion stars (2*10^11). This means at least 10^10 viable, settleable star systems. A Dyson swarm around a sun-like star could conservatively support 10^20 biological humans (today we number about 10^10; the 10^20 figure is extrapolated from how much sunlight is needed to sustain one human with conventional farming).
80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides it wants to create lots of wildlife preserves and its Dyson swarm consists mostly of those. With around 10^10 times the living area of Earth, and correspondingly more wild animals, a single year around this star would produce more cumulative suffering than the total suffering from all of Earth's history (which spans only ~1 billion (10^9) years of animal life).
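As a rough sanity check, here is a back-of-the-envelope sketch of that arithmetic in Python. Every number is an illustrative assumption from the paragraph above, not a measured figure:

```python
# Back-of-the-envelope sketch; all quantities are assumptions from the text above.
humans_today         = 1e10   # ~10^10 people alive now
dyson_swarm_capacity = 1e20   # assumed biological humans supportable per sun-like star
earth_animal_years   = 1e9    # ~10^9 years of animal life in Earth's history

# Crude proxy: a wildlife-preserve Dyson swarm hosts ~10^10 times Earth's living
# area, and so roughly that many times Earth's wild-animal population.
area_multiple = dyson_swarm_capacity / humans_today   # ~1e10

# Earth-years of wild-animal experience produced by ONE year around this star,
# compared with all of Earth's animal history:
print(area_multiple, ">", earth_animal_years, "->", area_multiple > earth_animal_years)
```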
This would not necessarily mean that the whole galactic civ was morally net bad.
A galaxy with 10,000 hellish star systems, 10 million heavenly systems, and 10 billion rather normal but good systems would still be a pretty awesome future from a total utility standpoint.
My point is that s-risk being defined in terms of Earth suffering becomes an increasingly low bar to cross the larger your civilization is. At some point you'd have to have insanely good "quality control" in every corner of your civilization. This would be analogous to having to ensure that every single one of the 10^10 humans on Earth today is happy and never gets hurt even once. That seems like a bit too high a standard for how good the future should go.
But that nitpick aside, I currently expect that a space future without some kind of governance system like the one you're describing still has a high chance of ending up net bad.
How to create the Governance Structure (GS)
Here is my idea of how this could look:
A superintelligence (which could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable, built to last for trillions of years. This, I think, is technically achievable: strong copy-error and damage protections, no updating in response to new evidence, and strong defences against outside manipulation attacks.
The GS copies largely act on their own in their respective star-system colonies, but have protocols in place for coordinating loosely across star systems and millions of years. I think this could work a bit like an ant colony: lots of small, selfless agents interacting locally with one another, all with exactly the same values and presumably secure intra-hive communication methods. They could still mount an impressively coordinated galactic response to, say, a von Neumann probe invasion.
I could expand further on this idea if you'd like.
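As a very loose illustration of the "virtually unalterable value system" idea above, here is a hypothetical sketch (the names and mechanism are invented for illustration, not an actual proposal) of a GS copy refusing to act whenever its value specification no longer matches a fingerprint fixed at creation:

```python
import hashlib

# Hypothetical illustration only. Each GS copy would carry a reference hash of
# the canonical value specification, fixed once at creation and never updated.
CANONICAL_VALUES_SHA256 = "<sha-256 of the canonical value spec, fixed at launch>"

def values_intact(value_spec: bytes) -> bool:
    """Compare the local copy of the value spec against the immutable fingerprint."""
    return hashlib.sha256(value_spec).hexdigest() == CANONICAL_VALUES_SHA256

def act(value_spec: bytes, proposed_action):
    # Refuse to do anything if the value system shows copy errors or tampering.
    if not values_intact(value_spec):
        raise RuntimeError("Value specification corrupted; halting and requesting repair.")
    return proposed_action   # placeholder for evaluating the action against the values
```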
Point of no-return
I'm unsure about this. Possible such points:
A space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible).
The first ASI is created and does not have the goal of preventing s- and x-risks.
The first (self-sustaining) space colony gains political independence.
The first interstellar mission (to create a colony) leaves the solar system.
A sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space.
My current view is still that the two most impactful things (at the moment) are 1) ensuring that any ASI that gets developed is safe and benevolent, and 2) improving how global and space politics is conducted.
Any specific "points of no return" seem to me very contingent on the exact circumstances at the time. Nevertheless, thinking ahead about which situations might be especially dangerous or crucial seems like a worthwhile pursuit to me.
Hi Birk. Thank you for your very in-depth response; I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).
The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.
probably near 100% if digital sentience is possible… it only takes one
Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.
Yeah sure, it's like the argument that if you get infinite chimpanzees and put them in front of typewriters, then one of them would write Shakespeare. If you have a galactic civilisation, it would be very dispersed, and most likely each 'colony' occupying each solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%. I used digital sentience as an example because it's the risk of astronomical suffering that I see as the most terrifying - like, IF digital sentience is possible, then the number of suffering beings that it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part.
But this also applies to tyrannous governments: how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of those civilisations will terraform other planets and start biospheres of suffering beings?
The same logic also applies to x-risks that affect a galactic civilisation:
all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare). (Charlie Stross)
Stopping these things from happening seems really hard. It's like a galactic civilisation needs to be designed right from the beginning to make sure that no future colony does this.
Thanks. In the original quick take, you wrote "thousands of independent and technologically advanced colonies", but here you write "hundreds of millions".
If you think there's a 1 in 10,000 or 1 in a million chance of any independent and technologically advanced colony creating astronomical suffering, it matters if there are thousands or millions of colonies. Maybe you think it's more like 1 in 100, and then thousands (or more) would make it extremely likely.
Yeah, that's true. I think 1,000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many colonies if it didn't kill itself before then.
I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.
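To make the "it only takes one" scaling explicit, here is a quick sketch of the probability arithmetic behind this exchange. The per-colony probabilities are purely illustrative, and the independence assumption is doing a lot of work:

```python
# P(at least one of N independent colonies creates astronomical suffering)
# = 1 - (1 - p)^N, where p is the (assumed) per-colony probability.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p in (1e-2, 1e-4, 1e-6):          # 1 in 100, 1 in 10,000, 1 in a million
    for n in (1_000, 100_000_000):     # thousands vs hundreds of millions of colonies
        print(f"p={p:g}, N={n:,}: {p_at_least_one(p, n):.4f}")
```

With thousands of colonies, only the 1-in-100 case gets close to certainty; with hundreds of millions of colonies, even the 1-in-a-million case does.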
Super sceptical, probably highly intractable thought that I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the Anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
The Fermi Paradox: there don't seem to be any other civilizations in the observable universe. There are many explanations for the Fermi Paradox, but one additional explanation might be that whoever is simulating the universe created it for us, or they don't care about other civilizations, so haven't simulated them.
We seem to be really early on in human history. Only around 100 billion people have ever lived, but we expect many trillions to live in the future. This can be explained by the Doomsday argument - that we are in fact living in the period of human history in which most people will ever live, because we will soon go extinct. However, it can also be explained by us living in a simulation - see next point.
Not only are we really early, but we seem to be living at a pivotal moment in human history that is super interesting. We are about to create intelligence greater than ourselves, expand into space, or probably all die. Like if any time in history were to be simulated, I think there's a high likelihood it would be now.
If I were pushed into a corner, I might say the probability we are living in a simulation is something like 60%, where most evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong - like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find something that doesn't line up with an explanation that the simulators are just interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it would be really weird if that had no impact on moral decision-making.
But everyone I talk to seems to have a relaxed approach to it, like it's impossible to make any progress on this and that it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA-mindset? Some reasons it might be decision-relevant:
We may be able to infer from the nature of the universe and the natural problems ahead of us what the simulators are looking to understand or gain from the simulation (or at least we might attach percentage likelihoods to different goals). Maybe there are good arguments to aim to please the simulators, or not. Maybe we could end the simulation if there are end conditions?
Being in a simulation changes the probability that aliens exist (they probably have a lower probability of existing if we are in a simulation), which helps with long-term grand planning. For example, we might not need to worry about integrating defences against alien attacks or engaging in acausal trade with aliens.
We can disregard arguments like The Doomsday Argument, lowering our p(doom)
Some questions I'd ask are:
How much effort have we put into figuring out if there is something decision-relevant to do about this from a moral impact perspective? How much effort should we put into this?
How much effort has gone into figuring out if we are, in fact, in a simulation, using empiricism? What might we expect to see in a simulated universe vs a real one? How can we search for and detect that?
Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (at least one person doing a proper research project on it). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress behind it, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here, and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts.
All the things you mentioned aren’t uniquely evidence for the simulation hypothesis but are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up with endlessly more.)
The fine-tuning argument is a common argument for the existence of a supernatural, personal God. The appearance of fine-tuning supports this conclusion equally as well as it supports the simulation hypothesis.
Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people’s faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn’t really there. (I also think it’s just not true that this is the most interesting point in history.)
Similarly, the book of Genesis says that God created humans in his image. Maybe he didn’t create aliens with high-tech civilizations because he’s only interested in beings with high technology made in his image.
It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki who is neither fully good nor fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.
You might think the simulation hypothesis is preferable because it’s a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong: the simulation hypothesis is itself a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators’ reality, maybe the fundamental relationship between consciousness and physical phenomena such as matter, energy, space, time, and physical forces is such that consciousness can directly, automatically shape physical phenomena to its will. If we observed this happening in our universe, we would describe it as magic or a miracle.
Whether you call them "simulators" or "God" or an "evil demon" or "Loki", and whether you call it a "simulation" or an "illusion" or a "dream", these are just different surface-level labels for substantially the same idea. If you stipulate laws of nature radically other than the ones we believe we have, what you’re talking about is supernatural.
If you try to assume that the physics and other laws of nature in the simulators’ reality are the same as in our perceived reality, then the simulation argument runs into a logical self-contradiction, as pointed out by the physicist Sean Carroll. Endlessly nested levels of simulation mean that computation in the original simulators’ reality will run out. Simulations at the bottom of the nested hierarchy, which don’t have enough computation to run still more simulations inside them, will outnumber higher-level simulations. The simulation argument says, as one of its key premises, that in our perceived reality we will be able to create simulations of worlds or universes filled with many digital minds; but the simulation hypothesis implies this is actually impossible, so the simulation argument’s conclusion contradicts one of its premises.
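A toy sketch of this bottoming-out point, with arbitrary made-up numbers chosen only to show the shape of the argument (each simulation passes a fixed fraction of its compute to each child, so nesting must stop, and the deepest feasible level contains most of the simulations):

```python
# Arbitrary illustrative assumptions, not estimates of real quantities.
CHILD_FRACTION   = 0.1    # fraction of a simulation's compute given to its children
CHILDREN_PER_SIM = 5      # child simulations each level chooses to run
MIN_COMPUTE      = 1e-6   # below this, a simulation cannot host child simulations

compute_per_sim = 1.0     # normalise the top-level (real) universe's compute to 1
level, sims_per_level = 0, []
while compute_per_sim * CHILD_FRACTION / CHILDREN_PER_SIM >= MIN_COMPUTE:
    level += 1
    compute_per_sim *= CHILD_FRACTION / CHILDREN_PER_SIM
    sims_per_level.append(CHILDREN_PER_SIM ** level)   # simulations at this depth

print("simulations per nesting level:", sims_per_level)
# The bottom level, which cannot simulate anything further, holds the largest share:
print("share at the bottom level:", sims_per_level[-1] / sum(sims_per_level))
```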
There are other strong reasons to reject the simulation argument. Remember that a key premise is that we ourselves or our descendants will want to make simulations. Really? They’ll want to simulate the Holocaust, malaria, tsunamis, cancer, cluster headaches, car crashes, sudden infant death syndrome, and Guantanamo Bay? Why? On our ethical views today, we would not see this as permissible, but rather the most grievous evil. Why would our descendants feel differently?
Less strongly, computation is abundant in the universe but still finite. Why spend computation on creating digital minds inside simulations when there is always a trade-off between doing that and creating digital minds in our universe, i.e. the real world? If we or our descendants think marginally and hold as one of our highest goals to maximize the number of future lives with a good quality of life, using huge amounts of computation on simulations might be seen as going against that goal. Plus, there are endlessly more things we could do with our finite resource of computation, most we can’t imagine today. Where would creating simulations fall on the list?
You can argue that creating simulations would be a small fraction of overall resources. I’m not sure that’s actually true; I haven’t done the math. But just because something is a small fraction of overall resources doesn’t mean it will likely be done. In an interstellar, transhumanist scenario, our descendants could create a diamond statue of Hatsune Miku the size of the solar system and this would take a tiny percentage of overall resources, but that doesn’t mean it will likely happen. The simulation argument specifically claims that making simulations of early 21st century Earth will interest our descendants more than alternative uses of resources. Why? Maybe they’ll be more interested in a million other things.
Overall, the simulation hypothesis is undisprovable but no more credible than an unlimited number of other undisprovable hypotheses. If something seems nuts, it probably is. Initially, you might not be able to point out the specific logical reasons it’s nuts. But that’s to be expected — the sort of paradoxes and thought experiments that get a lot of attention (that "go viral", so to speak) are the ones that are hard to immediately counterargue.
Philosophy is replete with oddball ideas that are hard to convincingly refute at first blush. The Chinese Room is a prime example. Another random example is the argument that utilitarianism is compatible with slavery. With enough time and attention, refutations may come. I don't think one's inability to immediately articulate the logical counterargument is a sign that an oddball idea is correct. It's just that thinking takes time and, usually, by the time an oddball idea reaches your desk, it's proven to be resistant to immediate refutation. So, trust that intuition that something is nuts.
Strong upvoted as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.
That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and that the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because in my own opinion my life doesn't seem that great, but it seems plausible at least?
Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.
Changing the simulation hypothesis from a simulation of a world full of people to a simulation of an individual throws the simulation argument out the window. Here is how Sean Carroll articulates the first three steps of the simulation argument:
We can easily imagine creating many simulated civilizations.
Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
The simulation argument doesn’t apply to you, as an individual. Unless you think that you, personally, are going to create a simulation of a world or an individual — which obviously you’re not.
Changing the simulation hypothesis from a world-scale simulation to an individual-scale simulation also doesn’t change the other arguments against the simulation hypothesis:
The bottoming out argument. This is the one from Sean Carroll. Even if we supposed you, personally, were going to create individual-scale simulations in the future, eventually a nesting cascade of such simulations would exhaust available computation in the top-level universe, i.e. the real universe. The bottom-level simulations within which no further simulations are possible would outnumber higher-level ones. The conclusion of the simulation argument contradicts a necessary premise.[1]
The ethical argument. It would be extremely unethical to imprison an individual in a simulation without their consent, especially a simulation with a significant amount of pain and suffering that the simulators are programming in. Would you create an individual-scale simulation even of an unrealistically pleasant life, let alone a life with significant pain and suffering? If we had the technology to do this today, I think it would be illegal. It would be analogous to false imprisonment, kidnapping, torture, or criminal child abuse (since you are creating this person).
The computational waste argument. Making an individual-scale simulation would require at least as much computation as creating a digital mind in the real universe. In fact, it would require more, since you also have to simulate the whole world around the individual, not just the individual themselves. If the simulators think marginally, they would prefer to use these resources to create a digital mind in the real universe or put them to some other, better use.
If the point of the simulation is to cater it to the individual’s preferences, we should ask:
a) Why isn’t this actually happening? Why is there so much unnecessary pain and suffering and unpleasantness in every individual’s life? Why simulate the covid-19 pandemic?
b) Why not cater to the individual’s fundamental and overriding preference not to be in a simulation?
c) Why not put these resources toward any number of superior uses that must surely exist?[2]
Most importantly, changing the simulation hypothesis from world-scale to individual-scale doesn’t change what is perhaps the most powerful counterargument to the simulation hypothesis:
The unlimited arbitrary, undisprovable hypotheses argument. There is no reason to think the simulation hypothesis makes any more sense or is any more likely to be true than the hypothesis that the world you perceive is an illusion created by an evil demon or a trickster deity like Loki. There are an unlimited number of equally arbitrary and equally unjustified hypotheses of this type that could be generated. In my previous comment, I argued that versions of the simulation hypothesis in which the laws of physics or laws of nature are radically different in the real universe than in the simulation are supernatural hypotheses. Versions of the simulation hypothesis that assume real universe physics is the same as simulation physics suffer from the bottoming out argument and the computational waste argument. So, either way, the simulation hypothesis should be rejected. (Also, whether the simulation has real universe physics or not, the ethical argument applies — another reason to reject it.)
This argument also calls into question why we should think simulation physics is the same as real universe physics, i.e. why we should think the simulation hypothesis makes more sense as a naturalistic hypothesis than a supernatural hypothesis. The simulation hypothesis leans a lot on the idea that humans or post-humans in our hypothetical future will want to create “ancestor simulations”, i.e. realistic simulations of the simulators’ past, which is our present. If there were simulations, why would ancestor simulations be the most common type? Fantasy novels are about equally popular as historical fiction or non-fiction books about history. Would simulations skew toward historical realism significantly more than books currently do? Why not simulate worlds with magic or other supernatural phenomena? (Maybe we should conclude that, since this is more interesting, ghosts probably exist in our simulation. Maybe God is simulated too?) The “ancestor simulation” idea is doing a lot of heavy lifting; it’s not clear that this is in any way a justifiable assumption rather than an arbitrary one. The more I dig into the reasoning behind the simulation hypothesis, the more it feels like Calvinball.[3]
The individual-scale simulation hypothesis also introduces new problems that are unique to it:
Simulation of other minds. If you wanted to build a robot that could perfectly simulate the humans you know best, the underlying software would need to be a digital mind. Since, on the individual-scale simulation hypothesis, you are a digital mind, then the other minds in the simulation — at least the ones you know well — are as real as you are. You could try to argue that these other minds only need to be partially simulated. For example, the mind simulations don’t need to be running when you aren’t observing or interacting with these people. But then why don’t these people report memory gaps? If the answer is that the simulation fills in the gaps with false memories, what process continually generates new false memories? Why would this process be less computationally expensive than just running the simulation normally? (You could also try to say that consciousness is some kind of switch that can be flipped on or off for some simulations but not others. But I can’t think of any theory of consciousness this would be compatible with, and it’s a problem for the individual-scale simulation hypothesis if it just starts making stuff up ad hoc to fit the hypothesis.)
If we decide that at least the people you know well must be fully simulated, in the same way you are, then what about the people they know well? And the people those people know well? If everyone in the world is connected through six degrees of separation or fewer, then it seems like individual-scale simulations are actually impossible and all simulations must be world-scale simulations (a toy sketch of this follows after the next point).
Abandoning the simulation of history at large scale. Individual-scale simulations don’t provide the same informational value that world-scale simulations might. When people talk about why “ancestor simulations” would supposedly be valuable or desired, they usually appeal to the notion of simulating historical events on a large scale. This obviously wouldn’t apply to individual-scale simulations. To the extent credence toward the simulation hypothesis depends on this, an individual-scale simulation hypothesis may be even less credible than a world-scale simulation hypothesis.
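As a toy illustration of the "six degrees" point above: if anyone you know well must be fully simulated, and the same requirement applies to them in turn, then the set of minds that must be fully simulated is the transitive closure of the "knows well" relation, which on a connected social graph is everyone. The tiny graph below is made up purely for illustration:

```python
from collections import deque

# Made-up "knows well" graph; the real one would connect billions of people
# within a handful of hops.
knows_well = {
    "you":   ["alice", "bob"],
    "alice": ["you", "carol"],
    "bob":   ["you", "dave"],
    "carol": ["alice", "erin"],
    "dave":  ["bob"],
    "erin":  ["carol"],
}

def must_fully_simulate(start: str) -> set[str]:
    """Everyone reachable from `start` through chains of 'knows well' links."""
    seen, queue = {start}, deque([start])
    while queue:
        person = queue.popleft()
        for other in knows_well.get(person, []):
            if other not in seen:
                seen.add(other)
                queue.append(other)
    return seen

print(must_fully_simulate("you"))   # the whole connected graph, not just "you"
```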
The Wikipedia page on the simulation hypothesis notes that it’s a contemporary twist on a centuries-old if not millennia-old idea. We’ve replaced dreams and evil demons with computers, but the underlying idea is largely the same. The reasons to reject it are largely the same, although the simulation argument has some unique weaknesses. That page is a good resource for finding still more arguments against the simulation hypothesis.[4]
[1] Carroll, who is a physicist and cosmologist, also criticizes the anthropic reasoning of the simulation argument. I recommend reading his post; it’s short and well-written.
[2] You could try to argue that, despite society’s best efforts, it will be impossible to suppress a large number of simulations from being created. Pursuing this line of argument requires speculating about the specific details of a distant, transhuman or post-human future. Would an individual creating a simulation be more like an individual today operating a meth lab or launching a nuclear ICBM? I’m not sure we can know the answer to this question. If dangerous or banned technologies can’t be controlled, what does this say about existential risk? Will far future, post-human terrorists be able to deploy doomsday devices? If so, that would undermine the simulation argument. (Will post-humans even have the desire to be terrorists, or is that a defect of humanity?)
[3] Related to this are various arguments that the simulation argument is self-defeating. We infer things about the real universe from our perceived universe. We then conclude that our perceived universe is a simulation. But, if it is, this undermines our ability to infer anything about the real universe from our perceived universe. In fact, this undermines the inference that our perceived universe is a simulation within a real universe. So, the simulation argument defeats itself.
[4] In addition to all the above, I would be curious to hear empirical, scientific arguments about the amount of computation that might be required for world-scale simulations, which would be partly applicable to individual-scale simulations. Obviously, our universe can’t run a full-scale, one-to-one simulation of our universe with perfect fidelity — that would require more computation, matter, and energy than our universe has. If you only simulate the solar system with perfect fidelity, you can pare that down a lot. You can make other assumptions to pare down the computation required. It’s much less important than all the arguments and considerations described above, but if we get a better understanding of approximately how difficult or costly a world-scale simulation might be, that could help put some considerations like computational waste in perspective.
I would not describe the fine-tuning argument and the Fermi paradox as strong evidence in favour of the simulation hypothesis. I would instead say that they are open questions for which a lot of different explanations have been proposed, with the simulation hypothesis offering only one of many possible resolutions.
As to the "importance" argument, we shouldn't count speculative future events as evidence of the importance of now. I would say the mid-20th century was more important than today, because that's the closest we ever got to nuclear annihilation (plus like, WW2).
I've thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment to moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a Utilitarian), or have rights/autonomy to be respected, etc.
We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people's minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget about it if we ever figure it out, so they can continue it for whatever reasons.
Given all this, I don't see the point in trying to defy them or doing really anything differently than what you'd do if this was the ground truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.
If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.
At least, that's the way I see things right now. Your mileage may vary.
Hey! I'm requesting some help with "Actions for Impact", a Notion page with activities people can get involved in that take less than 30 minutes and can contribute to EA cause areas. This includes signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved
It should serve as a hub to leverage the size of the EA community when it's needed.
I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two per week spare please DM me, I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building as I don't at all).
I'm thinking about organising a seminar series on space and existential risk. Mostly because it's something I would really like to see. The webinar series would cover a wide range of topics:
Asteroid Impacts
Building International Collaborations
Monitoring Nuclear Weapons Testing
Monitoring Climate Change Impacts
Planetary Protection from Mars Sample Return
Space Colonisation
Cosmic Threats (supernovae, gamma-ray bursts, solar flares)
The Overview Effect
Astrobiology and Longtermism
I think this would be an online webinar series. Would this be something people would be interested in?
I have written this post introducing space and existential risk and this post on cosmic threats, and I've come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)
(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively.
(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere.
(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars. Each webinar will be available virtually.
(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community.
(e) Policy adaptation exercise SMPAG > AI safety: Use a mechanism mapping policy adaptation exercise to build on the success of the space sector in tackling asteroid impact risks (through the SMPAG) to figure out how organisations working on AI safety can be more effective.
(f) White paper on Russia and international space organisations: Russia’s involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. Russia was ousted from ESA, is still actively participating on the International Space Station, and is still a member of SMPAG but not participating. Figuring out why Russia stayed involved or didn’t with each organisation could be useful.
(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value.
The space industry is well-funded and already cares a lot about demonstrating impact (using a broader definition of impact than EA) to justify its funding, so (a)-(c) might be possible with industry support, and to some extent already exists.
I think the overarching story behind (f) is relatively uncomplicated, particularly in the context of ongoing trade between Russia and Ukraine-supporters over oil etc.: Roscosmos continued to collaborate with NASA et al. on stuff like the ISS because agreements remained in place and were too critical to suspend. Russia was never actually part of ESA, and I suspect many people would have preferred it if Roscosmos had been kicked off projects like ExoMars earlier. It probably helps that the engineers and cosmonauts on both sides are likely a good deal more level-headed than Dmitry Rogozin, but I don't think we'll hear what went on behind closed doors for a while...
Cosmic threats - what are they, how are they currently managed, and what work is needed in this area. Cosmic threats include asteroid impacts, solar flares, supernovae, gamma-ray bursts, aliens, rogue planets, pulsar beams, and the Kessler Syndrome. I think it would be useful to provide a summary of how cosmic threats are handled, and determine their importance relative to other existential threats.
Lessons learned from the space community. The space community has been very open with data sharing - the utility of this for tackling climate change, nuclear threats, ecological collapse, animal welfare, and global health and development cannot be overstated. I may include perspective shifts here, provided by views of Earth from above and the limitless potential that space shows us.
How to access the space community's expertise, technology, and resources to tackle existential threats.
The role of the space community in global politics. Space has a big role in preventing great power conflicts and building international institutions and connections. With the space community growing a lot recently, I'd like to provide a briefing on the role of space internationally to help people who are working on policy and war.
Would a sequence of posts on space and existential risk be something that people would be interested in? (Please agree- or disagree-vote on this post.) I haven't seen much on space on the forum (apart from on space governance), so it would be something new.
Greetings! I'm a doctoral candidate and have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. I'm enthusiastic about contributing my time to generate visuals that effectively support EA causes.
Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations, but I'm open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.
This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles.
I searched Google for "gain of function UK" and the first hit was a petition to ban gain-of-function research in the UK that only got 106 signatures out of the 10,000 required.
But that nitpick aside, I currently expect that a space future without some kind of governance system you're describing still has a high chance of ending up net bad.
How to create the Governance Structure (GS) Here is my idea how this could look like: A superintelligence (could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable, built to last for trillions of years. This I think, is technically achieveable: strong copy-error and damage protections, not updatable via new evidence, strongly defended against outside manipulation attacks. The GS copies largely act on their own in their respective star system colony but have protocols in place on how to coordinate in a loose manner across star systems and millions of years. I think this could work a bit analogous to an ant colony: Lots of small, selfless agents locally interacting with on another; everyone has the exact same values and probably secure intra-hive communication methods; They could still mount an impressively coordinated galactic response to say a von Neumann probe invasion. I could expand further on this idea if you'd like.
Point of no-return I'm unsure about this. Possible such points: a space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible), the first ASI is created and it does not have the goal of preventing s- and x-risks, the first (self-sustaining) space colony gets political independance, the first interstellar mission (to create a colony) leaves the solar system, a sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space.
My current view is still that the two most impactful things (at the moment) are 1) ensuring that any ASI that gets developed is safe and benevolent, 2) improving how global and space politics is conducted. Any specific "points of no-return" seem to me like very contingent on the exact circumstances at that point. Nevertheless, thinking ahead about what situations might be especially dangerous or crucial, seems like a worthwhile persuit to me.
Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how i imagined the governance system when I wrote the post. I actually had it as a description like that originally but I hated the implications for liberalism, so i took a step back and listed requirements instead (which didn't actually help).
The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.
Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.
Yeah sure, it's like the argument that if you get infinite chimpanzees and put them in front of type writers, then one of them would write Shakespeare. If you have a galactic civilisation, it would be very dispersed and most likely each 'colony' occupying each solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%. I used digital sentience as an example because its the risk of astronomical suffering that I see as the most terrifying - like IF digital sentience is possible, then the amount of suffering beings that it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part.
But this also applies to tyrannous governments, how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of those civilisations will terraform other planets and start biospheres of suffering beings?
The same logic also applies to x-risks that affect a galactic civilisation:
Stopping these things from happening seems really hard. It's like a galactic civilisation needs to be designed right from the beginning to make sure that no future colony does this.
Thanks. In the original quick take, you wrote "thousands of independent and technologically advanced colonies", but here you write "hundreds of millions".
If you think there's a 1 in 10,000 or 1 in a million chance of any independent and technologically advanced colony creating astronomical suffering, it matters if there are thousands or millions of colonies. Maybe you think it's more like 1 in 100, and then thousands (or more) would make it extremely likely.
Yeah that's true.
I think 1000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many if it didn't kill itself before then.
I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.
Super sceptical probably very highly intractable thought that I haven't done any research on: There seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
If I was pushed into a corner, I might say the probability we are living in a simulation is like 60%, where most evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong - like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find something that doesn't line up with an explanation that the simulators just being interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it be really weird if that had no impact on moral decision-making.
But everyone I talk to seems to have a relaxed approach to it, like it's impossible to make any progress on this and that it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA-mindset? Some reasons it might be decision-relevant:
Some questions I'd ask is:
Overall, this does sounds nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (like 1 person doing a proper research project on it at least). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts.
All the things you mentioned aren’t uniquely evidence for the simulation hypothesis but are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up endless more.)
The fine-tuning argument is a common argument for the existence of a supernatural, personal God. The appearance of fine-tuning supports this conclusion equally as well it supports the simulation hypothesis.
Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people’s faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn’t really there. (I also think it’s just not true that this is the most interesting point in history.)
Similarly, the book of Genesis says that God created humans in his image. Maybe he didn’t create aliens with high-tech civilizations because he’s only interested in beings with high technology made in his image.
It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki who is neither fully good or fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.
You might think the simulation hypothesis is preferable because it’s a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong, the simulation hypothesis is a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators’ reality, maybe the fundamental relationship between consciousness and physical phenomena such as matter, energy, space, time, and physical forces is such that consciousness can directly, automatically shape physical phenomena to its will. If we observed this happening in our universe, we would describe this as magic or a miracle.
Whether you call them "simulators" or "God" or an "evil demon" or "Loki", and whether you call it a "simulation" or an "illusion" or a "dream", these are just different surface-level labels for substantially the same idea. If you stipulate laws of nature radically other than the ones we believe we have, what you’re talking about is supernatural.
If you try to assume that the physics and other laws of nature in the simulators’ reality is the same as in our perceived reality, then the simulation argument runs into a logical self-contradiction, as pointed out by the physicist Sean Carroll. Endlessly nested levels of simulation means computation in the original simulators’ reality will run out. Simulations at the bottom of the nested hierarchy, which don’t have enough computation to run still more simulations inside them, will outnumber higher-level simulations. Since the simulation argument says, as one of its key premises, that in our perceived reality we will be able to create simulations of worlds or universes filled with many digital minds, but the simulation hypothesis implies this is actually impossible, then the simulation argument’s conclusion contradicts one of its premises.
There are other strong reasons to reject the simulation argument. Remember that a key premise is that we ourselves or our descendants will want to make simulations. Really? They’ll want to simulate the Holocaust, malaria, tsunamis, cancer, cluster headaches, car crashes, sudden infant death syndrome, and Guantanamo Bay? Why? On our ethical views today, we would not see this as permissible, but rather the most grievous evil. Why would our descendants feel differently?
Less strongly, computation is abundant in the universe but still finite. Why spend computation on creating digital minds inside simulations when there is always a trade-off between doing that and creating digital minds in our universe, i.e. the real world? If we or our descendants think marginally and hold as one of our highest goals to maximize the number of future lives with a good quality of life, using huge amounts of computation on simulations might be seen as going against that goal. Plus, there are endlessly more things we could do with our finite resource of computation, most we can’t imagine today. Where would creating simulations fall on the list?
You can argue that creating simulations would be a small fraction of overall resources. I’m not sure that’s actually true; I haven’t done the math. But just because something is a small fraction of overall resources doesn’t mean it will be likely be done. In an interstellar, transhumanist scenario, our descendants could create a diamond statue of Hatsune Miku the size of the solar system and this would take a tiny percentage of overall resources, but that doesn’t mean it will likely happen. The simulation argument specifically claims that making simulations of early 21st century Earth will interest our descendants more than alternative uses of resources. Why? Maybe they’ll be more interested in a million other things.
Overall, the simulation hypothesis is undisprovable but no more credible than an unlimited number of other undisprovable hypotheses. If something seems nuts, it probably is. Initially, you might not be able to point out the specific logical reasons it’s nuts. But that’s to be expected — the sort of paradoxes and thought experiments that get a lot of attention (that "go viral", so to speak) are the ones that are hard to immediately counterargue.
Philosophy is replete with oddball ideas that are hard to convincingly refute at first blush. The Chinese Room is a prime example. Another random example is the argument that utilitarianism is compatible with slavery. With enough time and attention, refutations may come. I don't think one's inability to immediately articulate the logical counterargument is a sign that an oddball idea is correct. It's just that thinking takes time and, usually, by the time an oddball idea reaches your desk, it's proven to be resistant to immediate refutation. So, trust that intuition that something is nuts.
Strong upvoted as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.
That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and that the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Given, I doubt this conjecture because in my own opinion my life doesn't seem that great, but it seems plausible at least?
Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.
Changing the simulation hypothesis from a simulation of a world full of people to a simulation of an individual throws the simulation argument out the window. Here is how Sean Carroll articulates the first three steps of the simulation argument:
The simulation argument doesn’t apply to you, as an individual. Unless you think that you, personally, are going to create a simulation of a world or an individual — which obviously you’re not.
Changing the simulation hypothesis from a world-scale simulation to an individual-scale simulation also doesn’t change the other arguments against the simulation hypothesis:
The bottoming out argument. This is the one from Sean Carroll. Even if we suppose you, personally, were going to create individual-scale simulations in the future, a nesting cascade of such simulations would eventually exhaust the computation available in the top-level universe, i.e. the real universe. The bottom-level simulations, within which no further simulations are possible, would outnumber the higher-level ones. The conclusion of the simulation argument contradicts a necessary premise.[1] (A toy numerical sketch of this appears after this list of arguments.)
The ethical argument. It would be extremely unethical to imprison an individual in a simulation without their consent, especially a simulation with a significant amount of pain and suffering that the simulators are programming in. Would you create an individual-scale simulation even of an unrealistically pleasant life, let alone a life with significant pain and suffering? If we had the technology to do this today, I think it would be illegal. It would be analogous to false imprisonment, kidnapping, torture, or criminal child abuse (since you are creating this person).
The computational waste argument. Making an individual-scale simulation would require at least as much computation as creating a digital mind in the real universe. In fact, it would require more, since you also have to simulate the whole world around the individual, not just the individual themselves. If the simulators think marginally, they would prefer to use these resources to create a digital mind in the real universe or put them to some other, better use.
If the point of the simulation is to cater it to the individual’s preferences, we should ask:
a) Why isn’t this actually happening? Why is there so much unnecessary pain and suffering and unpleasantness in every individual’s life? Why simulate the covid-19 pandemic?
b) Why not cater to the individual’s fundamental and overriding preference not to be in a simulation?
c) Why not put these resources toward any number of superior uses that must surely exist?[2]
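To make the bottoming-out point a bit more concrete, here is a minimal toy sketch in Python. It is not Carroll's argument formalised, just one illustrative model under made-up assumptions: the branching factor (how many child simulations each simulation spawns), the fraction of a parent's compute each child receives, and the minimum budget a simulation needs before it can host children of its own are all hypothetical numbers chosen purely for illustration, as is the `simulation_levels` helper.

```python
# Toy model of stacked ("nested") simulations.
# Every parameter here is an illustrative assumption, not an estimate.

def simulation_levels(top_budget=1.0, branching=10, child_fraction=0.01,
                      min_budget=1e-9):
    """Count simulations per nesting depth until compute runs out.

    Each simulation spawns `branching` children and gives each child
    `child_fraction` of its own compute budget. A simulation can only
    host children if each child would receive at least `min_budget`.
    """
    levels = []                    # (number of sims, per-sim budget) at each depth
    count, budget = 1, top_budget  # depth 0 is the real, unsimulated universe
    while budget * child_fraction >= min_budget:
        count *= branching          # counts grow geometrically with depth...
        budget *= child_fraction    # ...while per-simulation compute shrinks faster
        levels.append((count, budget))
    return levels

levels = simulation_levels()
bottom = levels[-1][0]                           # sims at the deepest viable level
higher = sum(count for count, _ in levels[:-1])  # sims at all shallower levels
print(f"nesting depths reached: {len(levels)}")
print(f"simulations at the bottom level: {bottom}")
print(f"simulations at all higher levels combined: {higher}")
```

Under these toy numbers the cascade runs out of usable compute after only a few levels, and the deepest viable level holds far more simulations than all the shallower levels combined. Those bottom-level simulations can never run simulations of their own, which is the tension with the simulation argument's premise described above. Different made-up numbers change the depth, but not that basic shape.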
Perhaps most importantly, changing the simulation hypothesis from world-scale to individual-scale doesn’t change what may be the most powerful counterargument to the simulation hypothesis:
The unlimited arbitrary, undisprovable hypotheses argument. There is no reason to think the simulation hypothesis makes any more sense or is any more likely to be true than the hypothesis that the world you perceive is an illusion created by an evil demon or a trickster deity like Loki. There are an unlimited number of equally arbitrary and equally unjustified hypotheses of this type that could be generated. In my previous comment, I argued that versions of the simulation hypotheses in which the laws of physics or laws of nature are radically different in the real universe than in the simulation are supernatural hypotheses. Versions of the simulation hypothesis that assume real universe physics is the same as simulation physics suffer from the bottoming out argument and the computational waste argument. So, either way, the simulation hypothesis should be rejected. (Also, whether the simulation has real universe physics or not, the ethical argument applies — another reason to reject it.)
This argument also calls into question why we should think simulation physics is the same as real universe physics, i.e. why we should think the simulation hypothesis makes more sense as a naturalistic hypothesis than a supernatural hypothesis. The simulation hypothesis leans a lot on the idea that humans or post-humans in our hypothetical future will want to create “ancestor simulations”, i.e. realistic simulations of the simulators’ past, which is our present. If there were simulations, why would ancestor simulations be the most common type? Fantasy novels are roughly as popular as historical fiction or non-fiction books about history. Would simulations skew toward historical realism significantly more than books currently do? Why not simulate worlds with magic or other supernatural phenomena? (Maybe we should conclude that, since this is more interesting, ghosts probably exist in our simulation. Maybe God is simulated too?) The “ancestor simulation” idea is doing a lot of heavy lifting; it’s not clear that this is in any way a justifiable assumption rather than an arbitrary one. The more I dig into the reasoning behind the simulation hypothesis, the more it feels like Calvinball.[3]
The individual-scale simulation hypothesis also introduces new problems that are unique to it:
Simulation of other minds. If you wanted to build a robot that could perfectly simulate the humans you know best, the underlying software would need to be a digital mind. Since, on the individual-scale simulation hypothesis, you are a digital mind, then the other minds in the simulation — at least the ones you know well — are as real as you are. You could try to argue that these other minds only need to be partially simulated. For example, the mind simulations don’t need to be running when you aren’t observing or interacting with these people. But then why don’t these people report memory gaps? If the answer is that the simulation fills in the gaps with false memories, what process continually generates new false memories? Why would this process be less computationally expensive than just running the simulation normally? (You could also try to say that consciousness is some kind of switch that can be flipped on or off for some simulations but not others. But I can’t think of any theory of consciousness this would be compatible with, and it’s a problem for the individual-scale simulation hypothesis if it just starts making stuff up ad hoc to fit the hypothesis.)
If we decide that at least the people you know well must be fully simulated, in the same way you are, then what about the people they know well? What about the people who they know well know well? If everyone in the world is connected through six degrees of separation or fewer, then it seems like individual-scale simulations are actually impossible and all simulations must be world-scale simulations.
Abandoning the simulation of history at large scale. Individual-scale simulations don’t provide the same informational value that world-scale simulations might. When people talk about why “ancestor simulations” would supposedly be valuable or desired, they usually appeal to the notion of simulating historical events on a large scale. This obviously wouldn’t apply to individual-scale simulations. To the extent credence toward the simulation hypothesis depends on this, an individual-scale simulation hypothesis may be even less credible than a world-scale simulation hypothesis.
The Wikipedia page on the simulation hypothesis notes that it’s a contemporary twist on a centuries-old if not millennia-old idea. We’ve replaced dreams and evil demons with computers, but the underlying idea is largely the same. The reasons to reject it are largely the same, although the simulation argument has some unique weaknesses. That page is a good resource for finding still more arguments against the simulation hypothesis.[4]
Carroll, who is a physicist and cosmologist, also criticizes the anthropic reasoning of the simulation argument. I recommend reading his post; it’s short and well-written.
You could try to argue that, despite society’s best efforts, it will be impossible to prevent a large number of simulations from being created. Pursuing this line of argument requires speculating about the specific details of a distant, transhuman or post-human future. Would an individual creating a simulation be more like an individual today operating a meth lab or launching a nuclear ICBM? I’m not sure we can know the answer to this question. If dangerous or banned technologies can’t be controlled, what does this say about existential risk? Will far future, post-human terrorists be able to deploy doomsday devices? If so, that would undermine the simulation argument. (Will post-humans even have the desire to be terrorists, or is that a defect of humanity?)
Related to this are various arguments that the simulation argument is self-defeating. We infer things about the real universe from our perceived universe. We then conclude that our perceived universe is a simulation. But, if it is, this undermines our ability to infer anything about the real universe from our perceived universe. In fact, this undermines the inference that our perceived universe is a simulation within a real universe. So, the simulation argument defeats itself.
In addition to all the above, I would be curious to hear empirical, scientific arguments about the amount of computation that might be required for world-scale simulations, which would be partly applicable to individual-scale simulations. Obviously, our universe can’t run a full-scale, one-to-one simulation of our universe with perfect fidelity — that would require more computation, matter, and energy than our universe has. If you only simulate the solar system with perfect fidelity, you can pare that down a lot. You can make other assumptions to pare down the computation required. It’s much less important than all the arguments and considerations described above, but if we get a better understanding of approximately how difficult or costly a world-scale simulation might be, that could help put some considerations like computational waste in perspective.
I would not describe the fine-tuning argument and the Fermi paradox as strong evidence in favour of the simulation hypothesis. I would instead say that they are open questions for which a lot of different explanations have been proposed, with the simulation hypothesis offering only one of many possible resolutions.
As to the "importance" argument, we shouldn't count speculative future events as evidence of the importance of the present. I would say the mid-20th century was more important than today, because that's the closest we ever got to nuclear annihilation (plus like, WW2).
I've thought about this a lot too. My general response is that it is very hard to see what one could do differently on a moment-to-moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a Utilitarian), or have rights/autonomy to be respected, etc.
We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people's minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget about it if we finally figure it out, so they can keep the simulation going for whatever reasons they have.
Given all this, I don't see the point in trying to defy them or doing anything differently from what you'd do if this were the ground truth reality. Something like attempting to escape the simulation would most likely fail and risk getting you needlessly hurt in this world in the process.
If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.
At least, that's the way I see things right now. Your mileage may vary.
Hey! I'm requesting some help with "Actions for Impact", a Notion page with activities people can get involved in that take less than 30 minutes and contribute to EA cause areas. These include signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved
It should serve as a hub to leverage the size of the EA community when it's needed.
I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two spare per week, please DM me. I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have experience in EA community building, as I don't at all).
I'm thinking about organising a seminar series on space and existential risk, mostly because it's something I would really like to see. The webinar series would cover a wide range of topics:
I think this would be an online webinar series. Would this be something people would be interested in?
I have written this post introducing space and existential risk and this post on cosmic threats, and I've come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)
(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively.
(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere.
(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars. Each webinar will be available virtually.
(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community.
(e) Policy adaptation exercise SMPAG > AI safety: Use a mechanism mapping policy adaptation exercise to build on the success of the space sector in tackling asteroid impact risks (through the SMPAG) to figure out how organisations working on AI safety can be more effective.
(f) White paper on Russia and international space organisations: Russia’s involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. Russia was ousted from ESA, is still actively participating on the International Space Station, and is still a member of SMPAG but not participating. Figuring out why Russia stayed involved or didn’t with each organisation could be useful.
(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value.
(d) might be interesting to read
The space industry is well-funded and already cares a lot about demonstrating impact (using a broader definition of impact than EA) to justify its funding, so (a)-(c) might be possible with industry support, and to some extent already exists.
I think the overarching story behind (f) is relatively uncomplicated, particularly in the context of ongoing trade between Russia and Ukraine's supporters over oil, etc.: Roscosmos continued to collaborate with NASA et al. on things like the ISS because agreements remained in place and were too critical to suspend. Russia was never actually part of ESA, and I suspect many people would have preferred it if Roscosmos had been kicked off projects like ExoMars earlier. It probably helps that the engineers and cosmonauts on both sides are likely a good deal more level-headed than Dmitry Rogozin, but I don't think we'll hear what went on behind closed doors for a while...
I am a researcher in the space community and I recently wrote a post introducing the links between outer space and existential risk. I'm thinking about developing this into a sequence of posts on the topic. I plan to cover:
Would a sequence of posts on space and existential risk be something that people would be interested in? (Please agree- or disagree-vote this post.) I haven't seen much on space on the forum (apart from on space governance), so it would be something new.
Hey Jordan, I work in the space sector and I'm also based in London. I am currently working on a Government project assessing the impact of space weather on UK critical national infrastructure. I've written a little on the existential risk of space weather, too, e.g. https://forum.effectivealtruism.org/posts/9gjc4ok4GfwuyRASL/cosmic-rays-could-cause-major-electronic-disruption-and-pose
I'll message you as it would be good to connect!
Hi Matt. Sorry I missed your post and thanks for getting in touch! Your research sounds very interesting, I've messaged you directly :)
Greetings! I'm a doctoral candidate and I have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. I'm enthusiastic about volunteering my time to create visuals that effectively support EA causes.
Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations, but I'm also open to helping with outreach illustrations or social media visuals. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.
https://forum.effectivealtruism.org/events/cJnwCKtkNs6hc2MRp/panel-discussion-how-can-the-space-sector-overcome
This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles.
I searched Google for "gain of function UK" and the first hit was a petition to ban gain-of-function research in the UK, which only got 106 signatures out of the 10,000 required.
https://petition.parliament.uk/petitions/576773
How did this happen? Should we try again?