"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"
One idea I've been floating for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value, as yet inaccessible to us, that are qualitatively and not just quantitatively different from anything we've observed to date.
For background, I think normal, secular humans live in 3 conceptually distinct but overlapping worlds:
For the purposes of this post, I'm not that interested in delineating whether these worlds are truly different or just conceptually interesting ways to talk about things (ie I'm not taking a strong position on mathematical platonism or consciousness dualism).
But what's interesting to me is how these different worlds ground morality/value, what some philosophers would call "axiology." When people try to ground morality solely in the first two worlds, and even more so when people try to ground morality in the first world alone[3], deep believers in all three worlds (which I think is most people, and most philosophers[2]) think they're entirely missing the point! It seems almost self-evident that conscious experience is much more important than the arrangement of mere rocks, or the bloodless abstract game theory of feelingless zombies!
But are these the only 3 worlds? Is it possible that there are other morally relevant worlds, and in particular worlds that would self-evidently seem so much more important than subjective experience, if only we knew about them?
Perhaps.
For example, (most) religious people believe they have an answer:
Now I think the religious people are wrong about the world as we see it today. But do we have strong reason to think that the three worlds as we know them are the only ones left? I think no.
In particular, we have two distinct reasons to think future intelligences can discover other worlds:
A. AIs, including future AIs, will be a distinct type of mind from human minds. Just as most people today believe that humans (and other animals) have qualia that present-day AIs do not have, we should also think it's plausible that different mental architectures will allow AIs to have moral goods that we cannot experience or perhaps even conceive of.
B. Superintelligences (likely digital intelligences, though in theory could also be our posthuman descendants) will be able to search for further moral goods. At some point in the future (if we don't all die first), it will become trivial to spend more brainpower than has ever existed in all of human science and philosophy combined to search for other sources of moral value. This can come from engineering unique environmental arrangements of matter, unique structures of minds, or something else entirely.
So one day our descendants may discover worlds five, six, and so on: sources of moral value qualitatively distinct and superior to what we have access to, in the same way that grounding morality purely in game theory or entropy feels foolish to most experiencing humans today.
If true, this is a big deal! [4]
This seems overall quite possible to me. But is it probable?
I don't have a good sense of how likely this all is. Trying to estimate it feels beyond my forecasting or philosophical competence. But it seems plausible enough, and interesting enough, that I wanted to bring it to people's attention, in case other people have ideas on how to extend it.
Appendix A:
Existing literature: This concept is widespread but undertheorized. Mill's qualitative distinction among pleasures can point us in this direction; Bostrom's "Letter from Utopia" is the most vivid articulation ("What I feel is as far beyond feelings as what I think is beyond thoughts"); Danaher (2021) coined "axiological possibility space"; Ord's The Precipice argues we have "barely begun the ascent" and our investigations of flourishing may be "like astronomy before telescopes." According to a search from Claude, Nagel, Jackson, and Chalmers "collectively demonstrate that the space of possible conscious experiences vastly exceeds human experience." Banks's concept of Subliming, where "the very ideas, the actual concepts of good, of fairness and of justice just ceased to matter," is the most philosophically precise depiction I've seen in science fiction.
[1] Though I've seen shades of it in academic philosophy, EA/longtermist writing, science fiction/fantasy, and discussions of religion
[2] This is disputed.
[3] eg entropy as the guiding factor of morality, a la Beff Jezos.
[4] And if false, but convincing enough to be an attractor state for our descendants, this will sadly also be a very big deal.
On the off chance anybody is both interested in AI news and missed it: Anthropic sued the DoW and other government officials/agencies over the supply chain risk designation, in the DC and Northern California circuits. The full text of the Northern California complaint is here:
The primary complaints:
IANAL etc., but in my personal opinion #2 seems very clearcut as a common-language and precedent reading of these things. #1 also seems strong. Sources I randomly skimmed online thought #3-#5 had a good case too, but I don't have an independent view.
The DC complaint looks less meaty (and I didn't read it).
I think they were laughed at enough after the Wired article (from here and elsewhere) that maintaining the previous line was no longer tenable for them.
I also separately think their current stated position is more correct than the previous one, but I'm just observing that incentives are a larger fraction of the story than people might otherwise assume.
This exact line was later used in the Anthropic lawsuit against the DoW/DoD:
Department officials have even expressed concerns about the consequences of losing access to Claude. Describing the dispute between Anthropic and the Department, one official stated that “[t]he only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
Which I think is further evidence for my original contention that saying this was a strategic error.
So I agree that humanity might just choose not to reach the stars. It seems unlikely to me that nobody (or nobody with sufficient resources) would want to do this post-AGI, but it's possible humanity as a whole prevents other people from expanding (eg worries about building independent power centers that might harm the safety of Earth, or spoilt negotiations, or more idiosyncratic factors).
This is not the most likely existential risk imo, but certainly one to be aware of.
That said, the 1960s-70s moon landing was a large net resource loss. It cost ~half a percentage point of GDP (!) annually for multiple years and didn't get anything in return other than a few innovations and one-upping the Soviets. Seems like a pretty different story!
There are two common models of space colonization people sometimes allude to, neither of which I think is particularly likely.
Model 1 (“normal colonization”) is that space colonization will look something like Earth colonization, e.g. the way the first humans expanded across the Polynesian islands. So your boat (rover/ship/probe) hops to one island (planet), you build up a civilization, and then you send your probes onwards to the next couple of nearby planets, maybe saving up a bunch of resources once you've colonized the nearby star systems (eg your galaxy) and need to send a bigger ship to more distant stars. So it looks like either orderly civilizational growth or an evolutionary process.
I don't think this model is really likely because von Neumann probes will be really cheap relative to the carrying capacity of star systems. So I don't think the intuitive "slow waves of colonization" model makes a lot of sense on a galactic scale.
I don’t think my view here is particularly controversial. My impression is that while the first model is common in science fiction, nobody in the futurism/x-risk/etc field really believes it.
Model 2 (“mad dash”) is that you race ahead as soon as you reach relativistic speeds. So as soon as your science and industry have advanced enough for your probes to reach appreciable fractions of c, you start blasting out von Neumann probes to the far reaches of the affectable universe.
I think this model is more plausible, but still unlikely. A small temporal delay is worth it to develop more advanced spacefaring technology.
My guess is that even if all you care about is maximizing space colonization, it still makes sense to delay some time before you launch your first "serious" interstellar space probe, rather than do it as soon as possible[1].
Whether you can reach the furthest galaxies is determined by something like[2]:
total time to reach a galaxy = delay + distance/speed
So you want to delay and keep researching until the marginal speed gain from additional R&D time is lower than the marginal cost of the delay[3].
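Spelled out, here's a minimal formalization of that stopping rule (my notation, not from any source: d is the distance to the target galaxy, v(τ) is the best probe speed achievable after delaying launch by τ, ignoring the relativistic/cosmological effects of footnote [2]):

```latex
T(\tau) = \tau + \frac{d}{v(\tau)}, \qquad
\frac{dT}{d\tau} = 1 - \frac{d\,v'(\tau)}{v(\tau)^2} = 0
\;\Longrightarrow\;
v'(\tau^*) = \frac{v(\tau^*)^2}{d}
```

Since d is enormous on intergalactic scales, the break-even marginal speed gain v(τ)²/d per year of delay is tiny, which is why long delays can pay off.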
I don't have a sense of how long this is, but intuitively it feels more like decades or centuries, maybe even slightly longer, than months or years. The most distant theoretically reachable galaxies are 16-18 billion light-years away, so a 100-year delay is worth it if it lets you increase your probes' speed by just 1/100-millionth of c [4].
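As a quick sanity check on that arithmetic, here's a throwaway sketch (the 0.99c baseline speed is a made-up round number; the distance and speed-gain figures are just the ones from the paragraph above):

```python
# Toy check of the "is a 100-year delay worth it?" arithmetic.
# Units: c = 1, distances in light-years, times in years.

distance = 16e9   # ~16 billion light-years to the most distant reachable galaxies
delay = 100.0     # extra years of R&D before launching the first serious probes
v0 = 0.99         # assumed probe speed without the extra R&D (fraction of c)
dv = 1e-8         # speed gain bought by the delay (1/100-millionth of c)

arrive_now = distance / v0                   # launch immediately
arrive_later = delay + distance / (v0 + dv)  # wait 100 years, fly slightly faster

print(f"net years saved by waiting: {arrive_now - arrive_later:.0f}")
# ~63: the ~163 years saved in transit outweigh the 100-year delay.
```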
For energy/resource reasons you might want to expand to nearby star systems first so as to send the fastest possible probes, but note again that the delay before sending your first probe costs you at worst a constant amount of time. There's a possible exception if you can accelerate R&D in other star systems, eg because you need multiple star systems' worth of compute to do the R&D well. But this is trickier than it looks! The lightspeed communication barrier means sending information between systems is slow, so you're giving up a lot in latency to use more compute. A caveat here is that you might want your supercomputer to be bigger than your home system's resources, so maybe you want to capture a nearby star system and turn that into your core R&D department. Though that takes a while to build out, too.
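To put a rough number on that latency point, a toy illustration (4.2 light-years is roughly the distance to Proxima Centauri, picked only as an example of a "nearby" system):

```python
# Round-trip lightspeed latency to a neighboring star system.

distance_ly = 4.2                   # ~distance to Proxima Centauri
round_trip_years = 2 * distance_ly  # signals travel at c in the best case

print(f"one request/response cycle: {round_trip_years:.1f} years")
# Every synchronization step in a two-system R&D cluster costs ~8.4 years,
# no matter how much extra compute the second system contributes.
```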
Here are a few models of space colonization that I think are more likely:
I’m neither an astrophysicist nor in any other way a “real” space expert and I’ve spent less than a day thinking about the relevant dynamics, so let me know if you think I’m wrong or you have additional thoughts! Very happy to be corrected. :)
[1] Modulo other reasons for going faster, like worries about single-system x-risk, stagnation, meme wars etc. There are also other reasons to go slower, for example worries about interstellar x-risks/ vulnerable universe, wanting more value certainty and fear of value drift, being scared of aliens, etc.
[2] Plus relativistic effects and other cosmological effects that I don't understand. I never studied relativity, but I'd be surprised if they change the OOM calculus.
[3] where we predict additional research time yields diminishing returns relative to acting on current knowledge
[4] See also earlier work by Kennedy: his 2006 'wait calculation' formalizes a version of this tradeoff for nearby stars and gets centuries-scale optimal delays, though his model doesn't consider the intergalactic case and makes additional assumptions about transportation speeds that I'm unsure about.
Know Your Meme says it started off as video game jargon; my impression is that it's pretty common online outside of that.
I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on making their research popular. Eg they don't do enough to actively promote their research, or don't write it up in a way that makes it easy for it to become popular. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.
I think this is a systematic mistake most of the time. It's true that your impact often routes through a small number of people. However, only some of the time do you know who the decisionmakers are ahead of time (eg X philanthropic fund should fund Y project, B regulator should loosen regulations in C domain) and have a plan for directly reaching them. For the other cases, you probably need to reach at minimum thousands of vaguely-related/vaguely-interested people before the ~5 people most relevant to your research come across it.
Furthermore, popularity has other advantages:
Now of course it's possible to aim too much for popularity, and Goodhart on that. For example, by focusing on research topics that are popular rather than important, or on research directions/framings that are memetically fit rather than correct. Obsessing over metrics can also be bad for having the space to explore newer and more confusing ideas.
Nonetheless, on balance I think most researchers should be aware of what makes their research popular and partly gravitate towards that. I think they should maybe spend >10% of their time on publicizing their work (not including "proof of work"-style paper writing, grant applications, etc), whereas many people seem to spend <5%.
[1] Academia (and for that matter, for-profit research within a company) has this problem less, because usually your peer group and potential collaborators in your sub-subfield are more well-defined and known to you. Also, academics care less about impact. Even so, I think people are leaving impact (and possibly career success) on the table by not being more popular. Eg if you work in theoretical econ you should aspire to have your theories be applied by applied economists, if you work on the evolutionary dynamics of bees you should want to be read by people working on ants, if you work on themes in Renaissance art history you should aspire to be read by people studying Renaissance political philosophy, etc.
This means (imo) academics should be more willing to have academic blogs and Twitter threads, and tolerate (or even seek out) media coverage of their work.