
Linch

@ Forethought

I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. Eg they don't do enough to actively promote their research, or to write it in a way that makes it easy to spread. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.

I think this is a systematic mistake most of the time. It's true that your impact often routes through a small number of people. However, only some of the time do you know who the decisionmakers are ahead of time (eg X philanthropic fund should fund Y project, B regulator should loosen regulations in C domain) and have a plan for directly reaching them. In the other cases, you probably need to reach at minimum thousands of vaguely-related/vaguely-interested people before the ~5 people most relevant to your research come across it.

 

Furthermore, popularity has other advantages:

  • If many people read your writing, it's more likely someone else will discover empirical mistakes, logical errors, or (on the upside) unexpected connections. If 100 randos read your article, it's unlikely any of them will discover a critical mistake. This becomes much more likely at 10,000+ randos.
  • Writing for a semi-popular audience forces some degree of simplicity and a different type of rigor. If you write for "informed people" or "vaguely related experts" as opposed to people in your subsubfield, you have fewer shared assumptions, and are forced to use less jargon and be more precise about your claims.
  • Recruitment and talent attraction. If your research agenda is good, you want other people to work on it. Popular writing is one of the best ways to get other smart people (with or without directly relevant expertise) to notice a problem exists and decide to dedicate time to it.
  • Personal career benefits: funders you haven't heard of before (and who haven't heard of you) are more likely to discover you and proactively offer funding. Employers working on related fields, or just who like your reasoning style, are more likely to actively recruit you.

Now of course it's possible to aim too much for popularity, and Goodhart on that. For example, by focusing on research topics that are popular rather than important, or on research directions/framings that are memetically fit rather than correct. Obsessing over metrics can also be bad for having the space to explore newer and more confusing ideas.

Nonetheless, on balance I think most researchers should be aware of what makes their research popular and in part gravitate towards that. Maybe they should spend >10% of their time on publicizing their work (not including "proof of work"-style paper writing, grant applications, etc), whereas instead many people seem to spend <5%.

[1] Academia (and for that matter, for-profit research within a company) has this problem less, because usually your peer group and potential collaborators in your sub-subfield are more well-defined and known to you. Also, academics care less about impact. Even so, I think people are leaving impact (and possibly career success) on the table by not being more popular. Eg if you work in theoretical econ you should aspire to have your theories applied by applied economists, if you work on the evolutionary dynamics of bees you should want to be read by people working on ants, if you work on themes in Renaissance art history you should aspire to be read by people studying Renaissance political philosophy, etc.

This means (imo) academics should be more willing to have academic blogs and Twitter threads, and tolerate (or even seek out) media coverage of their work.

Linch

"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"

One thing I've been floating about for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that is qualitatively not just quantitatively different from anything we've observed to date.

For background, I think normal, secular, humans live in 3 conceptually distinct but overlapping worlds:

  1. The physical world: matter, energy, atoms, stars, cells. A detached external observer might think that's all there is to our universe.
  2. The mathematical world. Mathematics, logic, abstract structure, rationality, "natural laws." Even many otherwise-strict "materialists" can see how the mathematical world is conceptually distinct from the physical one: mathematical truths seem conceptually different and perhaps deeper than mere physical facts. And if you're a robot/present-day LLM, you might just live in the first two worlds[2]. Some Kantians try to ground morality entirely within this world, in the logic of cooperation and strategic interaction.
  3. The world of consciousness. The experiential realm. Qualia, subjective experience, "what it's like to be me." Most secular moral philosophers treat this as where the real moral action is. A pure hedonic utilitarian might think conscious experience is the only thing that matters, but even other moral philosophies would consider conscious experience extremely important (usually the most important).

For the purposes of this post, I'm not that interested in delineating whether these worlds are truly different or just conceptually interesting ways to talk about things (ie I'm not taking a strong position on mathematical platonism or consciousness dualism).

But what's interesting to me is how these different worlds ground morality/value, what some philosophers would call "axiology." When people try to solely ground morality in the first two worlds, and even more so when people try to ground morality in the first world alone[3], deep believers in all three worlds (which I think is most people, and most philosophers) think they're entirely missing the point! It seems almost self-evident that conscious experience is much more important than the arrangement of mere rocks, or bloodless abstract game theory of feeling-less zombies!

But are these the only 3 worlds? Is it possible that there are other morally relevant worlds, and in particular worlds that would self-evidently seem so much more important than subjective experience if only we knew about them?

Perhaps.

For example, (most) religious people believe they have an answer:

  1. The supernatural world. The world of spirits, Gods, heavens and hells. Religious traditions often claim that divine or transcendent value is qualitatively, not just quantitatively, superior to natural goods. Saying that "heaven is infinite bliss" is a secular/materialist approximation of something much deeper. (Other handles: the ineffable, the sublime)

Now I think the religious people are wrong about the world as we see it today. But do we have strong reason to think that the three worlds as we know them are the only ones left? I think no.

In particular, we have two distinct reasons to think future intelligences can discover other worlds:

A. AIs, including future AIs, will be a distinct type of mind from human minds. Just as most people today believe that humans (and other animals) have qualia that present-day AIs do not have, we should also think it's plausible that different mental architectures will allow AIs to have moral goods that we cannot experience or perhaps even conceive of.

B. Superintelligences (likely digital intelligences, though in theory they could also be our posthuman descendants) will be able to search for further moral goods. At some point in the future (if we don't all die first), it will become trivial to spend more brainpower than has ever existed in all of human science and philosophy combined to search for other sources of moral value. This could come from engineering unique environmental arrangements of matter, unique structures of minds, or something else entirely.

So one day our descendants may discover worlds five, six, and so on: sources of moral value qualitatively distinct and superior to what we have access to, in the same way that grounding morality purely in game theory or entropy feels foolish to most experiencing humans today.

If true, this is a big deal! [4]

This seems overall quite possible to me. But is it probable?

I don't have a good sense of how likely this all is. Trying to estimate it feels beyond my forecasting or philosophical competence. But it seems plausible enough, and interesting enough, that I wanted to bring it to people's attention, in case other people have ideas on how to extend it.

Appendix A:

Existing literature: This concept is widespread but undertheorized. Mill's qualitative distinction among pleasures can point us in this direction; Bostrom's "Letter from Utopia" is the most vivid articulation ("What I feel is as far beyond feelings as what I think is beyond thoughts"); Danaher (2021) coined "axiological possibility space"; Ord's The Precipice argues we have "barely begun the ascent" and our investigations of flourishing may be "like astronomy before telescopes." According to a search from Claude, Nagel, Jackson, and Chalmers "collectively demonstrate that the space of possible conscious experiences vastly exceeds human experience." Banks's concept of Subliming, where "the very ideas, the actual concepts of good, of fairness and of justice just ceased to matter," is the most philosophically precise depiction I've seen in science fiction.

[1] Though I've seen shades of it in academic philosophy, EA/longtermist writing, science fiction/fantasy, and discussions of religion

[2] This is disputed.

[3] eg entropy as the guiding factor of morality, a la Beff Jezos.

[4] And if false, but convincing enough to be an attractor state for our descendants, this will sadly also be a very big deal.

On the off chance anybody is both interested in AI news and missed it: Anthropic sued the DoW and other government officials/agencies over the supply chain risk designation, in the DC and Northern California circuits. The full text of the Northern California complaint is here:

The primary complaints:

  1. First Amendment retaliation. Anthropic alleges that Pentagon officials illegally retaliated against the company for its position on AI safety. They argue that Trump, Hegseth, and others wanted to punish Anthropic for protected speech, citing public social media posts and other statements as evidence that the punishment is ideological in nature.
  2. Misuse of the supply chain risk designation. Anthropic was officially designated a supply chain risk, which requires defense contractors to certify they don't use Claude in their Pentagon work. Anthropic argues that this is a misuse of the SCR designation, which Congress intended for foreign actors, and that Anthropic clearly does not pose a supply-chain risk under a plain reading of the law.
  3. Lack of Due Process (Fifth Amendment violation). "The Challenged Actions arbitrarily deprive Anthropic of those interests without any process, much less due process."
  4. Ultra vires. Anthropic alleges that the Presidential Directive requiring every federal agency to immediately cease all use of Anthropic’s technology exceeds the limits of the President's authority as granted by Congress.
  5. Administrative Procedure Act violation. Similar to the above, Anthropic argues that the administration violated the Administrative Procedure Act, and that the sanctions exceed the authority Congress granted to the relevant agencies.

IANAL etc., but in my personal opinion #2 seems very clear-cut on a common-language and precedent-based reading. #1 also seems strong. Sources I randomly skimmed online thought #3-#5 had a good case too, but I don't have an independent view.

The DC complaint looks less meaty (and I didn't read it)

I think they were laughed at enough after the Wired article (from here and elsewhere) that maintaining the previous line was no longer tenable for them. 

I also separately think their current stated position is more correct than the previous one, but I'm just observing that incentives are a larger fraction of the story than people might otherwise be reading them as.

This exact line was later used in the Anthropic lawsuit against the DoW/DoD:

Department officials have even expressed concerns about the consequences of losing access to Claude.30 Describing the dispute between Anthropic and the Department, one official stated that “[t]he only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.” 31

Which I think is further evidence for my original contention that saying this was a strategic error.

So I agree that humanity might just choose not to reach the stars. It seems unlikely to me that nobody (or nobody with sufficient resources) would want to do this post-AGI, but it's possible humanity as a whole prevents other people from expanding (eg worries about building independent power centers that might harm the safety of Earth, or spoilt negotiations, or more idiosyncratic factors). 

This is not the most likely existential risk imo, but certainly one to be aware of.

That said, the 1960s-70s moon landings were a large net resource loss. They cost ~half a percentage point of GDP (!) annually for multiple years and didn't get anything in return other than a few innovations and one-upping the Soviets. Seems like a pretty different story!

There are two common models of space colonization people sometimes allude to, neither of which I think is particularly likely. 

Model 1 (“normal colonization”) is that space colonization will look something like Earth colonization, e.g. the way the first humans expanded to the Polynesian islands. So your boat (rover/ship/probe) hops to one island (planet), you build up a civilization, and then you send your probes onwards to the next couple of nearby planets, maybe saving up a bunch of resources if you've colonized nearby star systems (eg your galaxy) and need to send a bigger ship to more distant stars. So it looks like either orderly civilizational growth or an evolutionary process.

I don't think this model is really likely because von Neumann probes will be really cheap relative to the carrying capacity of star systems. So I don't think the intuitive "slow waves of colonization" model makes a lot of sense on a galactic scale. 

I don’t think my view here is particularly controversial. My impression is that while the first model is common in science fiction, nobody in the futurism/x-risk/etc field really believes it.

Model 2 (“mad dash”) is that you race ahead as soon as you reach relativistic speeds. So as soon as your science and industry has advanced enough for your probes to reach appreciable fractions of c, you start blasting out von Neumann probes to the far reaches of the affectable universe.

I think this model is more plausible, but still unlikely. A small temporal delay is worth it to develop more advanced spacefaring technology.

My guess is that even if all you care about is maximizing space colonization, it still makes sense to delay some time before you launch your first "serious" interstellar space probe, rather than do it as soon as possible[1].

Whether you can reach the furthest galaxies is determined by something like[2]:

total time to reach a galaxy = delay + distance/speed 

So you want to delay and keep researching until the marginal speed gain from additional R&D time is lower than the marginal cost of the delay. 

I don't have a sense of how long this is, but intuitively it feels more like decades or centuries, maybe even slightly longer, than months or years. The theoretically reachable universe is 16-18 billion light-years away, so a 100-year delay is worth it if you can increase your speed by just ~1/100 millionth of c [4].
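As a rough sanity check on that arithmetic, here's a minimal sketch of the delay-vs-speed tradeoff. The numbers are illustrative only, and it ignores cosmological expansion and relativistic effects (per footnote [2]):

```python
# Minimal wait-calculation sketch: total time = delay + distance / speed.
# Illustrative numbers; ignores expansion and relativistic corrections.

def arrival_time_years(delay_years, distance_ly, speed_c):
    """Years until arrival: R&D delay plus travel time at a fraction of c."""
    return delay_years + distance_ly / speed_c

DISTANCE_LY = 16e9   # ~distance to the edge of the affectable universe
BASE_SPEED = 0.99    # probe speed (fraction of c) without extra R&D

baseline = arrival_time_years(0, DISTANCE_LY, BASE_SPEED)
# 100 years of extra R&D that buys an extra 1e-8 c of probe speed:
delayed = arrival_time_years(100, DISTANCE_LY, BASE_SPEED + 1e-8)

print(delayed < baseline)  # True: the later launch still arrives first
```

At these distances, an extra ~1e-8 c of speed saves roughly 160 years of travel time, so the 100-year delay comes out ahead; plugging in a gain of only 1e-9 c shows that delay would not be worth it.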

For energy/resource reasons you might want to expand to nearby star systems first in order to send the fastest possible probes, but note again that the delay before sending your first probe is at worst a constant amount of time. There's the possible exception of being able to accelerate R&D in other star systems, e.g. because you need multiple star systems of compute in order to do the R&D well. But this is trickier than it looks! The lightspeed communication barrier means sending information is slow, so you're really giving up a lot in latency to use more compute. A caveat here is that you might want your supercomputer to be bigger than the home system's resources, so maybe you want to capture a nearby star system and turn that into your core R&D department. Though that takes a while to build out, too.

Here are a few models of space colonization that I think are more likely:

  1. Model 3 (Deliberate + build in the home system, then spam): Research deeply until reaching very deep technological levels[3], then suddenly spam a ton of probes everywhere at very high fractions of c. I think this is the implicit model in Sandberg's "Space Races" paper.
    1. Assuming technological maturity, Sandberg had two different models for exploration:
      1. one that races earlier to grab nearby interstellar resources
      2. and one that waits longer and saves up to send faster probes (where the constraint on probe speed is primarily energy, not knowledge)
    2. In the technological maturity case, Sandberg concludes that racing earlier is better.
    3. Note however that Sandberg’s model presumes technological maturity, so in a sense his analysis starts a bit after where mine ends.
    4. I think his model is roughly correct in worlds where reaching technological maturity is relatively quick. This assumption seems plausible enough to me, but not guaranteed.
  2. Model 4 (Colonize in nearby systems, deliberate in home systems, then spam): Colonize nearby systems first, while continuously researching at home. Turn nearby systems into Dyson swarms etc so they have massive (and flexible!) industrial capacity, while simultaneously researching in the home system what are the best ways to send fast ships.
    1. In this model you’re first colonizing systems within say 50-500 years (not light-years, years) of your home system
    2. And then once your home systems figure out the optimal way to send probes, they tell the colonies what to do next (at the speed of light), and the colonies (plus maybe the home system too at that point) spam probes at near the speed of light.
  3. Model 5 (Waves: Deliberate, spam, deliberate, spam, deliberate…): The home system spends enough time thinking/researching/building until they’ve reached a plausible plateau. They send probes out for a while, roughly until distances are such that sending more probes from home will be slower to reach distant shores than it’d take probes from colonies to reach them. Then they switch back to deliberation mode and keep deliberating until/if they invent a new mode of transportation that’s faster than the head start colonies have, then start sending probes again to distant stars, intending to overtake the front wave of probes from the colonies at sufficiently distant stars.
    1. The colonies repeat the same strategy as home, first building up and sending a bunch of probes out, and then switching to “research” mode.
    2. This keeps going until we are very confident you can’t send faster ships, and then the expanding core switches from deliberation and spread into spending their energy on more terminal moral goods.
  4. Model 6 (Deliberate, spam, deliberate and signal): Like the previous model, but after the first wave of probes, the home system (and other systems in the expanding core) no longer sends more probes. Instead they switch to spending all their time on research and deliberation. If/when they discover a faster mode of transportation, they signal (at exactly c) the new strategy to distant systems at the frontier, so those systems can switch their expansion technology.
    1. Compared to Model 5, this strategy has the advantage of the speed of light always being faster than whatever mode of transportation you have for physical ships. So if your colonies are “on the way” to distant stars, it’s always faster to tell the colonies what to do than to send your own probes.
    2. This strategy might seem strictly superior to Model 5. But this isn’t necessarily the case! For example, galaxies tend to be ~2D, whereas the affectable universe is ~3D. So for different galaxies not on the same plane, it might often be more efficient to send probes directly than to wait for light-speed communication to hit a colony on the current frontier, and then send a probe from there.
      1. The galactic disk is ~1000 light-years thick but intergalactic targets can be millions of light-years away in any direction, so for most target galaxies there's no frontier colony meaningfully closer than the home system, or at least the “core”.
  5. Model 7 (???): Excited to hear other models I haven't thought of!
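To make the 2D/3D geometry point in Model 6 concrete, here's a toy check (hypothetical distances) that for a target galaxy lying off the galactic plane, an in-plane frontier colony is never closer than the home system:

```python
import math

# Toy geometry for Model 6's caveat: the galactic disk is roughly 2D, but
# target galaxies sit in 3D. For a target perpendicular to the plane, a
# frontier colony in the plane is farther from it than home is.
# Distances in light-years; numbers are hypothetical.

def distance_from_home(target_ly):
    # Home system straight to the target.
    return target_ly

def distance_from_colony(colony_ly, target_ly):
    # Colony lies in the plane, target perpendicular to it, so the
    # colony-to-target leg is the hypotenuse of a right triangle.
    return math.hypot(colony_ly, target_ly)

TARGET_LY = 3e6   # a galaxy ~3 million light-years away, off-plane
COLONY_LY = 5e4   # a frontier colony ~50,000 light-years out, in-plane

print(distance_from_colony(COLONY_LY, TARGET_LY) > distance_from_home(TARGET_LY))  # True
```

So relaying through the frontier only helps when colonies are genuinely "on the way"; for most intergalactic targets, nothing on the disk's frontier is meaningfully closer than the core.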

 

I’m neither an astrophysicist nor in any other way a “real” space expert and I’ve spent less than a day thinking about the relevant dynamics, so let me know if you think I’m wrong or you have additional thoughts! Very happy to be corrected. :) 


[1] Modulo other reasons for going faster, like worries about single-system x-risk, stagnation, meme wars etc. There are also other reasons to go slower, for example worries about interstellar x-risks/ vulnerable universe, wanting more value certainty and fear of value drift, being scared of aliens, etc.

[2] + relativistic effects and other cosmological effects that I don't understand. I never studied relativity but I'd be surprised if it changes the OOM calculus.

[3] where we predict additional research time yields diminishing returns relative to acting on current knowledge

[4] See also earlier work by Kennedy. Kennedy 2006's 'wait calculation' formalizes a version of this tradeoff for nearby stars and gets centuries-scale optimal delays, though his model doesn't consider the intergalactic case and has additional assumptions about transportation speeds that I’m unsure about.

Know Your Meme says it started off as video game jargon; my impression is that it's pretty common online outside of that.

"The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting.

I'm confused. Why would you ever say this before a negotiation :O

Linch
60% disagree

How much of a post are you comfortable for AI to write?

One day AIs may be much better at writing, but that day is not today.
