From Forrest Landry’s essay:

Did you know that if we Earthlings set up a colony on Mars, the two cultures would eventually diverge over time, simply because of communication constraints?

This means that the technological development paths of the two planets will eventually diverge significantly too.

This divergence would eventually reach a point where neither planet can truly track which types of weapons of mass destruction 'the other world' might have developed and could use against it.

Neither would know what specific forms of harm the other could likely cause. Each could only know that the other had the full capability, at any moment, without any warning, and with no possible defence, to completely and utterly destroy their world (total ecocide).

This inevitably leads to an unstable 'assured destruction' situation. One or the other of those planets will for sure take the '1st strike advantage' and thus totally annihilate the other.

The required bandwidth and latency are simply not there, and it is not possible for the two cultures to truly stay in sync -- to be, become, and remain, indefinitely, one culture. There will eventually be cultural divergences and discrepancies, which will increase over time, due to the kinds of non-linearity inherent in all human cultures and processes.
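For a rough sense of the latency constraint alone: taking the commonly cited Earth-Mars separation of roughly 54.6 million km at closest approach and roughly 401 million km at maximum, a one-way signal already takes minutes, so anything like real-time cultural synchronization is physically excluded. A minimal sketch of the arithmetic (distances are approximate round figures):

```python
# One-way light delay between Earth and Mars at approximate min/max separation.
# Distances are rough published figures; actual values vary with the orbits.
C_KM_PER_S = 299_792.458  # speed of light in km/s

distances_km = {
    "closest approach (~54.6 million km)": 54.6e6,
    "maximum separation (~401 million km)": 401.0e6,
}

for label, d in distances_km.items():
    delay_min = d / C_KM_PER_S / 60
    print(f"{label}: one-way light delay ~ {delay_min:.1f} minutes")
```

That is roughly 3 minutes at best and over 22 minutes at worst, one way; a round-trip exchange never gets below about six minutes.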

Seems like a motte-and-bailey argument. Either the claim is that there would be some differences, which seems true but irrelevant, or that there would necessarily be huge discrepancies, which seems false. For a long time the UK and Australia had a communication latency of months, but I don't see any evidence that the Aussies had any desire to attack the mother country, even if they could. Mars is close enough that people could go back and forth, both temporarily for work and permanently as immigrants, including weapons inspectors if required.

The claim is that there would be some differences in the short term (dozens of years), and that these would necessarily _become_ huge discrepancies over the long term (hundreds or thousands of years).  This amplification of difference is inevitable due to multiple nonlinear effects operating in all types of life ecosystems, cultures, etc.  A few thousand years is a trivial timescale when it comes to planetary evolution dynamics and life cycles, which are generally measured in billions of years.
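To illustrate (and only illustrate) the amplification claim, here is a minimal toy model under assumed, arbitrary rates: a scalar 'difference' between the two cultures grows by some internal rate r each year and is damped by a coupling rate k representing whatever reconciliation latency-limited communication achieves. Whenever r exceeds k, the gap compounds -- negligible over decades, enormous over centuries. The specific numbers below are placeholders, not estimates:

```python
# Toy model (illustrative only): cultural difference d grows by an internal
# amplification rate r and is reduced by a coupling/reconciliation rate k per year.
# r, k, and d0 are arbitrary placeholder values, not estimates.
def divergence(years, d0=0.01, r=0.02, k=0.005):
    d = d0
    trajectory = []
    for _ in range(years):
        d *= (1 + r - k)   # net exponential growth whenever r > k
        trajectory.append(d)
    return trajectory

traj = divergence(1000)
print(f"after 100 years : {traj[99]:.3f}")    # still small
print(f"after 1000 years: {traj[999]:,.0f}")  # many orders of magnitude larger
```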

Yeah, I really don't see why these huge discrepancies, large enough to force a war, are inevitable when they are close enough that we could watch the same evening news, tweet on the same Twitter, and even visit each other's planets to get our PhDs before returning home. The British and Roman Empires managed to stick together for a pretty long time despite much worse communication gaps, and their collapses weren't related to increasing cultural divergence among colonists.

Of course, absent technology, the rate of cultural evolution, and thus of divergence, for the British and Roman Empires was very much slower than it would be for modern technology-enabled planets.  The rate of change (and thus of divergence) of both of these historical examples was very slow compared to our 'evening news watching' society. Hence, historically they did go "a long time" without either ever developing anything even close to the necessary tech capability to completely kill the other one.  Ie; it is not just "the total elapsed time" that matters so much as the "net aggregate functional difference over the accumulated rate of change".

Also, I notice that our modern 'broadcast news watching' society has become, also because of technology (and increasingly), more and more politically polarized and socially balkanized. Surely this is at least some sort of evidence of technology being associated with cultural change, with the rate of cultural change, and thus also with the overall eventual degree of divergence -- in addition to prior cultural changes being the reason new technologies become developed, and hence of eventually even more increased divergence, etc (for example, because of the historical emergence of rationalism, the "western enlightenment", etc, over the last few hundred years).  From there, notice that 1; strong overall societal polarization and local balkanization, combined with 2; rapidly advancing and increasingly divergent technological capabilities, along with 3; no common necessary basis of mutuality of survival (ie, MAD and the reality of global nuclear winter), eventually leads to 4; very high first-strike game-theoretic potentials and thus, given high levels of at least some type of tech power, significant culture-destroying consequences somewhere.

[Just commenting on the part you copied]

Feels way too overconfident. Would the cultures diverge due to communication constraints? Seems likely, though I could also imagine pathways by which it wouldn't happen significantly, such as if a singleton were already reached.

Would technological development diverge significantly, conditional on the above? Not necessarily, imho. If we don't have a self-sufficient colony on Mars before we reach "technological maturity" (e.g., with APM and ASI), then presumably no (tech would hardly progress further at all, then).

Would tech divergence imply each world can't truly track whatever weapons the other world had? Again, not necessarily. Perhaps one world had better tech and could just surveil the other. 

Would there be a for-sure 1st strike advantage? Again, seems debatable.

Etcetera.

I can see how the “for sure” makes it look overconfident.

Suggest reading the linked-to post. That addresses most of your questions.

As to your idea of having some artificial super-intelligent singleton lead to some kind of alignment between, or technological maturity of, both planetary cultures, if that’s what you meant, please see here: https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Maybe there is a simpler way to state the idea:

  • 1; would the two (planetary) cultures diverge?  
    • (yes, for a variety of easy reasons).
  • 2; would this divergence become more significant over time? 
    • (yes, as at least some of any differences will inherently be amplified by multiple factors, on multiple levels, of multiple types of process, for multiple reasons, over multiple hundreds to thousands of years, and as differences in any one planetary cultural/functional aspect tend to create, and become entangled with, differences in multiple other cultural/functional aspects).
  • 3; would the degree of divergence, over time, eventually become significant -- ie, in the sense that it results in some sort of 1st strike game-theory dynamic? 
    • (yes, insofar as cultural development differences cannot fail to also become fully entangled with technological development differences; a toy payoff sketch of the resulting dynamic follows below).
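To make the claimed first-strike dynamic concrete, here is a hedged two-player toy model (my own simplified sketch, not a formal derivation). The ordinal payoffs encode the assumptions stated above: being destroyed is worst; mutual restraint leaves an untrackable, undeterrable rival in place; a successful first strike removes that ongoing risk. Under exactly those assumptions, striking weakly dominates waiting -- and relaxing any one of them (for example, a credible surviving second strike) removes the dominance:

```python
# Hedged toy payoff model of the claimed first-strike instability.
# Ordinal placeholder payoffs (assumptions, not measurements):
#   -1 = destroyed, 0 = mutual restraint (untrackable rival remains), +1 = rival risk removed.
STRIKE, WAIT = "strike", "wait"

def payoffs(a, b):
    """Return (payoff_A, payoff_B) for one simultaneous-move round."""
    if a == STRIKE and b == WAIT:
        return 1, -1               # A pre-empts; B cannot retaliate by assumption
    if a == WAIT and b == STRIKE:
        return -1, 1
    if a == STRIKE and b == STRIKE:
        return -1, -1              # simultaneous launch, both destroyed
    return 0, 0                    # mutual restraint, risk persists

for a in (WAIT, STRIKE):
    for b in (WAIT, STRIKE):
        print(f"A={a:<6} B={b:<6} -> {payoffs(a, b)}")
# Against WAIT, striking pays 1 > 0; against STRIKE it pays -1 vs -1,
# so STRIKE weakly dominates WAIT under these (contestable) payoffs.
```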

So then the question becomes: "is it even possible to somehow constrain any or all of these three process factors, to at least the minimum degree necessary, so as to adequately prevent that factor, and thus the overall sequence, from occurring?".

In regard to this last question, after a lot of varied simplifications, it eventually becomes equivalent to asking: "can any type of inter-planetary linear causative process (which is itself constrained by speed-of-light latency limits) ever fully constrain (to at least the minimum degree necessary) all types of non-linear local (ie; intra-planetary) causative process?".

And the answer to this last question is simply "no", for basic principled reasons.
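One way to get an intuition for that "no" (a toy numerical sketch under my own simplifying assumptions, not a proof): suppose a local deviation doubles every T steps, and a remote controller observes it only after a round-trip delay of D steps and can, at best, subtract exactly the stale value it observed. Whenever D exceeds T, the residual left after each such 'perfect' delayed correction is larger than the value that was observed, so the deviation ratchets upward no matter how good the correction is:

```python
# Toy sketch (not a proof): a local deviation doubles every `doubling_time` steps;
# a remote controller sees it with round-trip delay `delay` and subtracts exactly
# the stale value it observed. Residual per round = observed * (2**(delay/doubling_time) - 1).
def residual_after_corrections(doubling_time, delay, rounds=10, d0=1.0):
    growth_over_delay = 2 ** (delay / doubling_time)
    d = d0
    for _ in range(rounds):
        grown = d * growth_over_delay   # what the deviation becomes during the delay
        d = grown - d                   # perfect cancellation of the stale observation
    return d

print("delay shorter than doubling time:",
      residual_after_corrections(doubling_time=10, delay=5))   # shrinks toward zero
print("delay longer than doubling time :",
      residual_after_corrections(doubling_time=5, delay=10))   # grows without bound
```

The exponential doubling here merely stands in for any local dynamic that outpaces the correction loop; the threshold sits where the delay equals the doubling time.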

JWS

You seem to have been very taken with Forrest Landry and his views, Remmelt. (Apparently for around 2 years now, judging by your LW history)

My observation is that it leads you to take positions of extreme certainty on issues where there isn't the epistemological warrant for it. (I know you're simply reporting Forrest's work here, but I assume you support it)

For example, in this case you use the term inevitably. You think you can guarantee that this will be the outcome of the future where humans settle on Mars? In his reply to Larks, Forrest makes strong claims about the nature of human civilization thousands of years into the future.[1] And you seem to claim that development of AI to the point of 'self-sufficient learning machinery' will lead to the extinction of humanity with probability 100%.

First, humanity's extinction is 100% on a long-enough timeline, so it's not a very useful claim to make.

Second, Forrest's writing and arguments are very hard to parse. Now, you could claim it's on the reader to thoroughly vet the arguments, but is it not also on a thinker to express their ideas clearly? Constantly telling people to read what they find confusing again, and in more detail, is unlikely to pay dividends.

Third, often your response to criticism or reports of confusion is to shrug it off. In your reflections on Paul Christiano's response to Forrest (see here and follow-on comments, beware it's a bit of a rabbit-hole), you imply that this reaction is mostly bias instead of reasoning. But Christiano and the AI Safety community could simply do the same to you, so I feel no more persuaded of your case by seeing these complaints.

Suffice it to say, I'm sceptical of this (in a similar way to Christiano) regarding the certainty with which you make your claims. Simply put, remember Cromwell's rule:

I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

Having said that, it seems to be a research area that you are passionate about and inspired by, so more power to you. I think it'd potentially be a good tactic for persuasion if you were to present Forrest's arguments in a clearer manner for those who don't share his epistemic/philosophical background, or at least to consider what might make you question the infallibility of Forrest and his perspective.

 

  1. ^ For those who think this also applies to (strong) longtermism, yes it does! I think extinction is probably the only provable long-term intervention that can defeat cluelessness.

1A; re; "on a long enough timeline" and "not a useful claim to make": The timeline indicated in the claim is "up to approx 1000 years".  Insofar as humanity has already been on the earth 100X that, and could presumably be around (assuming the absence of tech) for at least that much longer, the stated claim differential is meaningful.  And then there is life itself, which would presumably last at least another 500 million years, again assuming the counterfactual of a complete absence of tech.  Overall, this feels like a 'tl;dr' that elides important details.

2A; "Hard to parse" does not directly imply "not epistemically warranted".  To state otherwise would be an equivocation.  To make an assessment of whether 'warranted' or not, would be to have parsed the argument(s), and then evaluated the premises, and correctness/appropriateness of the transforms, etc.  Otherwise, until an actual valid parse has been done, all that can be known to the reader (and observers) is that "claims have been made".

Nor does "hard to parse" directly imply 'not valid' (ie; correctness) and/or 'not sound' (ie; relevance).  Code can be hard to read, and still be correctly executed on a computer to accomplish some practical real world purpose.

Also, 'hard to parse' is a presentation concern, and thus a social issue, rather than a logical issue.  Ie, it is more about rhetoric than about reason, rationality, and truth.  While both matter in social process, they matter in different and largely non-overlapping ways.  The concerns stated (and the associated claims) need to be kept separate.  That is merely proper discipline, and anything otherwise is probably just politics (ie, is not really about cooperation in the interest of identifying important relevant truths).  Where possible, certainty is always of interest, for its own sake.  And it cannot be obtained by any form of rhetoric.

1B; In regard to 'strong claims about X' and 'thousands of years into the future', there is nothing inherently impossible, unreasonable, or impractical about any such conjunction.  The heuristic of being a-priori skeptical about any claim in that form is often warranted, but not universally.  The interesting cases are the exceptions.  To assume, without additional examination (parse), that such claims are to be rejected without further review is simply to say "I did not read and evaluate the argument" (and therefore do not know for sure), and not that "the argument is for sure incorrect" (based on what, exactly?).

2B; It is in this sense that Remmelt has shown up differently than most: he spent more than 6 months carefully going over (actually parsing) and challenging -- and attempting to reject -- every single aspect of the arguments I presented.  This happened until he had convinced himself as to their merit -- and not because I convinced him.  Like nearly everyone else, he really, really did not want to be convinced.  He was as skeptical as anyone.  We both had to be patient with one another -- me to write responses, and him to actually read (parse) and think up new (relevant) questions to ask.  This history is largely the reason that he recommends that people read, and do more of their own work -- it is the adult standard he held for himself, and he naturally has that expectation as a bias regarding others.  At this point, even though no one likes the conclusion, we both feel that it is overall better to know an uncomfortable truth than to remain blind.

3A; You ask Remmelt to "reconsider the infallibility of the person" rather than to 'maybe reassess the correctness of the argument'.  The infallibility request seems to hide another subtle equivocation and an ad hominem.  We both know that I am human.  What matters is whether the argument is actually correct/relevant.  We all know that this has very little to do with myself as a person.  To implicitly suggest that he is 'deluded' because you are skeptical (in the absence of actual neutrally interested evaluation) is not really an especially 'truth seeking' action.

3B; We are both aware that my style of writing, argumentation, and conversing is not easy to read (or parse).  It is agreed that this is unfortunate, for everyone, including us.  My (necessarily temporary) offer to write at all (and to give of my time to do so -- to assist with others' understanding or misunderstanding, and/or to add clarity, where possible, etc) is not infinite, indefinite, or without actual cost, effort, and loss.  So I tend to be a bit sparing with where and when, and with whom, I will give such time and attention, and I concentrate my "argumentation efforts" in fewer places, and with fewer people -- usually those who have done a lot of their own work at their own initiative, with patience, clarity, discipline, etc, and who do not show up with various warning flags of motivated reasoning, acts of rhetoric, known logical falsities presented as truths, etc.

My choices with respect to maybe providing rebuttals to any adverse commentary on a public forum, remain my own, on a volunteer basis.  If it happens that I do not choose to respond (or cannot, due to disability), that does not in itself "make" the underlying argument any more or less valid, or relevant, it simply affects whether or not you (the reader, the observer) happen to understand it (as per your own choices regarding your investment of time, etc).  That means, mostly, that if you want clarity, you will probably have to seek it yourself, at your own effort, as Remmelt has done. To at least some extent your skepticism, and how you handle it, is your choice.  We all know that it is all unfortunate, unpleasant -- the whole package of claims -- but that just is the way it is, at this moment, at least for now.  

Adults will do what is necessary, even if it is hard; even if all of the children around them will ever continue to want to live in some sort of easy fantasy.

Hi Forrest, thanks for replying

I want to make it clear that I bear neither you nor Remmelt any ill will at all, and I apologise if my comment gave the opposite impression in any way.

For an actual argument regarding 1A & 1B, I suppose I'd point towards David Deutsch's argument that predicting the future orbit of the planet Earth depends both on our projections of planetary physics (~rock solid) and on the future growth of human knowledge, including social knowledge (~basically impossible). So in the Mars/Earth example, the issues of communication latency and physical differences between the planets would absolutely remain, but human knowledge and social/societal dynamics are harder to predict.

On 2A & 3B, you are absolutely logically correct that arguments being hard to understand has no necessary correlation with their validity or soundness. My concern here is perhaps better phrased as: arguments being hard to parse does affect others' understanding of them. As you mention in 3B, your offer to write comes at a cost (and I appreciate the time you have already spent here), but so does trying to understand your perspective from my (and seemingly other EAs') point of view. There's no free lunch here; increasing knowledge takes energy and effort! But just like you, I am flawed and human, and I can't increase knowledge in all areas at once.

And so, regarding 2B & 3A, I actually really respect Remmelt for diving in and immersing himself in your work, taking the time to communicate with you and understand your perspective, and coming to a conclusion that he doesn't want to. A better phrasing of my final paragraph would maybe have been to suggest that Remmelt think about ways to facilitate an introduction to your style of thinking, to make it easier for those who do want to pursue your perspective, while allowing you to continue working on the issues that you think are most important without being concerned about my opinions or those of the EA Forum/Community more generally (a more obvious statement never was written!).

In any case, I apologise if I came off as overly dismissive, and I sincerely wish both you and Remmelt well.

Hello JWS,

Thank you for the kind reply.  And I basically agree with you.  Communicating clearly is important, and I continue to commit to attempting, as best I can, to do so (assuming I also continue to have the personal energy and time and ability, etc).

Mostly, given my own nature, I have been preferring to attempt to enable other, better community communicators than myself, in 'direct' type messaging to them personally, in ways designed for their specific understanding.  Hence I more often have colleagues (like Remmelt) indirectly post things on my behalf, in their own words, if they choose to do so -- and even more preferably, as their own work (in the case of x-risk particularly) -- simply because that is more often a better way of getting necessary things out there than depending at all on my own reputation and/or basic inability to promote my own work.  This leaves me with more time and opportunity to explore at the edges, though perhaps at the expense of others having easy access to the results in some more understandable manner.  Naturally, this only really works if my enablement of others is actually that -- that they understand, at least sufficiently fully, the reasoning, so that they can adequately defend whatever points (again, on their choice to invest their time in such a thing, based on their values, capabilities, etc), and thus, in turn, enable others to understand, etc.  Usually, I am exploring concepts in places that most other people will ignore for various reasons, even though often these topics end up being very important overall.

In any case, I usually prefer to be less public, and posting on any sort of an open forum like this one is far from my usual habit.  Remmelt, in particular, has elected to keep at least some association of my work with my person, mostly out of regard for my friendship, despite my continued concern that this may end up actually being disadvantageous to him personally. 

This current case -- my posting now, here, directly as myself -- is something of an exception, insofar as it had not really been my intention, even yesterday, to even mention the larger long term concerns I have been having about the whole Earth-Mars thing outside of a few private conversations.  I had, in personal conversation, indicated at some time previously -- months ago -- that I "should document the argument", and it had been something of a side conversation for a while.  Yet somehow, it ended up getting mentioned explicitly on Twitter, by Remmelt, with a brief summary explanation of the logic and a few links to transcriptions of some of my direct voice messages to him.  Now the Earth-Mars conclusion was, all of a sudden, getting some external attention -- rather more than I was ready for -- and I found myself this morning attempting to get at least a mostly better, somewhat fuller version of the reasoning down in writing, to replace the much more informal private voice messages, responses to specific questions, etc.  Hence the linked post.  It did not seem right that anyone else should have to defend a logic so recently given (in contrast with the AI/AGI 'substrate needs' work, which has been discussed in detail at great length, written about extensively, etc).

So my linked post on "the Mars colony problem" was rather quickly assembled, and is not as well written, nor as up to my own standards, as I would normally like; it consists of a bit of a jumble of different conversations all piled together, each of which contains some aspect of the basic through-line of the primary reasoning.  Remmelt wrote the 1st part, and I added the rest.  Even though my internal notes go back years, and are well validated, something like this generally needs more than three hours of my time to document even reasonably well. So, yes, my post needs to be re-written, clarified, made more accessible to a wider range of people, written in less opaque language, with fewer tangents, easier to parse, etc.  Hopefully I will have time for it in the next few weeks.

As such, because there has been less time for anyone other than myself to have sufficient exposure to the underlying logic, basis, and rationale, I am here posting at least a partial defense of it, as myself, rather than attempting to rely on anyone else to do it (because it is right to do, etc).  Given its rather quickly written nature, not discussing all of the cases and conditions, etc, it will probably get more than its fair share of critiques on this (and other) forums where it has ended up getting posted.  At least I will get some feedback on what sort of things I will need to add to make it more defensible.

However, obviously, since there are probably far more people with critical views on each large forum like this, and also people who have more time to post than I have time to answer, there is a very good chance that I will not be able to make the reasoning as clear as I would like, to as many responders as I would need to, and that there will therefore be a lot of misunderstandings left unresolved.  Ie, in any 'intellectual evaluative' public space, it is far more likely that negative reactive emotional judgements will occur, in proportion to both the social scale and the stakes -- which may seem surprising, but actually makes sense when considering the self-definition and skillset(s) of that demographic.

Thus, all that I can ask in the interim is for people to please be at least a little patient and tolerant while we compose some better way of making it more easily understandable why we can actually predict some relevant aspects of what would very likely (indeed necessarily) happen over larger scales/volumes of human social and technical process, over longer time intervals, despite the appearance that this is usually impossible/unreasonable.

Thank you.

To the extent that this is based on game theory, it's probably worth considering that there may well be more than just 2 civilizations (at least over timescales of hundreds or thousands of years).

As well as Earth and Mars, there may be the Moon, Venus, and the moons of Jupiter and Saturn (and potentially others, maybe even giant space stations). As such, any unwarranted attack by one civilization on another might result in responses by the remaining civilizations. That could introduce some sort of deterrent effect on striking first.
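As a hedged extension of the toy payoff sketch earlier in the thread (again an illustrative construction with placeholder payoffs, not a claim about actual values): if at least one additional civilization credibly retaliates against whoever strikes first, the attacker's payoff for a first strike falls below that of restraint, and the weak dominance of striking disappears.

```python
# Hedged extension of the earlier two-player toy model: a third civilization
# credibly retaliates against whoever strikes first.
# Ordinal placeholder payoffs: +1 rival risk removed, 0 restraint, -1 destroyed.
def first_strike_payoff(third_party_retaliation_credible):
    """Attacker's payoff for a successful first strike on one rival."""
    if third_party_retaliation_credible:
        return -1   # the remaining civilization(s) destroy the attacker in response
    return 1        # two-player case: rival risk eliminated, attacker survives

print("no third party     :", first_strike_payoff(False))  #  1 > 0 -> striking tempting
print("credible deterrent :", first_strike_payoff(True))   # -1 < 0 -> restraint preferred
```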
