
The Global Priorities Institute’s research agenda says:

The idea of the long reflection is that of a long period—perhaps tens of thousands of years—during which human civilisation, perhaps with the aid of improved cognitive ability, dedicates itself to working out what is ultimately of value (INFORMAL: MacAskill 2018; Lewis 2018). It may be argued that such a period would be warranted before deciding whether to undertake an irreversible decision of immense importance, such as whether to attempt spreading to the stars. Do we find ourselves, or are we likely to find ourselves, in a situation where a ‘long reflection’ would in fact be warranted? If so, how should it be implemented?

I think this is a fascinating idea. And apparently Toby Ord’s book The Precipice (out today!) and Will MacAskill’s upcoming book may include discussion of the idea, which I’m looking forward to.

But as best I could quickly tell, there were very few publicly accessible sources on the long reflection (at least before Ord’s book). So I thought I’d make a quite unambitious post that just collects all the relevant quotes I found after looking through all the Google hits for “the "long reflection" macaskill” and through all the posts on the EA Forum and LessWrong that came up when I searched "long reflection". At the end, I also list some other related work and concepts.

Please comment to let me know if you’re aware of any other sources which I haven’t mentioned here.

EDIT: There has now been a bit more discussion of the long reflection (including in The Precipice) than there had been when I posted this, so I've started collecting a more complete list of sources in this comment.

80,000 Hours interview with MacAskill

Quote from 80,000 Hours’ summary

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • [others]

Quotes from the interview itself

Will MacAskill: If you really appreciate moral uncertainty, and especially if you look back through the history of human progress, we have just believed so many morally abominable things and been, in fact, very confident in them. [...]

Even for people who really dedicated their lives to trying to work out the moral truths. Aristotle, for example, was incredibly morally committed, incredibly smart, way ahead of his time on many issues, but just thought that slavery was a pre-condition for some people having good things in life. Therefore, it was justified on those grounds. A view that we’d now think of as completely abominable.

That makes us think that, wow, we probably have mistakes similar to that. Really deep mistakes that future generations will look back and think, “This is just a moral travesty that people believed it.” That means, I think, we should place a lot of weight on moral option value and gaining moral information. That means just doing further work in terms of figuring out what’s morally the case. Doing research in moral philosophy, and so on. Studying it for yourself.

Secondly, into the future, ensuring that we keep our options open. I think this provides one additional argument for ensuring that the human race doesn’t go extinct for the next few centuries. It also provides an argument for the sort of instrumental state that we should be trying to get to as a society, which I call the long reflection. We can talk about that more.

Robert Wiblin: Humanity should thrive and grow, and then just turn over entire planets to academic philosophy. Is that the view? I think I’m charitable there.

Will MacAskill: Yeah, obviously the conclusion of a moral philosopher saying, “Moral philosophy is incredibly important” might seem very self-serving, but I think it is straightforwardly the implication you get if you at least endorse the premises of taking moral uncertainty very seriously, and so on. If you think we can at least make some progress on moral philosophy. If you reject that view you have to kind of reject one of the underlying premises.

[...]

Robert Wiblin: Before, you mentioned that if humanity doesn’t go extinct in the future, there might be a lot of time and a lot of people and very educated people who might be able to do a lot more research on this topic and figure out what’s valuable. That was a long reflection. What do you think that would actually look like in practice, ideally?

Will MacAskill: Yeah. The key idea is just, different people have different sets of values. They might have very different views for what does an optimal future look like. What we really want ideally is a convergent goal between different sorts of values so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about purpose of civilization, is just so much disagreement. Maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then, that means we can really get coordination in aiming for that.

I think there is an answer. I call it the long reflection, which is you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is of value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 years is actually absolutely nothing.

In just the same way as if you think as an individual, how much time should you reflect on your own values before choosing your career and committing to one particular path?

Robert Wiblin: Probably at least a few minutes. At least .1% of the whole time.

Will MacAskill: At least a few minutes. Exactly. When you’re thinking about the vastness of the potential future of civilization, the equivalent of just a few minutes is tens of thousands of years.

Then, there’s questions about how exactly do you structure that. I think it would be great if there was more work done really fleshing that out. Perhaps that’s something you’ll have time to do in the near future. One thing you want to do is have as little locked in as possible. So, you want to be very open both on… You don’t want to commit to one particular moral methodology. You just want to commit to things that seem extremely good for basically whatever moral view you might think ends up as correct or what moral epistemology might be correct.

Just people having a higher IQ but everything else being equal, that just seems strictly good. People having greater empirical understanding just seems strictly good. People having a better ability to empathize. That all seems extremely good. People having more time. Having cooperation seems extremely good. Then I think, yeah, like you say, many different people can get behind this one vision for what we want humanity to actually do. That’s potentially exciting because we can coordinate.

It might be that one of the conclusions we come to takes moral uncertainty into account. We might say, actually, there’s some fundamental things that we just can’t ultimately resolve and so we want to do a compromise between them. Maybe that means that for civilization, part of civilization’s devoted to common sense, thick values of pursuit of art, and flourishing, and so on, whereas large parts of the rest of civilization are devoted to other values like pure bliss, blissful state. You can imagine compromise scenarios there. It’s just large amounts of civilization… The universe is a big place.
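
As a quick aside of my own (not part of the interview): the proportions MacAskill and Wiblin gesture at here are easy to check. The figures below are just the round, illustrative numbers from the quote, not careful estimates.

```python
# Back-of-the-envelope check of the proportions in the quote above.
# All inputs are the round, illustrative figures from the interview,
# not careful estimates.

reflection_years = 10_000        # "a mere 10,000 years" of reflection
future_years = 1_000_000_000     # "billions of years" of potential future (lower end)

share_of_future = reflection_years / future_years
print(f"Reflection as a share of the future: {share_of_future:.4%}")  # 0.0010%

# Wiblin's career analogy: the same share of a ~50-year working life
career_minutes = 50 * 365.25 * 24 * 60
print(f"Equivalent reflection before a career choice: {share_of_future * career_minutes:.0f} minutes")
# ~263 minutes -- a few hours; even a million-year reflection would be
# only ~0.1% of a billion-year future.
```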

Quotes from an AI Alignment Podcast interview with MacAskill

Will MacAskill: In terms of answering this alignment problem, the deep one of just where ought societies to be going [working out what’s actually right and what’s actually wrong and what ought we to be doing], I think the key thing is to punt it. The key thing is to get us to a position where we can think about and reflect on this question, and really for a very long time, so I call this the long reflection. Perhaps it’s a period of a million years or something. We’ve got a lot of time on our hands. There’s really not the kind of scarce commodity, so there are various stages to get into that state.

The first is to reduce extinction risks down basically to zero, put us in a position of kind of existential security. The second then is to start developing a society where we can reflect as much as possible and keep as many options open as possible.

Something that wouldn’t be keeping a lot of options open would be, say we’ve solved what I call the control problem, we’ve got these kind of lapdog AIs that are running the economy for us, and we just say, “Well, these are so smart, what we’re gonna do is just tell it, ‘Figure out what’s right and then do that.'” That would really not be keeping our options open. Even though I’m sympathetic to moral realism and so on, I think that would be quite a reckless thing to do.

Instead, what we want to have is something kind of … We’ve gotten to this position of real security. Maybe also along the way, we’ve fixed the various particularly bad problems of the present, poverty and so on, and now what we want to do is just keep our options open as much as possible and then kind of gradually work on improving our moral understanding where if that’s supplemented by AI system …

I think there’s tons of work that I’d love to see developing how this would actually work, but I think the best approach would be to get the artificially intelligent agents to be just doing moral philosophy, giving us arguments, perhaps creating new moral experiences that it thinks can be informative and so on, but letting the actual decision making or judgments about what is right and wrong be left up to us. Or at least have some kind of graduated thing where we gradually transition the decision making more and more from human agents to artificial agents, and maybe that’s over a very long time period.

What I kind of think of as the control problem in that second level alignment problem, those are issues you face when you’re just addressing the question of, “Okay. Well, we’re now gonna have an AI run economy,” but you’re not yet needing to address the question of what’s actually right or wrong. And then my main thing there is just we should get ourselves into a position where we can take as long as we need to answer that question and have as many options open as possible.

Lucas: I guess here given moral uncertainty and other issues, we would also want to factor in issues with astronomical waste into how long we should wait?

Will: Yeah. That’s definitely informing my view, where it’s at least plausible that morality has an aggregative component, and if so, then the sheer vastness of the future may, because we’ve got half a billion to a billion years left on Earth, a hundred trillion years before the stars burn out, and then … I always forget these numbers, but I think like a hundred billion stars in the Milky Way, ten trillion galaxies.

With just vast resources at our disposal, the future could be astronomically good. It could also be astronomically bad. What we want to ensure is that we get to the good outcome, and given the time scales involved, even what seems like an incredibly long delay, like a million years, is actually just very little time indeed.

Lucas: In half a second I want to jump into whether or not this is actually likely to happen given race dynamics and that human beings are kind of crazy. The sort of timeline here is that we’re solving the technical control problem up into and on our way to sort of AGI and what might be superintelligence, and then we are also sort of idealizing everyone’s values and lives in a way such that they have more information and they can think more and have more free time and become idealized versions of themselves, given constraints within issues of values canceling each other out and things that we might end up just deeming to be impermissible.

After that is where this period of long reflection takes place, and sort of the dynamics and mechanics of that are seeming open questions. It seems that first comes computer science and global governance and coordination and strategy issues, and then comes long time of philosophy.

Will: Yeah, then comes the million years of philosophy, so I guess not very surprising a philosopher would suggest this. Then the dynamics of the setup is an interesting question, and a super important one.

One thing you could do is just say, “Well, we’ve got ten billion people alive today, let’s say. We’re gonna divide the universe into ten billionths, so maybe that’s a thousand galaxies each or something.” And then you can trade after that point. I think that would get a pretty good outcome. There’s questions of whether you can enforce it or not into the future. There’s some arguments that you can. But maybe that’s not the optimal process, because especially if you think that “Wow! Maybe there’s actually some answer, something that is correct,” well, maybe a lot of people miss that.

I actually think if we did that and if there is some correct moral view, then I would hope that incredibly well informed people who have this vast amount of time, and perhaps intellectually augmented people and so on who have this vast amount of time to reflect would converge on that answer, and if they didn’t, then that would make me more suspicious of the idea that maybe there is a real fact of the matter. But it’s still early days; we’d really want to think a lot about what goes into the setup of that kind of long reflection.

[The discussion from that point to "If it’s the case that there is a right answer." are also very relevant.]

[See also Rohin Shah's brief summary of and commentary about this interview.]
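
One minor aside of my own on the numbers in the answers above: the "thousand galaxies each" figure follows directly from dividing the roughly ten trillion galaxies MacAskill mentions among ten billion people. A trivial check, using only the interview's rough figures:

```python
# Sanity check of the "thousand galaxies each" division MacAskill sketches.
# Both inputs are the rough figures quoted in the interview, not independent estimates.

galaxies = 10 ** 13   # "ten trillion galaxies"
people = 10 ** 10     # "ten billion people alive today"

print(galaxies // people)  # 1000 -- i.e. "maybe that's a thousand galaxies each"
```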

Cause prioritization for downside-focused value systems by Lukas Gloor

Quote from the article

I’m using the term downside-focused to refer to value systems that in practice (given what we know about the world) primarily recommend working on interventions that make bad things less likely. [...]

By contrast, other moral views place great importance on the potential upsides of very good futures [...] I will call these views upside-focused.

[...]

Some people have argued that even (very) small credences in upside-focused views [which roughly means moral views which place great importance on the potential upsides of very good futures], such as 1-20% for instance, would in itself already speak in favor of making extinction risk reduction a top priority because making sure there will still be decision-makers in the future provides high option value. I think this gives by far too much weight to the argument from option value. Option value does play a role, but not nearly as strong a role as it is sometimes made out to be. To elaborate, let’s look at the argument in more detail: The naive argument from option value says, roughly, that our descendants will be in a much better position to decide than we are, and if suffering-focused ethics or some other downside-focused view is indeed the outcome of their moral deliberations, they can then decide to not colonize space, or only do so in an extremely careful and controlled way. If this picture is correct, there is almost nothing to lose and a lot to gain from making sure that our descendants get to decide how to proceed.

I think this argument to a large extent misses the point, but seeing that even some well-informed effective altruists seem to believe that it is very strong led me to realize that I should write a post explaining the landscape of cause prioritization for downside-focused value systems. The problem with the naive argument from option value is that the decision algorithm that is implicitly being recommended in the argument, namely focusing on extinction risk reduction and leaving moral philosophy (and s-risk reduction in case the outcome is a downside-focused morality) to future generations, makes sure that people follow the implications of downside-focused morality in precisely the one instance where it is least needed, and never otherwise. If the future is going to be controlled by philosophically sophisticated altruists who are also modest and willing to change course given new insights, then most bad futures will already have been averted in that scenario. An outcome where we get long and careful reflection without downsides is far from the only possible outcome. In fact, it does not even seem to me to be the most likely outcome (although others may disagree). No one is most worried about a scenario where epistemically careful thinkers with their heart in the right place control the future; the discussion is instead about whether the probability that things will accidentally go off the rails warrants extra-careful attention. (And it is not as though it looks like we are particularly on the rails currently either.) Reducing non-AI extinction risk does not preserve much option value for downside-focused value systems because most of the expected future suffering probably comes not from scenarios where people deliberately implement a solution they think is best after years of careful reflection, but instead from cases where things unexpectedly pass a point of no return and compassionate forces do not get to have control over the future. Downside risks by action likely loom larger than downside risks by omission, and we are plausibly in a better position to reduce the most pressing downside risks now than later. (In part because “later” may be too late.)

This suggests that if one is uncertain between upside- and downside-focused views, as opposed to being uncertain between all kinds of things except downside-focused views, the argument from option value is much weaker than it is often made out to be. Having said that, non-naively, option value still does upshift the importance of reducing extinction risks quite a bit – just not by an overwhelming degree. In particular, arguments for the importance of option value that do carry force are for instance:

  • There is still some downside risk to reduce after long reflection
  • Our descendants will know more about the world, and crucial considerations in e.g. infinite ethics or anthropics could change the way we think about downside risks (in that we might for instance realize that downside risks by omission loom larger than we thought)
  • One’s adoption of (e.g.) upside-focused views after long reflection may correlate favorably with the expected amount of value or disvalue in the future (meaning: conditional on many people eventually adopting upside-focused views, the future is more valuable according to upside-focused views than it appears during an earlier state of uncertainty)

The discussion about the benefits from option value is interesting and important, and a lot more could be said on both sides. I think it is safe to say that the non-naive case for option value is not strong enough to make extinction risk reduction a top priority given only small credences in upside-focused views, but it does start to become a highly relevant consideration once the credences become reasonably large. Having said that, one can also make a case that improving the quality of the future (more happiness/value and less suffering/disvalue) conditional on humanity not going extinct is probably going to be at least as important for upside-focused views and is more robust under population ethical uncertainty – which speaks particularly in favor of highly prioritizing existential risk reduction through AI policy and AI alignment.

My commentary

Much of the rest of that article is also somewhat relevant to the concept of the long reflection.

From memory, I think somewhat similar points are made in the interesting post The expected value of extinction risk reduction is positive, though that post doesn’t use the term “long reflection”.

Other places where the term was used in a relevant way

These are sources that explicitly refer to the concept of the long reflection, but which essentially just repeat parts of what the above quotes already say:

These are sources which may say something new about the concept, but which I haven’t read properly, so I don’t want to risk misleadingly pulling quotes from them out of context:

Some other somewhat relevant concepts

  • Bostrom’s concept of technological maturity: “the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.”
  • “Stably good futures”: “those where society has achieved enough wisdom and coordination to guarantee the future against existential risks and other dystopian outcomes, perhaps with the aid of Friendly AI (FAI).”
    • The post contrasts this against “Stably bad futures (‘bad outcomes’)[, which] are those where existential catastrophe has occurred.”
  • Option value

I hope you’ve found this post useful. Hopefully Toby Ord’s book and/or Will MacAskill’s book will provide a more comprehensive, detailed discussion of the concept, in which case this post can serve just as a record of how the concept was discussed in its early days. I’d also be interested to see EA Forum users writing up their own fleshed out versions of, critiques of, or thoughts on the long reflection, either as comments here or as their own posts.

And as I said earlier, please comment to let me know if you’re aware of any other relevant sources which I haven’t mentioned here.

Comments

Collection of sources that are highly relevant to the idea of the Long Reflection

(The differences between this comment and the post are that I'll keep this comment up to date, it will just list sources without including quotes, and it won't include some of the less relevant sources because there's now more work on the Long Reflection than there was when I made this post.)

In the new 80,000 Hours interview with Toby Ord, Arden Koehler asks:

Arden Koehler: So I’m curious about this second stage: the long reflection. It felt, in the book, like this was basically sitting around and doing moral philosophy. Maybe lots of science and other things and calmly figuring out, how can we most flourish in the future? I’m wondering whether it’s more likely to just look like politics? So you might think if we come to have this big general conversation about how the world should be, our most big general public conversation right now is a political conversation that has a lot of problems. People become very tribal and it’s just not an ideal discourse, let’s say. How likely is it do you think that the long reflection will end up looking more like that? And is that okay? What do you think about that?

Ord then gives a lengthy answer, with the following portion the most directly responsive:

Toby Ord: . . . I think that the political discourse these days is very poor and definitely doesn’t live up to the kinds of standards that I loftily suggest it would need to live up to, trying to actually track the truth and to reach a consensus that stands the test of time that’s not just a political battle between people based on the current levels of power today, at the point where they’ll stop fighting, but rather the kind of thing that you expect people in a thousand years to agree with. I think there’s a very high standard and I think that we’d have [to] try very hard to have a good public conversation about it.

Initial response: Ooh, there's a new 80k episode?! And it's with Toby Ord?! [visibly excited, rushes to phone]

Secondary response: Thanks for sharing that! Sounds like, as hoped, his book will provide and prompt a more detailed discussion of this idea than there's been so far. I look forward to gobbling that up.

Thanks for collecting these!

The GPI Agenda mentions "Greg Lewis, The not-so-Long Reflection?, 2018" though as of six months ago that piece was in draft form and not publicly available.


I'm slightly confused about the long reflection.

I understand it involves "maybe <...> 10 billion people, debating and working on these issues for 10,000 years". And *only after that* can people consider actions which may have a long term impact on humanity.

How do we ensure that

(a) everyone gets involved with working on these issues? (presumably some people are just not interested in thinking about this? Getting people to work on things they're unsuited for seems unhelpful and unpleasant)

(b) Actions that could have a long term impact on humanity could be taken unilaterally. How could people be stopped from doing that?

I think a totalitarian worldwide government could achieve this, but I assume that's not what is intended

On (b): The first thing to note is that the Long Reflection doesn't require stopping any actions "that could have a long term impact", and certainly not stopping people considering such actions. (I assume by "consider" you meant "consider doing it this year", or something like that?)

It requires stopping people taking actions that we're not yet confident won't turn out to have been major, irreversible mistakes. So people could still do things we're already very confident are good, or things that are relatively minor.

Some good stuff from The Precipice on this, mainly from footnotes:

The ultimate aim of the Long Reflection would be to achieve a final answer to the question of which is the best kind of future for humanity. [...]
We would not need to fully complete this process before moving forward. What is essential is to be sufficiently confident in the broad shape of what we are aiming at before taking each bold and potentially irreversible action - each action that could plausibly lock in substantial aspects of our future trajectory.

Also:

We might adopt the guiding principle of minimising lock-in. Or to avoid the double negative, of preserving our options.
[Endnote:] Note that even on this view options can be instrumentally bad if they would close off many other options. So there would be instrumental value to closing off such options (for example, the option of deliberately causing our own extinction). One might thus conclude that the only thing we should lock in is the minimisation of lock-in.
This is an elegant and reasonable principle, but could probably be improved upon by simply delaying our ability to choose such options, or making them require a large supermajority (techniques that are often used when setting up binding multiparty agreements such as constitutions and contracts). That way we help avoid going extinct by accident (a clear failing of wisdom in any society), while still allowing for the unlikely possibility that we later come to realise our extinction would be for the best.

Also:

There may yet be ethical questions about our longterm future which demand even more urgency than existential security, so that they can’t be left until later. These would be important to find and should be explored concurrently with achieving existential security.

Somewhat less relevant:

Protecting our potential (and thus existential security more generally) involves locking in a commitment to avoid existential catastrophe. Seen in this light, there is an interesting tension with the idea of minimising lock-in (here [link]). What is happening is that we can best minimise overall lock-in (coming from existential risks) by locking in a small amount of other constraints.
But we should still be extremely careful locking anything in, as we might risk cutting off what would have turned out to be the best option. One option would be to not strictly lock in our commitment to avoid existential risk (e.g. by keeping total risk to a strict budget across all future centuries), but instead to make a slightly softer commitment that is merely very difficult to overturn. Constitutions are a good example, typically allowing for changes at later dates, but setting a very high bar to achieving this.

With this in mind, we can tweak your question to "Some actions that could turn out to be major, irreversible mistakes from the perspective of the long-term future could be taken unilaterally. How could people be stopped from doing that during the Long Reflection?"

This ends up being roughly equivalent to the question "How could we get existential risk per year low enough that we can be confident of maintaining our potential for the entire duration of the Long Reflection (without having to take actions like locking in our best guess to avoid being preempted by something worse)?"

I don't think anyone has a detailed answer to that. But one sort-of promising thing is that we may have to end up with some decent ideas of answers to that in order to just avoid existential catastrophe in the first place. I.e., conditional on humanity getting to a Long Reflection process, my credence that humanity has good answers to those sorts of problems is higher than my current credence on that matter.

(This is also something I plan to discuss a bit more in those upcoming(ish) drafts.)

I think being left slightly confused about the long reflection after reading these quotes is quite understandable. These quotes don't add up to a sufficiently detailed treatment of the topic.

Luckily, since I posted this, Toby Ord gave a somewhat more detailed treatment in Chapter 7 of The Precipice, as well as in his 80k interview. These sources provide Ord's brief thoughts on roughly the questions you raise. Though I still think more work needs to be done here, including on matters related to your question (b). I've got some drafts coming up which will discuss similar matters, and hopefully MacAskill's book on longtermism will go into more detail on the topic as a whole.

On (a): I don't think everyone should be working on these questions, nor does Ord. I'd guess MacAskill doesn't, though I'm not sure. He might mean something like "the 10 billion people interested and suited to this work, out of the 20+ billion people alive per generation at that point", or "this is one of the major tasks being undertaken by humanity, with 10 billion people per generation thus contributing at least indirectly, e.g. by keeping the economy moving".

I also suspect we should, or at least will, spend under 10,000 years on this (even if we get our act together regarding existential risks).

Ord writes in The Precipice:

It is unclear [exactly how] long such a period of reflection would need to be. My guess is that it would be worth spending centuries (or more) before embarking on major irreversible changes to our future - committing ourselves to one vision or another. This may sound like a long time from our perspective, but life and progress in most areas would not be put on hold. Something like the Renaissance may be a useful example to bear in mind, with intellectual projects spanning several centuries and many fields of endeavour. If one is thinking about extremely longterm projects, such as whether and how we should settle other galaxies (which would take millions of years to reach), then I think we could stand to spend even longer making sure we are reaching the right decision.

but just thought that slavery was a pre-condition for some people having good things in life. Therefore, it was justified on those grounds.

Rot13

Gung vf pyrneyl n centzngvp qrpvfvba onfrq ba gur fbpvrgl ur jnf va. Svefgyl, gur fynirel nf cenpgvfrq va napvrag Terrpr jnf bsgra zhpu yrff pehry guna pbybavny fynirel. Tvira gung nyybjvat gur fynir gb znxr gurve bja jnl va gur jbeyq, rneavat zbarl ubjrire gurl fnj svg, naq gura chggvat n cevpr ba gur fynirf serrqbz jnf pbzzba cenpgvpr, gung znxrf fbzr cenpgvprf gung jrer pnyyrq fynirel bs gur gvzr ybbx abg gung qvssrerag sebz qrog.


Frpbaqyl, ur whfgvslf vg ol bgure crbcyr univat avpr guvatf, vs fubja gur cbjre bs zbqrea cebqhpgvba yvarf sbe znxvat avpr guvatf jvgubhg fynirel, ur jbhyq unir cebonoyl nterrq gung gung jnf n orggre fbyhgvba. Rira zber fb vs fubja fbzr NV anabgrpu gung pbhyq zntvp hc nal avpr guvat.

Guveqyl, zbfg fpvragvfgf hagvy gur ynfg srj uhaqerq lrnef jrer snveyl ryvgr fbpvnyyl. Zrzoref bs fbzr hccre pynff jub pbhyq nssbeq gb rkcrevzrag engure guna jbex. Tvira gur ynetr orarsvg gurl unq, guvf ybbxf yvxr n zhpu ynetre fbhepr bs hgvyvgl guna gur qverpg avprarff bs univat avpr guvatf.
V qba'g guvax gur qrpvfvba ur znqr jnf haernfbanoyr, tvira gur fbpvny pbagrkg naq vasbezngvba ninvynoyr gb uvz ng gur gvzr.

Why Rot13? This seems like an interesting discussion to be had.

The rot13 is to make it harder to search for. I think that this is a discussion that would be easy to misinterpret as saying something offensive.

(Redundant comment)

[This comment is no longer endorsed by its author]