Czynski


FLI launches Worldbuilding Contest with $100,000 in prizes

Even under a slow takeoff! If there is recursive self-improvement at work at all, on any scale, you wouldn't see anything like this. You'd see moderate-to-major disruptions in geopolitics, and many or all technology sectors being revolutionized simultaneously.

This scenario is "no takeoff at all" - advancement happening only at the speed of economic growth.

FLI launches Worldbuilding Contest with $100,000 in prizes

A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge's Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.

FLI launches Worldbuilding Contest with $100,000 in prizes

These goals are not good goals.

  • Encourage people to start thinking about the future in more positive terms.

It is actively harmful for people to start thinking about the future in more positive terms, if those terms are misleading and unrealistic. The contest ground rules frame "positive terms" as being familiar, not just good in the abstract - the imagined futures cannot be good but scary, as any true good outcome must be. See Eutopia is Scary:

We, in our time, think our life has improved in the last two or three hundred years.  Ben Franklin is probably smart and forward-looking enough to agree that life has improved.  But if you don't think Ben Franklin would be amazed, disgusted, and frightened, then I think you far overestimate the "normality" of your own time.

  • Receive inspiration for our real-world policy efforts and future projects to run / fund.

It is actively harmful to take fictional evidence as inspiration for what projects are worth pursuing. This would be true even if the fiction were not constrained to be unrealistic and unattainable, but this contest is constrained in that way, which makes it much worse.

  • Identify potential collaborators from outside of our existing network.

Again, a search which is specifically biased to have bad input data is going to be harmful, not helpful.

  • Update our messaging strategy.

Your explicit goal here is to look for 'positive', meaning 'non-scary', futures to try to communicate. This is lying - no such future is plausible, and it's unclear whether any is even possible in theory. You say

not enough effort goes into thinking about what a good future with (e.g.) artificial general intelligence could look like

but this is not true. Lots of effort goes into thinking about it. You just don't like the results, because they're either low-quality (failing in all the old ways utopias fail) or they are high-quality and therefore appropriately terrifying.

The best result I can picture emerging from this contest is for the people running the contest to realize the utter futility of the approach they were targeting and change tack entirely. I'm unsure whether I hope that comes with some resignations: this was a really, spectacularly terrible idea, and that would tend to imply some drastic action in response, but on the other hand I'd hope FLI's team is capable of learning from its mistakes better than most.

FLI launches Worldbuilding Contest with $100,000 in prizes

This project will give people an unrealistically familiar and tame picture of the future. Eutopia is Scary, and the most unrealistic view of the future is not the dystopia, nor the utopia, but the one which looks normal.[1] The contest ground rules require, if not in so many words, that all submissions look normal. Anything which obeys these ground rules is wrong. Implausible, unattainable, dangerously misleading; bad, overconfident, reckless, arrogant, wrong, bad.

This is harmful, not helpful; it is damaging, not improving, the risk messaging; endorsing any such view of the future is lying. At best it's merely lying to the public - it runs the risk of a much worse outcome: lying to yourselves.


The ground rules present a very narrow target. The geopolitical constraints state that the world can't substantially change in its form of governance or degree of state power. AI may not trigger any world-shaking social change. AGI must exist for 5+ years without rendering the world unrecognizable. These constraints are (intentionally, I believe) incompatible with a hard-takeoff AGI, but they also rule out any weaker form of recursive self-improvement. This essentially mandates a Hansonian view of AI progress.

I would summarize that view as:

  • Everything is highly incremental
  • Progress in each AI-relevant field depends on incorporating insights from disparate fields
  • AI progress consists primarily of integrating skill-specialized modules
  • Many distinct sources develop such modules and peer projects borrow from each other
  • Immediately following the first AGI, many AGIs exist and are essentially peers in capability
  • AI progress is slow overall

This has multiple serious problems.
One, it's implausible in light of the nature of ML progress to date; the most significant achievements have nearly all come from a single source, DeepMind, and propagated outward from there.
Two, it doesn't lead to a future dominated by AGI - as Hanson explicitly extrapolated previously, it leads to an Age of Em, where uploads, not AGI, are the pivotal digital minds.

Which means that a proper prediction along these lines will fail to meet the stated criteria, because

Technology is advancing rapidly and AI is transforming the world sector by sector.

will not be true - AI will not be transformative here.

With all that in mind, I would encourage anyone making a submission to flout the ground rules and aim for a truly plausible world. This would necessarily break all three of

  • The US, the EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.

since those all require a geopolitical environment which is similar to the present day. It would probably also have to violate

  • Technology is advancing rapidly and AI is transforming the world sector by sector.

If we want a possible vision of the future, it must not look like that.

  1. ^

    I am quoting this from somewhere, probably the Sequences, but I cannot find the original source or wording.

Comments for shorter Cold Takes pieces

Very few people actually want to wirehead. Pleasure center stimulation is not the primary thing we value. The broader point there is the complexity of value thesis.

Comments for shorter Cold Takes pieces

For a realistic but largely utopian near-future setting, I recommend Rainbows End by Vernor Vinge. Much of the plot involves a weak and possibly immersion-breaking take on AGI, but in terms of forecasting a near-future world where most problems have become substantially more superficial and mild, the background events and supporting material are very good.

[Creative Writing Contest] The Screams of Hell

Dimensional travel, in my head, but this is allegory; the details are intentionally unspecified. I worked on making the literalness more plausible without outright lying to the reader, but it's a hard needle to thread.

 

The conclusion is not as strong as I'd like, but illusion of transparency is real, so I'm leery of completely removing the didactic quality. It's much subtler than the Fable of the Dragon Tyrant already, and that one works well (though I think it would be better if it were less of an anvil-drop).

[Creative Writing Contest] The Screams of Hell

On which level? There are two intended morals here - one is the analogy to global poverty and open borders; the wonderful world is the West and Hell is the Third World. The other is the explicit one in the last sentence: what problems in the world are you missing, simply because they don't affect your life and are therefore easy to overlook? And particularly the point that it doesn't take anything special to notice - just someone without preconceptions who sees it and then refuses to look away.

The particular choice of analogy is inspired by Unsong.

Setting Community Norms and Values: A response to the InIn Open Letter

The only concrete change specified here is something you've previously claimed to already do. This is yet one more instance of you not actually changing your behavior when sanctioned.