Hamish McDoodles

718 karma · Joined Sep 2022

Comments (49)

Beaut. Thanks for the detailed feedback!

I think these suggestions make sense to implement immediately:

  • add boilerplate disclaimer about accuracy / fabrication
  • links to author pages
  • note on reading time
  • group by tags
  • "Lesswrong" -> "LessWrong"
  • The summaries are in fact generated within a Google sheet, so it does make sense to add a link to that

These things will require a bit of experimentation but are good suggestions:

  • Agree on the tone being boring. I can think of a couple of fixes:
    • Prompt GPT to be more succinct to get rid of low-information nonsense
    • Prompt GPT to do bullet points rather than paragraphs (rough sketch of the prompt tweak after this list)
    • Generate little poems to introduce sections
  • Think about cross-pollinating with Type III Audio
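
For what it's worth, here is a minimal sketch of what the "succinct, bullet points" prompt change could look like. The summaries actually get generated inside the Google sheet, so this isn't the real pipeline; the model name, prompt wording, and use of the OpenAI Python client are all just illustrative assumptions.

```python
# Sketch only: the real summaries are generated inside a Google sheet.
# Model name and prompt wording are placeholders, not the actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise(post_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the forum post in at most 5 short bullet points. "
                    "Be succinct: no filler, no restating the title, no hedging."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content
```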

I've identified the source of the problem and fixed it, thanks!

re: the biosecurity map

did you realise that the AIS map is just pulling all the coordinates, descriptions, etc. from a Google sheet?

if you've already got a list of orgs and stuff, it's not hard to turn it into a map like the AIS one: copy the code, draw a new background, and swap out the URL of the spreadsheet
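
To give a feel for the "everything comes from the spreadsheet" part, here's a rough Python sketch, not the actual AIS map code. The sheet ID and the column names (name, x, y) are made up; the real map has its own spreadsheet URL and its own front-end rendering.

```python
# Rough sketch of a map driven by a published Google sheet.
# SHEET_ID and the column names below are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

SHEET_ID = "YOUR_SHEET_ID"  # placeholder
GID = "0"                   # tab within the spreadsheet
CSV_URL = (
    f"https://docs.google.com/spreadsheets/d/{SHEET_ID}"
    f"/export?format=csv&gid={GID}"
)

# Assumed layout: one row per org, with a name plus x/y map coordinates.
orgs = pd.read_csv(CSV_URL)

fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(orgs["x"], orgs["y"])
for _, row in orgs.iterrows():
    ax.annotate(row["name"], (row["x"], row["y"]), fontsize=8)
ax.set_title("Orgs (data pulled live from the spreadsheet)")
plt.show()
```

Swapping in your biosecurity org list is then just a matter of pointing CSV_URL at a different sheet and redrawing the background image.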

oh this is a cool and useful resource

ty for the mention

This is what I meant, yeah.

There's also an issue of "low probability" meaning fundamentally different things in the case of AI doom vs supervolcanoes.

P(supervolcano doom) > 0 is a frequentist statement. "We know from past observations that supervolcano doom happens with some (low) frequency." This is a fact about the territory.

P(AI doom) > 0 is a Bayesian statement. "Given our current state of knowledge, it's possible we live in a world where AI doom happens." This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we're just confused and there's no way this could ever actually happen.
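
To make the contrast concrete, here's a toy calculation. Every number in it is invented purely for illustration; none of them come from the comment above or from any real estimate.

```python
# Toy numbers only, to illustrate the map/territory distinction.

# Frequentist-style: derived from an observed base rate (a fact about the territory).
supereruptions_per_century = 0.01       # assumed historical frequency
p_eruption_is_doom = 0.1                # assumed chance an eruption ends civilisation
p_supervolcano_doom = supereruptions_per_century * p_eruption_is_doom
print(f"P(supervolcano doom this century) ~ {p_supervolcano_doom:.4f}")

# Bayesian-style: credence spread over world-models (a fact about our map).
# Either AI doom is genuinely possible, or the whole threat model is confused.
p_threat_model_coherent = 0.3           # assumed credence the scenario is even possible
p_doom_given_coherent = 0.2             # assumed chance of doom if it is
p_ai_doom = p_threat_model_coherent * p_doom_given_coherent
print(f"P(AI doom) ~ {p_ai_doom:.2f}")
```

The first number could in principle be checked against more geological data; the second one only changes when our state of knowledge does.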

I have a master's degree in machine learning and I've been thinking a lot about this for like 6 years, and here's how it looks to me:

  • AI is playing out in a totally different way to the doomy scenarios Bostrom and Yudkowsky warned about
  • AI doomers tend to hang out together and reinforce each other's extreme views
  • I think rationalists and EAs can easily have their whole lives nerd-sniped by plausible but ultimately specious ideas
  • I don't expect any radical discontinuities in the near-term future. The world will broadly continue as normal, only faster.
  • Some problems will get worse as they get faster. Some good things will get better as they get faster. Some things will get weirder in a way where it's not clear if they're better or worse.
  • Some bad stuff will probably happen. Bad stuff has always happened. So it goes.
  • It's plausible humans will go extinct from AI. It's also plausible humans will go extinct from supervolcanoes. So it goes.

I’m paralysed by the thought that I really can’t do anything about it.

IMO, a lot of people in the AI safety world are making a lot of preventable mistakes, and there's a lot of value in making the scene more legible. If you're a content writer, then honestly trying to understand what's going on and communicating your evolving understanding is actually pretty valuable. Just write more posts like this.
