I create quizzes on humanity’s biggest problems (e.g., global health, animal welfare, existential risk from AI, pandemics & nuclear winter).
Could you give feedback on my new Long-Term Future quiz? I can share the Google Doc.
Interesting! One idea they could expand on is that spreading to other stars would mean that the probe we send could later come back to kill us all. Basically, "humans" or probes on other stars would evolve differently from us, and it would take crazy long periods of time to communicate with them. It would be near impossible to coordinate an interstellar civilization, even with light-speed travel.
I recently searched "solar sails" on YouTube and saw no Kurzgesagt-style animation on the topic. It could be an interesting idea!
Great idea, I'm curious to know how it goes! :)

Best,
André
Thanks for the analysis! After listening to many students: what would you do as Superman in 24 hours?
You’ve probably already seen it but linking it just in case: the Future Perfect 50
Vox released their Future Perfect 50 a week ago, a list of impressive people building a better future:
Hey, I’m going to Web Summit in Lisbon next week. Not sure if they’re still selling tickets, but it’s a 70,000-person conference and the list of speakers is impressive: https://websummit.com/speakers
Thanks for the links, Rodeo. I appreciate your effort to answer my questions. :)
I can add the number of concerned AI researchers in an answer explanation - thanks for that!
I have a limited number of questions I can fit into the quiz, so I would have to sacrifice other questions to include the one on HLMI vs. transformative AI. Also, Holden's transformative AI timeline seems to match the 2022 expert survey on HLMI (2060), so I think one timeline question should do the trick.
I'm considering just writing "Artificial General Intelligence," which is similar to HLMI, because it's the most easily recognizable term for a large audience.
Hey Rodeo, glad you enjoyed the three quizzes!
Thank you for your feedback. I'll pass it on to GuidedTrack, where I host the program. For now, there's a completion bar at the top, but it's a bit thin and doesn't have numbers.
I saw that you work in AI Safety, so maybe you can help me clear up two doubts: