AndreFerretti

Student of questions @ Quizmanity.org
Working (0-5 years experience)

Bio

I create quizzes on humanity’s biggest problems (e.g., global health, animal welfare, existential risk from AI, pandemics & nuclear winter).

How others can help me

Give feedback on my new Long-Term Future quiz; I can share the Google Doc.

Comments (60)

Interesting! One idea they could expand on is that spreading to other stars would mean that the probe we send could later come back to kill us all. Basically, "humans" or probes on other stars would evolve differently from us, and it would take extremely long periods of time to communicate with them. It would be nearly impossible to coordinate an interstellar civilization, even with light-speed travel.

I recently searched "solar sails" on YouTube and saw no Kurzgesagt-like animation on the topic. It could be an interesting idea!

Great idea, I'm curious to know how it goes! :)
Best, André

Thanks for the analysis! After listening to many students: what would you do as Superman in 24h?

Hey, I’m going to Web Summit in Lisbon next week. Not sure if they’re still selling tickets, but it’s a 70,000-person conference and the list of speakers is impressive: https://websummit.com/speakers

Thanks for the links, Rodeo. I appreciate your effort to answer my questions. :)

I can add the number of concerned AI researchers to an answer explanation - thanks for that!

I have a limited number of questions I can fit into the quiz, so I would have to sacrifice other questions to include the one on HLMI vs. transformative AI. Also, it seems that Holden's transformative AI timeline matches the 2022 expert survey on HLMI (2060), so I think one timeline question should do the trick.

I'm considering just writing "Artificial General Intelligence," which is similar to HLMI, because it's the most easily recognizable term for a large audience.

Hey Rodeo, glad you enjoyed the three quizzes! 

Thank you for your feedback. I'll pass it on to GuidedTrack, where I host the program. For now, there's a completion bar at the top, but it's a bit thin and doesn't have numbers.

I saw that you work in AI Safety, so maybe you can help me clear up two doubts:

  • Do AI expert surveys still predict a 50% chance of transformative AI by 2060? (A "transformative AI" would automate all activities needed to speed up scientific and technological progress.)
  • Is it right to phrase the question above as "transformative AI"? Or should I call it AGI and give it a different definition? I took the "transformative AI" term and the 2060 timeline from Holden Karnofsky.