All of AndreFerretti's Comments + Replies

Thanks for the insights! While reading your post, I noticed it lacked a summary — so I've distilled your key findings below. Feel free to add it to the original post if useful.


Unofficial Executive Summary

Clearer Thinking ran a study on 500 people exploring the relationship between anxiety and depression, which have a surprisingly high correlation (r=0.82). In short, anxiety reflects worry about potential future adversities, while depression is the feeling of not being able to experience a positive, meaningful life.

Despite these differences, anxiety and dep... (read more)

I've set up a Manifold market for each of the 12 policy ideas discussed in the post, thanks to Michael Chen's idea (Manifold uses collective wisdom to estimate the likelihood of events). You can visit the markets here and bet on whether the US will adopt these ideas by 2028. So go ahead and place your bets, because who said politics can't be a bit of a gamble?
 

Great job! The design is impressively sleek. I wish I had this dashboard a few months ago when I was coming up with questions for an AI quiz. Congratulations.

Ciao Gio, it's great that you're into this topic! Check out the "Suggestions" part of the post for ideas on juggling innovation and safety. Chris has a point about being careful with open-sourcing advanced AI research. Plus, it'd be great if open-source teams created and shared their alignment studies. Who knows, maybe collaborating on alignment research will lead us to the next big breakthrough in AI. ;)

"a white-collar worker alone in an office, 3 monitors full of text, a Great Wave Off Kanagawa crashes against the window, raining inside, extreme detail, bright and vibrant colours --v 4 --ar 3:2" 

This was one of my first images on Midjourney, now my prompts are much simpler :)

Great list, thanks for sharing! I'm grateful for the inspiration you've given me — I created Quizmanity.org by working on one of your ideas ;)

By the way, a cousin of the achievements ledger is the AI Safety world map, which shows all the organizations working to reduce existential risk from AI.

'The Humanity Times' is a brilliant name! I've previously designed a front page that reflects a similar concept:

1
OdinMB
1y
I just started a project along similar lines called Actually Relevant. You can see a first prototype at https://actuallyrelevant.news/. Would love to be in touch with both you and @finm to discuss it! Please send a PM if you're interested.

Very useful! Instead of re-reading the longer explanation of value lock-in from the book, I found this brief explanation here, and it was just what I needed :)

Makes sense, thanks for your comment. You made me think that I should be more careful about the terms I use, and argue more from first principles. I'll try doing this here:

I'm concerned about the growing trend of people and social-media platforms suppressing opposing opinions. I would love a world where people are free to speak their minds without fear of cancellation. If Big Tech and the government dictate what can and cannot be said, then everyone says the same things to avoid the risk of being banned from online platforms. To advance science and maintain freedom, you need to let people express innovative and unconventional ideas, which seem crazy at first and require free speech.

Thank you for your response, Peter. Though I was overly dramatic, the point was that cancel culture harms freedom of speech, without which there is no scientific progress or democracy. Burner accounts may be a symptom of this.

4
Peter Wildeford
1y
I just don't think "cancel culture" is at all a helpful concept. I think the concept tends to take a wide spectrum of critiques - many of which are reasonable and many of which are not - and shrink them into a package that can allow all critiques to be easily dismissed. That is, I think "cancel culture" is a fully general counterargument and would encourage people to taboo the phrase.
4
Peter Wildeford
1y
I downvoted both of these because I found them overdramatic and unconstructive. But you're not cancelled by any means - lots of people (myself included) have bad takes and you are certainly very free to continue to speak on the EA Forum and I certainly won't hold this against you (or to be honest even remember it).

On the other hand, you never miss a Forum post ;)

Yes it is! I also mentioned it in the post :)

3
Lorenzo Buonanno
1y
I have no idea how I had missed it, sorry🤦

Hey Cullen! Unfortunately, this is just an image that I designed and it's not a real feature

9
Lorenzo Buonanno
1y
Do you think this is similar? https://nuclearsecrecy.com/nukemap/ 

Yes, I think the quality of the prompt is everything when it comes to output quality. You could give it one of your previous scripts and ask it to make a new one on topic X. I also found ChatGPT to be a great brainstormer. For example, you could feed it your existing video titles and ask it to suggest 5 additional topics.

Have you tried writing scripts with ChatGPT? If so, how would you rate the results?

2
Writer
1y
I've done a couple of half-hearted tries, and the results have been very distant from something I'd use for the channel. I should try again with more effort.

Interesting! One idea they could expand on is that spreading to other stars would mean that the probe we send could later come back to kill us all. Basically, "humans" or probes on other stars would evolve differently from us, and it would take crazy long periods of time to communicate with them. It would be near impossible to coordinate an interstellar civilization, even with light-speed travel.

I recently searched "solar sails" on YouTube and saw no Kurzgesagt-like animation on the topic. It could be an interesting idea!

Great idea, I'm curious to know how it goes! :)
Best, André

Thanks for the analysis! After listening to many students: what would you do as Superman in 24h?

2
Stephen Thomas
1y
This is helpful, thanks!

Hey, I’m going to Web Summit in Lisbon next week. Not sure if they’re still selling tickets, but it’s a 70,000-person conference and the list of speakers is impressive: https://websummit.com/speakers

1
elteerkers
1y
thanks! will have a look!

Thanks for the links, Rodeo. I appreciate your effort to answer my questions. :)

I can add the number of concerned AI researchers in an answer explanation - thanks for that! 

I have a limited amount of questions I can fit into the quiz, so I would have to sacrifice other questions to include the one on HLMI vs. transformative AI. Also, it seems that Holden's transformative AI timeline is the same as the 2022 expert survey on HLMI (2060). So I think one timeline question should do the trick. 

I'm considering just writing "Artificial General Intelligence," which is similar to HLMI, because it's the most easily recognizable term for a large audience.

2
QubitSwarm99
1y
Glad to hear that the links were useful! Keeping by Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it is officially released!

Hey Rodeo, glad you enjoyed the three quizzes! 

Thank you for your feedback. I'll pass it to Guided Track, where I host the program. For now, there's a completion bar at the top, but it's a bit thin and doesn't have numbers. 

I saw that you work in AI Safety, so maybe you can help me clear two doubts: 

  • Do AI expert surveys still predict a 50% chance of transformative AI by 2060? (a "transformative AI" would automate all activities needed to speed up scientific and technological progress).
  • Is it right to phrase the question above as "transformati
... (read more)
3
QubitSwarm99
2y
I am not the best person to ask this question (@so8res, @katja_grace, @holdenkarnofsky) but I will try to offer some points.

  • These links should be quite useful:
    • 2022 Expert Survey on Progress in AI
    • What do ML researchers think about AI in 2022? (37 years until a 50% chance of HLMI)
    • LW Wiki - AI Timelines (e.g., roughly 15% chance of transformative AI by 2036 and ~75% of AGI by 2032)
    • (somewhat less useful) LW Wiki - Transformative AI; LW Wiki - Technological forecasting
  • I don't know of any recent AI expert surveys for transformative AI timelines specifically, but have pointed you to very recent ones on human-level machine intelligence and AGI.
  • For comprehensiveness, I think you should cover both transformative AI (AI that precipitates a change of equal or greater magnitude to the agricultural or industrial revolution) and HLMI. I have yet to read Holden's AI Timelines post, but believe it's likely a good resource to defer to, given Holden's epistemic track record, so I think you should use this for the transformative AI timelines. For the HLMI timelines, I think you should use the 2022 expert survey (the first link).
  • Additionally, if you trust that a techno-optimist-leaning crowd's forecasting accuracy generalizes to AI timelines, then it might be worth checking out Metaculus as well:
    • the community there has an IQR forecast of (2030, 2041, 2075) for "When will the first general AI system be devised, tested, and publicly announced?"
    • the uniform median forecast is 54% for "Will there be human/machine intelligence parity by 2040?"
  • Lastly, I think it might be useful to ask under the existential risk section what percentage of ML/AI researchers think AI safety research should be prioritized (from the survey: "The median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents

Hey Geoffrey, I'm a fan of yours on Twitter. I'm glad you liked the quiz! Have a great day :)

You forgot to add one of my favorite infographics! ;)

Cool, thanks! Who made this deck in Will’s team?

4
freedomandutility
2y
I would still disagree with you haha, but never mind.

Thanks for writing this. I think the communist ideology of "Tax good. Billionaires bad" is ridiculous.  I prefer, "Donating good.  Founding companies good. Bureaucrats bad."

I don’t think communist ideology is relevant here.

The ideologies at fault are:

  1. nationalist ideology which barely values foreign lives or interests

  2. naive statist ideology which overestimates how effective, democratic and good governments are

In fact, I think communists and European socialists are much less likely to hold these views than the mainstream American left.

Personally, I think “billionaires in general bad” too, but also “governments of rich countries, and in particular government of America, worse”.

Check Effective Crypto: https://www.effectivecrypto.org/

This update was very much needed, and congrats on the new introduction—I love it and finally have a page I can share introducing EA!

1
Clifford
2y
Great to hear Andre! :)

"It is only when you don’t care about your reputation that you tend to have a good one."
-Nassim Taleb 

Hi Bara, thank you very much for your feedback! 

Thanks for the catch on the malaria bed net :)

I think cancer deaths have been going up, not down (https://ourworldindata.org/cancer#is-the-world-making-progress-against-cancer), so maybe you meant 4M not 40M in 2015.

I don't fully understand the problem with the 'payload' point, but since I'm in doubt, and I understand that it could be a risk, I will remove it for the moment.

1
brb243
2y
Yes, that's right — I was off by a factor of 10. OK, as you wish.

Great catch! I didn't see this deck before, will go through it now. From a first look, it seems like the key numbers deck is general, and these decks are based on the 4 EA cause areas.

I recently published six new wikiHow articles to promote EA principles: How to Make a Difference in Your Career, How to Help Farmed Animals, How to Launch a High Impact Nonprofit, How to Reduce Animal Cruelty in Your Diet, How to Help Save a Child's Life with a Malaria Bed Net Donation, and How to Donate Cryptocurrency to Effective Charities.

Some titles might change soon in case you can't find them anymore (e.g., How to Reduce Animal Cruelty in Your Diet --> How to Have a More Ethical Diet Towards Animals, and How to Help Save a Child's Life... (read more)

You mean like Animal Welfare (beginners) and Animal Welfare (advanced)? Thanks for the idea! I never thought about it. Let me know your feedback on the cards once you start revising them :)

I agree with sharing more flashcards! Let me know your feedback on the Anki cards :)

Great feedback on the longevity of flashcards, will apply it, thanks!

What a beautiful project for Open Philanthropy to sponsor! I was so happy to see my favourite YouTube channel publish this video :)

Great idea! 

On the UGAP website, there's no mention that the program is online or physical. I recommend clarifying it :)

When is the deadline for the volunteer application?

2
IanDavidMoss
2y
There is no deadline, we are always open to new volunteers :)

This morning I thought, "EA Forum posts should be shorter and simpler," and now I read your post. Thank you for helping make ideas clear to everyone, not just philosophers ;)

I need to add more examples to my writing. For example, I wrote my list of 90 mental models with no examples, so some mental models are hard to understand.

I recommend to everyone On Writing Well by William Zinsser, which improved my writing by 50%.
I summarised the book in my writing checklist for those short on time. Feedback is welcome :)
