Quadratic Reciprocity

130 · Joined Jul 2022

Comments (25)

Besides (or after) doing his AGI Safety Fundamentals Program (and the potential future part 2 / advanced version of the curriculum), what does he recommend university students interested in AI safety do? 

If we keep focusing on students we won't be able to fix the mentorship gap. At the moment we have lots of people looking for guidance and very few people able to provide the right level of support.

 

I'd love to get more thoughts on this. My model is that for a lot of the things where there is a mentorship gap, what's lacking is mentors with very relevant-to-EA kinds of experience - how to successfully run an EA-aligned organisation, knowledge of AI alignment, research taste for EA projects - and that it is not significantly more difficult for a smart young person to skill up in these things than for an older professional. The flexibility students have in changing their focus comes in really handy here. 

There are other things - like how to get into certain types of policy careers, or how to climb non-EA ladders more generally - that don't fit the above, and outreach to people with that knowledge and experience seems valuable. 

Are there examples of typical bad takes you've seen newer EAs post? 

I love this post.

Another suggestion would be for people hiring young people to fight fires and do other projects to be clear about which skills they'd learn from the work and what the downsides are. I've found it helpful in the past when someone pointed out that, although they thought it would be really impactful for me to help them with a particular project, they weren't sure I would develop the skills I wanted to learn from it, compared to my alternatives. 

When I first got involved in the community-building bubble, it was very difficult for me to say no to things because everything felt impactful and the people suggesting I do things / help with particular things were friends and mentors I wanted to prove myself to. 

I think there's probably not that much we'd disagree on about what people should be doing. My comment was more of a "feelings/intuitions/vague uncomfortableness" thing than anything well thought out, for a few reasons I might flesh out into something more coherent at some point in the future. 

I used the word "icky" to mean "this makes me feel a bit sus because it could plausibly be harmful to push this but I'm not confident it is wrong".  I also think it is mostly harmful to push it to young people who are newly excited about EA and haven't had the space to figure out their own thoughts on deferring, status, epistemics, cause prio etc. 

For this reason, this:

I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.

feels a little bit icky to me. That many people get introduced to EA in very different ways, and learn about it on their own or via people who aren't very socially influenced by the Berkeley community, is an asset. One way to destroy a lot of the benefit of that geographic diversity would be to get all the promising people to hang out in Berkeley and then have their worldviews shaped by that. 

From https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future#Thoughts_on_the_balance_of_positive_and_negative_value_in_current_lives:

I feel like I basically have no idea, but if I had to guess I’d say ~40% of current human lives are net-negative, and the world as a whole is worse than nothing for humans alive today because extreme suffering is pretty bad compared to currently achievable positive states. This does not mean that I think this trend will continue into the future; I think the future has positive EV due to AI + future tech.

I share these intuitions, and they're a huge part of the reason reducing x-risk feels so emotionally compelling to me. It would be so sad for humanity to die out so young and unhappy, never having experienced the awesome possibilities that otherwise lie in our future. 

Like, the difference between what life is like and the sorts of experiences we can have right now, versus just how good life could be and the sorts of pleasures we could potentially experience in the future, is so incredibly massive.

There's also the feeling that the only way to make up for all the suffering people experienced in the past and are experiencing now, and the suffering we inflict on animals, is to fill the universe with good stuff: create so much value, so many beautiful experiences, and whatever else is good and positive and right, that things like disease, slavery, and the torture of animals seem like a distant and tiny blot in human history.

This is an example of what I meant: https://www.lesswrong.com/posts/aan3jPEEwPhrcGZjj/nate-soares-life-advice

I wish there were more nudges to make posts like that after EAGs.
 
