Sorry for the incredibly late response! I think that all makes sense--thanks for sharing!
I think it also ends up depending a lot on one's particular circumstances: do you have a unique / timely opportunity to "jump in"? Do you have a clear path forward (i.e. options that you even could "jump into")? How uncertain do you feel about which path is right for you and how much would you have to reduce your uncertainty to feel satisfied?
It's funny you mention that "it is easy to come to the conclusion that one should become a full-time philosopher or global prior...
Hey all!
Here's a short page on vegan nutrition for anyone trying to learn more about it / get into veganism.
If someone doesn't have much prior ML experience, can they still be a TA, assuming they have a month to dedicate to learning the curriculum before the program starts?
If yes, would the TA's learning during that month be self-guided? Or would it take place in a structure/environment similar to the one the students will experience?
This sounds really exciting!
I'm a bit unclear on the below point:
I think that MLAB is a good use of time for many people who don’t plan to do technical alignment research long term but who intend to do theoretical alignment research or work on other things where being knowledgeable about ML techniques is useful.
Do you mean you don't think MLAB would be a good use of time for people who do "plan to do technical alignment research long term"?
Thanks for this!
Does "Learning the Basics" specifically mean learning AI Safety basics, or does this also include foundational AI/ML (in general, not just safety) learning? I'm wondering because I'm curious if you mean that the things under "Learning the Basics" could be done with little/no background in ML.
When I first read this and some of the other comments, I think I was in an especially sensitive headspace for guilt / unhealthy self-pressure. Because of that & the way it affected me at the time, I want to mention for others in similar headspaces: Nate Soares' Replacing Guilt series might be helpful (there's also a podcast version). Also, if you feel like you need to talk to someone about this and/or would like ideas for additional resources (not sure how many I have, but at least some) please feel free to direct message me.
Do you have any examples of suggested ideas to red team? No worries if not--just wanted to get a sense of what the suggested list will be like.
This sounds awesome! Thank you for running it! Do you expect to have additional runs of this in the future?
Thanks so much for posting this! Do you plan to update the forecast here / elsewhere on the forum at all? If not, do you have any recommendations for places to see high quality, up-to-date forecasts on nuclear risk?
I'm curious about your thoughts on this: hypothetically, if I were to relocate now, do you see the duration of my stay in the lower risk area as being indefinitely long? It seems unclear to me what exact signals--other than pretty obvious ones like the war ending, which I'd guess are much less likely to happen soon--would be clear green lights to move back to my original location. I'm wondering because I'm trying to assess feasibility. For my situation, it feels like the longer I'm away, the higher the cost (not specifically monetary) of the relocation.
Sorry for my very slow response!
Thanks--this is helpful! Also, I want to note for anyone else looking for the kind of source I mentioned, this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together.
(I had heard about the "Question of Evidence," which I described above, from looking at a snippet of the podcast's transcript, but hadn't actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)
Hey Jonas! Good to "see" you! (If I remember right, we met at the EA Fellowship Weekend.) Here are some thoughts:
Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.
Great point! I understand the high-level idea behind priors and updating, but I'm not very familiar with the details of Bayes factors and other Bayesian topics. A quick look at Wikipedia didn't feel super helpful... I'm guessing you don't mean formally applying the equations, but instead doing it in a more approximate or practical way? I've heard Spencer Greenberg's description of the "Question of Evidence" (how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?). Are there similar quick, practical framings that could be applied for the purposes described in your comment? Do you know of any good, practical resources on Bayesian topics that would be sufficient for what you described?
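To make the "Question of Evidence" framing concrete, here's a minimal sketch of how the informal version of Bayesian updating works with odds and a Bayes factor. This is my own illustration with made-up numbers, not something from the podcast or the comment above:

```python
def update_odds(prior_prob, bayes_factor):
    """Convert a prior probability to odds, multiply by the Bayes factor
    (P(evidence | hypothesis) / P(evidence | not hypothesis)),
    and convert back to a posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Suppose I start out 30% confident in a hypothesis, then see evidence
# I'd expect to see 3x as often if the hypothesis were true than if it
# were false (Bayes factor of 3):
posterior = update_odds(0.30, 3.0)  # 9/16 = 0.5625, i.e. about 56%
```

The practical upshot is that you never need the full machinery: estimate how many times likelier the evidence is under your hypothesis than under its negation, multiply your odds by that factor, and you've done the update.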
[Main takeaway: to some degree, this might increase the expected value of making AI safety measures performant.]
One I thought of:
Consider the forces pushing for untethered, rapid, maximal AI development and those pushing for controlled, safe, possibly slower AI development. If something happens in the future such that the "safety forces" become much stronger than the "development forces"--this could be due to some AI accident that causes significant harm, generates a lot of public attention, and leads to regulations being imposed on the development of AI...
This seems like quite solid advice to me and especially relevant in light of posts like this. It makes a lot of sense to try to "skill up" in areas that have lower barriers to entry (as long as they provide comparable or better career capital), and I like the idea that "you can go get somebody else to invest in your skilling-up process, then in a way, you're diverting money and mentorship into EA." This seems especially valuable since it both redirects resources of non-EA orgs that might've otherwise gone to skilling up people who aren't altruistically-mind...
I just read Forbes' April/May issue "A New Billionaire Everyday" and it had a blurb on Sam Bankman-Fried (I haven't been able to find the blurb online, this is just Sam's bio). Unfortunately, the blurb contained some of the classic mischaracterizations of EA--that it's all about giving money, that all that matters is cost-effectiveness calculations and quantifiable effects, etc. Granted, I might have been overly sensitive to these kinds of misrepresentations, especially since I read it quickly.
This got me thinking: does EA (maybe the CEA specifically) hav...
Thanks so much for this session! I found it really valuable. Specifically, the conversations on mental health and the dynamic between EA and social justice felt very relevant for me. I think this style of session could be very effective at reducing alienation/isolation within EA, underscoring its diversity, and strengthening one's sense of community, especially for those who feel like they are outsiders in some sense. I would definitely support having more of these types of sessions in the future.
Hi all! I'm wondering how valuable joining an honors society is in terms of job searching (in general, but also for EA-specific roles). I've received invitations from the few honors societies that seem legitimate (Phi Beta Kappa and Phi Kappa Phi) and am weighing whether I should pay the member fee for one of them. Does anyone have any knowledge / experience with this? Thanks, Sean
Good point! I think I actually had that same misunderstanding too!