All of SeanEngelhart's Comments + Replies

Good point! I think I actually had that same misunderstanding too!

Sorry for the incredibly late response! I think that all makes sense--thanks for sharing!

I think it also ends up depending a lot on one's particular circumstances: do you have a unique / timely opportunity to "jump in"? Do you have a clear path forward (i.e. options that you even could "jump into")? How uncertain do you feel about which path is right for you and how much would you have to reduce your uncertainty to feel satisfied?

It's funny you mention that "it is easy to come to the conclusion that one should become a full-time philosopher or global prior... (read more)

2
Simon Holm
2y
It absolutely depends on particular circumstances! I realise now that my current impression of GPR is that it really is its own field, i.e. that you would not spend much time dipping your toes into specific (other) cause areas. See more here: https://globalprioritiesinstitute.org/research-agenda-web-version/ But it would be more useful to talk to someone who works with it to find out more, for sure!

Hey all!

Here's a short page on vegan nutrition for anyone trying to learn more about it / get into veganism.

If someone doesn't have much prior ML experience, can they still be a TA, assuming they have a month to dedicate to learning the curriculum before the program starts?

If yes, would the TA's learning during that month be self-guided? Or would it take place in a structure/environment similar to the one the students will experience?

2
Max Nadeau
2y
It is possible but unlikely that such a person would be a TA. Someone with little prior ML experience would be a better fit as a participant.

This sounds really exciting!

I'm a bit unclear on the below point:

I think that MLAB is a good use of time for many people who don’t plan to do technical alignment research long term but who intend to do theoretical alignment research or work on other things where being knowledgeable about ML techniques is useful.

Do you mean you don't think MLAB would be a good use of time for people who do "plan to do technical alignment research long term"?

2
Max Nadeau
2y
We intended that sentence to be read as: "In addition to people who plan on doing technical alignment, MLAB can be valuable to other sorts of people (e.g. theoretical researchers)".

Thanks for this!

Does "Learning the Basics" specifically mean learning AI Safety basics, or does this also include foundational AI/ML (in general, not just safety) learning? I'm wondering because I'm curious if you mean that the things under "Learning the Basics" could be done with little/no background in ML.

1
Aaron_Scher
2y
Good question. I think "Learning the Basics" is specific to AI Safety basics and does not require a strong background in AI/ML. My sense is that the AI Safety basics and ML are somewhat independent. The ML side of things simply isn't pictured here. For example, the MLAB (Machine Learning for Alignment Bootcamp) program which ran a few months ago focused on taking people with good software engineering skills and bringing them up to speed on ML. As far as I can tell, the focus was not on alignment specifically, but was intended for people likely to work in alignment. I think the story of what's happening is way more complicated than a one-dimensional (plus org size) chart, and the skills needed might be an intersection of software engineering, ML, and AI Safety basics.

When I first read this and some of the other comments, I think I was in an especially sensitive headspace for guilt / unhealthy self-pressure. Because of that & the way it affected me at the time, I want to mention for others in similar headspaces: Nate Soares' Replacing Guilt series might be helpful (there's also a podcast version). Also, if you feel like you need to talk to someone about this and/or would like ideas for additional resources (not sure how many I have, but at least some) please feel free to direct message me.

Do you have any examples of suggested ideas to red team? No worries if not--just wanted to get a sense of what the suggested list will be like.

This sounds awesome! Thank you for running it! Do you expect to have additional runs of this in the future?

2
Cillian_
2y
I imagine we'll continue to run Red Team Challenge somewhere between 1-4 times per year moving forward (though this largely depends on how well the first iteration goes).

Thanks so much for posting this! Do you plan to update the forecast here / elsewhere on the forum at all? If not, do you have any recommendations for places to see high quality, up-to-date forecasts on nuclear risk?

I'm curious about your thoughts on this: hypothetically, if I were to relocate now, do you see the duration of my stay in the lower risk area as being indefinitely long? It seems unclear to me what exact signals--other than pretty obvious ones like the war ending, which I'd guess are much less likely to happen soon--would be clear green lights to move back to my original location. I'm wondering because I'm trying to assess feasibility. For my situation, it feels like the longer I'm away, the higher the cost (not specifically monetary) of the relocation.

2
Fin
2y
I personally don't think the risk is currently high enough to justify evacuation if you live in the US (I'm not sure where you're writing from). I think looking at escalations/de-escalations of conflict between nuclear powers as signals of risk makes sense. You could look at estimates like this one (https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022) or check relevant Metaculus questions.

Sorry for my very slow response!

Thanks--this is helpful! Also, I want to note for anyone else looking for the kind of source I mentioned, this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together.

(I had heard about the "Question of Evidence," which I described above, from looking at a snippet of the podcast's transcript, but hadn't actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)

Hey Jonas! Good to "see" you! (If I remember right, we met at the EA Fellowship Weekend.) Here are some thoughts:

  • In addition to some of the things that have already been said (like reminding myself of the suffering in the world and trying to channel my sadness/anger/frustration/etc. into action to help), I also find it valuable to remind myself that this is an iterative process of improvement. Ideally, I'll continually get better at helping others for the rest of my life and continually improve my ideas about this topic. This feels especially helpful whe
... (read more)
2
Jonas Hallgren
3y
Great to "see" you Sean! I do remember our meeting during the conference, an interesting chat for sure.  Thank you for the long and deliberate answer, I checked out the stuff you sent and it of course sent me down a rabbit hole of EA motivation which was quite cool. Other than that it makes sense to modify my working process and goals a bit in order to get motivation from other sources than altruism. I think the two main things I take with me from the advice here is to have a more written account of why I do things but most importantly I need to get into the EA loop and I need to actively engage with the community more to get more reminders of why I do things. I'm however lazy and I will only answer you even though I enjoyed the other's answers too. 

Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.

3
AidanGoth
3y
Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this: This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration. To see how you can find the Bayes' factors, note that if P(W) is our prior probability that we should give weights, P(¬W) = 1 − P(W) is our prior that we shouldn't, and P(W|A1) and P(¬W|A1) = 1 − P(W|A1) are the posteriors after argument 1, then the Bayes' factor is
$$\frac{P(A_1 \mid W)}{P(A_1 \mid \neg W)} = \frac{P(W \mid A_1)}{P(\neg W \mid A_1)} \cdot \frac{P(\neg W)}{P(W)} = \frac{P(W \mid A_1)}{1 - P(W \mid A_1)} \cdot \frac{1 - P(W)}{P(W)} = \frac{0.65}{0.35} \cdot \frac{0.4}{0.6} \approx 1.24$$
Similarly, the Bayes' factor for the second pro is $\frac{0.75}{0.25} \cdot \frac{0.35}{0.65} \approx 1.62$.

Great point! I understand the high-level idea behind priors and updating, but I'm not very familiar with the details of Bayes factors and other Bayesian topics. A quick look at Wikipedia didn't feel super helpful... I'm guessing you don't mean formally applying the equations, but instead doing it in a more approximate or practical way? I've heard Spencer Greenberg's description of the "Question of Evidence" (how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?). Are there similar quick, practical framings that could be applied for the purposes described in your comment? Do you know of any good, practical resources on Bayesian topics that would be sufficient for what you described?

1
AidanGoth
3y
Good questions! It's a shame I don't have good answers. I remember finding Spencer Greenberg's framing helpful too but I'm not familiar with other useful practical framings, I'm afraid. I suggested the Bayes' factor because it seems like a natural choice for the strength/weight of an argument, but I don't find it super easy to reason about usually. The final suggestion I made will often be easier to do intuitively. You can just state your prior at the start and then intuitively update it after each argument/consideration, without any maths. I think this is something that you get a bit of a feel for with practice. I would guess that this would usually be better than trying to formally apply Bayes' rule. (You could then work out your Bayes' factor, as it's just a function of your prior and posterior, but that doesn't seem especially useful at this point/it seems like too much effort for informal discussions.)
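For anyone who wants to see the arithmetic behind this exchange worked out, here is a minimal Python sketch (not from either comment) of the workflow described above: state a prior, update it intuitively after each argument, and recover the implied Bayes' factor from each prior/posterior pair. The helper names are made up for illustration, and the 0.60 → 0.65 → 0.75 numbers are just the example figures from the comment above.

```python
# Minimal sketch of "intuitive updating": recover Bayes' factors from
# prior/posterior pairs, and check that re-applying them gives back the posterior.
# Numbers are the illustrative figures from the comment above, not real views.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def bayes_factor(prior: float, posterior: float) -> float:
    """Bayes' factor implied by moving from `prior` to `posterior`."""
    return odds(posterior) / odds(prior)

def update(prior: float, bf: float) -> float:
    """Posterior probability after applying Bayes' factor `bf` to `prior`."""
    posterior_odds = odds(prior) * bf
    return posterior_odds / (1 + posterior_odds)

if __name__ == "__main__":
    prior = 0.60        # initial credence that we should give weights (W)
    after_arg1 = 0.65   # intuitive credence after the first pro
    after_arg2 = 0.75   # intuitive credence after the second pro

    bf1 = bayes_factor(prior, after_arg1)       # ~1.24, as in the comment
    bf2 = bayes_factor(after_arg1, after_arg2)  # ~1.62, as in the comment
    print(f"Bayes' factor for argument 1: {bf1:.2f}")
    print(f"Bayes' factor for argument 2: {bf2:.2f}")

    # Sanity check: applying both factors to the prior recovers 0.75.
    print(f"Recovered posterior: {update(update(prior, bf1), bf2):.2f}")
```

Working in odds rather than probabilities is what makes this convenient: each argument's Bayes' factor just multiplies the current odds, so the order of the considerations doesn't matter.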

[Main takeaway: to some degree, this might increase the expected value of making AI safety measures performant.]

One I thought of:
Consider the forces pushing for untethered, rapid, maximal AI development and those pushing for controlled, safe, possibly slower AI development. If something happens in the future such that the "safety forces" become much stronger than the "development forces"--this could be due to some AI accident that causes significant harm, generates a lot of public attention, and leads to regulations being imposed on the development of AI... (read more)

This seems like quite solid advice to me and especially relevant in light of posts like this. It makes a lot of sense to try to "skill up" in areas that have lower barriers to entry (as long as they provide comparable or better career capital) and I like the idea that "you can go get somebody else to invest in your skilling-up process, then in a way, you're diverting money and mentorship into EA." This seems especially valuable since it both redirects resources of non-EA orgs that might've otherwise gone to skilling up people who aren't altruistically-mind... (read more)

I just read Forbes' April/May issue "A New Billionaire Everyday" and it had a blurb on Sam Bankman-Fried (I haven't been able to find the blurb online, this is just Sam's bio). Unfortunately, the blurb contained some of the classic mischaracterizations of EA--that it's all about giving money, that all that matters is cost-effectiveness calculations and quantifiable effects, etc. Granted, I might have been overly sensitive to these kinds of misrepresentations, especially since I read it quickly.

This got me thinking: does EA (maybe the CEA specifically) hav... (read more)

3
Stefan_Schubert
3y
CEA has people working on this. See, e.g. this article.

Thanks so much for this session! I found it really valuable. Specifically, the conversations on mental health and the dynamic between EA and social justice felt very relevant for me. I think this style of session could be very effective at reducing alienation/isolation within EA, underscoring its diversity, and strengthening one's sense of community, especially for those who feel like they are outsiders in some sense. I would definitely support having more of these types of sessions in the future.

Yeah, I'm in SE, but have been considering some additional fields as well. Thanks for the info!

Hi all! I'm wondering how valuable joining an honors society is in terms of job searching (in general, but also for EA-specific roles). I've received invitations from the few honors societies that seem legitimate (Phi Beta Kappa and Phi Kappa Phi) and am weighing whether I should pay the member fee for one of them. Does anyone have any knowledge / experience with this? Thanks, Sean

7
JP Addison
3y
What is your field? In software engineering I've never heard of an honors society being useful to anyone. In any case, I highly doubt it would be helpful when trying to network within EA, but note that most impactful roles for EAs involve networking outside of EA.