SeanEngelhart

Studied computer science at UW-Madison. In terms of career paths, AI safety, computer security (possibly for GCR reduction), computational modeling for alternative proteins, general EA-related research, and earning to give are all options currently on the table, and I'm trying to assess each. Beyond these areas, I have a wide range of interests within EA.

My personal website

Anonymous Feedback Form: If you have any feedback for me on anything and feel inclined to fill out this form, I would very much appreciate it! (idea credit: Michael Aird)

Comments

SeanEngelhart's Shortform

Hey all!

Here's a short page on vegan nutrition for anyone trying to learn more about it / get into veganism.

Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2]

If someone doesn't have much prior ML experience, can they still be a TA, assuming they have a month to dedicate to learning the curriculum before the program starts?

If yes, would the TA's learning during that month be self-guided? Or would it take place in a structure/environment similar to the one the students will experience?

Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2]

This sounds really exciting!

I'm a bit unclear on the below point:

I think that MLAB is a good use of time for many people who don’t plan to do technical alignment research long term but who intend to do theoretical alignment research or work on other things where being knowledgeable about ML techniques is useful.

Do you mean you don't think MLAB would be a good use of time for people who do "plan to do technical alignment research long term"?

A visualization of some orgs in the AI Safety Pipeline

Thanks for this!

Does "Learning the Basics" specifically mean learning AI Safety basics, or does this also include foundational AI/ML (in general, not just safety) learning? I'm wondering because I'm curious if you mean that the things under "Learning the Basics" could be done with little/no background in ML.

Aiming for the minimum of self-care is dangerous

When I first read this and some of the other comments, I think I was in an especially sensitive headspace for guilt / unhealthy self-pressure. Because of that and the way it affected me at the time, I want to mention for others in similar headspaces: Nate Soares' Replacing Guilt series might be helpful (there's also a podcast version). Also, if you feel like you need to talk to someone about this and/or would like ideas for additional resources (I'm not sure how many I have, but at least some), please feel free to direct message me.

Apply for Red Team Challenge [May 7 - June 4]

Do you have any examples of suggested ideas to red team? No worries if not; just wanted to get a sense of what the suggested list will be like.

Apply for Red Team Challenge [May 7 - June 4]

This sounds awesome! Thank you for running it! Do you expect to have additional runs of this in the future?

Samotsvety Nuclear Risk Forecasts — March 2022

Thanks so much for posting this! Do you plan to update the forecast here or elsewhere on the Forum at all? If not, do you have any recommendations for places to see high-quality, up-to-date forecasts on nuclear risk?

Nuclear Preparedness Guide

I'm curious about your thoughts on this: hypothetically, if I were to relocate now, do you envision the stay in the lower-risk area lasting indefinitely? It seems unclear to me what exact signals, other than pretty obvious ones like the war ending (which I'd guess is much less likely to happen soon), would be clear green lights to move back to my original location. I'm asking because I'm trying to assess feasibility. For my situation, it feels like the longer I'm away, the higher the cost (not just monetary) of the relocation.

Weighted Pros / Cons as a Norm

Sorry for my very slow response!

Thanks, this is helpful! Also, I want to note, for anyone else looking for the kind of source I mentioned, that this 80K podcast with Spencer Greenberg is actually very helpful and relevant for the things described above. They even work through some examples together.

(I had heard about the "Question of Evidence," which I described above, from a snippet of the podcast's transcript, but hadn't actually listened to the whole thing. Doing a full listen felt well worth it for the kind of info mentioned above.)
