SeanEngelhart

Studied computer science at UW-Madison. In terms of career paths, AI safety, computer security (possibly for GCR reduction), computational modeling for alternative proteins, general EA-related research, and earning to give are all options currently on the table, and I'm trying to assess each. Beyond these areas, I have a wide range of interests within EA.

My personal website

Anonymous Feedback Form: If you have any feedback for me on anything and feel inclined to fill out this form, I would very much appreciate it! (idea credit: Michael Aird)

Comments

Weighted Pros / Cons as a Norm

Sorry for my very slow response!

Thanks--this is helpful! Also, for anyone else looking for the kind of source I mentioned: this 80K podcast with Spencer Greenberg is very helpful and relevant for the things described above. They even work through some examples together.

(I had heard about the "Question of Evidence," which I described above, from looking at a snippet of the podcast's transcript, but hadn't actually listened to the whole thing. Doing a full listen felt very worth it for the kind of info mentioned above.)

How do other EAs keep themselves motivated?

Hey Jonas! Good to "see" you! (If I remember right, we met at the EA Fellowship Weekend.) Here are some thoughts:

  • In addition to some of the things that have already been said (like reminding myself of the suffering in the world and trying to channel my sadness/anger/frustration/etc. into action to help), I also find it valuable to remind myself that this is an iterative process of improvement. Ideally, I'll keep getting better at helping others for the rest of my life and keep improving my ideas about this topic. This feels especially helpful when I'm overwhelmed by the sheer number of difficult questions to answer and all the uncertainty that goes with them. I think this iterative mindset is particularly good for sustainability and longevity in our pursuit. Of course, this doesn't mean we can sit back and let our future selves do all the work--I think it's still important to actively avoid becoming complacent (and even this point is a balance, because I also don't think it's healthy or sustainable to be endlessly self-critical). But it does mean we should keep in mind that this is (ideally) a lifelong pursuit and that we can't--and, if some of the ideas behind patient philanthropy are true, maybe shouldn't--have all the answers or all the impact today.
  • If you can find things that are both valuable/impactful and especially interesting to you, that can be a great way to tap into another kind of motivation. In moments when you can't feel emotionally driven to act altruistically, you can draw on curiosity, a desire to learn, etc. instead.
  • While clichéd, I also think the idea of being "the change you want to see" is actually quite useful and profound (which is maybe why it's a cliché). When I feel mad about others' fancy and expensive houses, cars, and lifestyles in a world that contains vast amounts of preventable suffering and a seemingly endless list of pressing problems, it helps to remind myself that I can at least control my decisions about how to use my resources and aim to reduce suffering / increase well-being in the world. I can work to bring about the kind of world that I want to see. (Note: with these last few sentences, I'm not trying to claim that we can't influence others' (non-)altruistic actions.)
  • In my experience, the judgment of others that I described above has had a negative impact on my mental health, so it seems useful for me (and maybe others who are similar) to find ways of managing this kind of judgment. If anyone feels interested in discussing this, please comment or reach out! I'd be happy to share some of my initial thoughts on potential methods for doing this.
  • I think community can be very motivating. If you're able to find others in your life who are excited about EA ideas, this might be able to boost your motivation. I imagine it would also be very energizing if you're able to introduce new people to EA and have them get really interested and excited about it. Here are some community-related links.
  • Here are some related links I found that might be useful (disclaimer: I haven't read through all of these yet):
Weighted Pros / Cons as a Norm

Is there any chance you have an example of your last suggestion in practice (stating a prior, then intuitively updating it after each consideration)? No worries if not.

Weighted Pros / Cons as a Norm

Great point! I understand the high-level idea behind priors and updating, but I'm not very familiar with the details of Bayes factors and other Bayesian topics. A quick look at Wikipedia didn't feel super helpful... I'm guessing you don't mean formally applying the equations, but instead doing it in a more approximate or practical way? I've heard Spencer Greenberg's description of the "Question of Evidence" (how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?). Are there similar quick, practical framings that could be applied for the purposes described in your comment? Do you know of any good, practical resources on Bayesian topics that would be sufficient for what you described?
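
(In case it's useful to anyone else reading: here's a rough sketch of the kind of quick, practical calculation I have in mind, using the odds form of Bayes' rule. All the numbers and the `update` helper are made up for illustration; this isn't from the podcast.)

```python
# Rough sketch: updating a prior with the "Question of Evidence"
# (the likelihood ratio, a.k.a. Bayes factor), via the odds form of
# Bayes' rule. All numbers are invented for illustration.

def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one piece of evidence.

    prior: P(hypothesis) before the evidence, strictly between 0 and 1.
    likelihood_ratio: P(evidence | hypothesis) / P(evidence | not hypothesis).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start out thinking a claim is 30% likely.
p = 0.30

# Consideration 1: I'd be about 3x as likely to see this evidence if
# the claim were true than if it were false, so the Bayes factor is 3.
p = update(p, 3.0)   # ~0.56

# Consideration 2: this evidence is only half as likely if the claim
# is true, so the Bayes factor is 0.5.
p = update(p, 0.5)   # ~0.39

print(f"Posterior after both considerations: {p:.2f}")
```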

What harm could AI safety do?

[Main takeaway: to some degree, this might increase the expected value of making AI safety measures performant.]

One I thought of:
Consider the forces pushing for untethered, rapid, maximal AI development and those pushing for controlled, safe, possibly slower AI development. Suppose something happens in the future (e.g., an AI accident that causes significant harm, generates a lot of public attention, and leads to regulations on AI development) that makes the "safety forces" much stronger than the "development forces." This could slow AI development or mean that AI never gets as advanced as it otherwise would. I haven't read much on the case for economic growth improving welfare, but if those arguments hold and the above scenario significantly reduces economic growth, and thus welfare, then this could be one avenue for harm.

There are some caveats to this scenario:

  • If AI safety work goes really well, it may not hinder AI development or performance. I'm not yet very knowledgeable about AI safety, but from what I've heard, making safety measures performant is an area of active consideration (and possibly active work) in the field. If development and performance aren't hindered, and economic growth is thus unaffected, then the above scenario isn't a cause for harm.
  • This scenario and line of reasoning rely on the harm from stunted economic growth outweighing the benefit of having safer AI. This is a very questionable assumption.
EA is a Career Endpoint

This seems like quite solid advice to me, and especially relevant in light of posts like this. It makes a lot of sense to try to "skill up" in areas that have lower barriers to entry (as long as they provide comparable or better career capital), and I like the idea that "you can go get somebody else to invest in your skilling-up process, then in a way, you're diverting money and mentorship into EA." This seems especially valuable since it both redirects resources of non-EA orgs that might otherwise have gone to skilling up people who aren't altruistically minded and frees up resources of EA orgs that can now go towards developing other members of EA.

SeanEngelhart's Shortform

I just read Forbes' April/May issue "A New Billionaire Everyday", which had a blurb on Sam Bankman-Fried (I haven't been able to find the blurb online; this is just Sam's bio). Unfortunately, the blurb contained some of the classic mischaracterizations of EA: that it's all about giving money, that all that matters is cost-effectiveness calculations and quantifiable effects, etc. Granted, I might have been overly sensitive to these kinds of misrepresentations, especially since I read it quickly.

This got me thinking: does EA (maybe CEA specifically) have any kind of process for reviewing media messaging and trying to prevent these kinds of misrepresentations? I have no clue to what degree EA would be able to shape these descriptions, but I'd guess that Sam was contacted by Forbes for the article and asked some questions. I'd imagine that this back-and-forth could have at least some effect on how EA is portrayed by Forbes (or whatever publication is covering it). So, I'm wondering whether some kind of PR group within EA could be a useful resource when members of EA are contacted by the media (or whether this already exists).

Maintaining Motivation

Thanks so much for this session! I found it really valuable. Specifically, the conversations on mental health and the dynamic between EA and social justice felt very relevant for me. I think this style of session could be very effective at reducing alienation/isolation within EA, underscoring its diversity, and strengthening one's sense of community, especially for those who feel like they are outsiders in some sense. I would definitely support having more of these types of sessions in the future.

SeanEngelhart's Shortform

Yeah, I'm in SE, but have been considering some additional fields as well. Thanks for the info!

SeanEngelhart's Shortform

Hi all! I'm wondering how valuable joining an honors society is for job searching (in general, but also for EA-specific roles). I've received invitations from the few honors societies that seem legitimate (Phi Beta Kappa and Phi Kappa Phi) and am weighing whether I should pay the member fee for one of them. Does anyone have any knowledge of / experience with this?

Thanks,
Sean
