frances_lorenz

Currently working as Operations Manager at aisafetysupport.org. We try to support early-career AI Safety researchers (e.g. we offer AI Safety-specific career advising) and those already working in the field.

You can contact me on Twitter or by email:

Twitter: https://twitter.com/frances__lorenz

Email: frances@aisafetysupport.org

Feel free to message me any time; I very much love chatting :D (though I'm not always the timeliest responder).

Comments

DeepMind’s generalist AI, Gato: A non-technical explainer

Oh, that's totally okay, thanks for clarifying!! And it's good to get more feedback, since I was (and still am) trying to gather info on how accessible this is.

DeepMind’s generalist AI, Gato: A non-technical explainer

This is really good to know, thank you!! I'm thinking we hit more of a 'familiar with some technical concepts/lingo' accessibility level, rather than being accessible to people who truly have little or no familiarity with the field or its concepts.

Curious whether that seems right (maybe some aspects of this post are just broadly confusing). I was hoping this could be accessible to anyone, so I'll have to try to hit that mark better in the future.

EA is more than longtermism

Luke, thank you for always being so kind :)) I very much appreciate you sharing your thoughts!!

"sometimes people exclude short-term actions because it's not 'longtermist enough'"
That's a really good point about how we see longtermism being pursued in practice. I would love to investigate whether others are feeling this way; I have certainly felt it myself in AI Safety. There's some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research, although I've talked to some who think addressing these issues first is key to building towards alignment. I'm not even totally sure where this sense comes from, other than that fairness research is really not talked about much at all in safety spaces.

Glad you brought this up, as it's definitely important to field/community building.

EA is more than longtermism

Do you think that's a factor of how many places you can apply for longtermist vs. other cause-area funding? How high the bar is for longtermist ideas vs. others? Something else?

EA is more than longtermism

Thank you, I really appreciate the breadth of this list; it gives me a much stronger picture of the various ways a longtermist worldview is being promoted.

Career Advice: Philosophy + Programming -> AI Safety

Yeah, absolutely! Happy to go through posts offering career advice; we can discuss how one might implement the advice, whether there are any other perspectives to consider, etc.

I would really encourage a low bar for sending people our way; we're very happy to talk to anyone! Generally, we offer coaching to those trying to get into the AI Safety field (e.g. undergrads looking for research positions, software engineers or research scientists looking for work in the field, and independent researchers or community-builders interested in applying for funding). We're also happy to talk people through AI Safety career-related decisions (e.g. whether or not to go to graduate school, choosing between positions, etc.).

Career Advice: Philosophy + Programming -> AI Safety

This is great advice :) I already mentioned this below, but for people in similar positions, please do consider booking a coaching call with AI Safety Support: https://www.aisafetysupport.org/. We have experience helping people navigate the AI Safety field and can connect you to others.