35 karma · Joined Sep 2022 · Seeking work · Ephraim, UT 84627, USA


I try to take the threat from AI seriously, and I think doom is pretty much the default scenario.

I've known about EA for almost a year now and it has been an amazing experience to engage with. I'm pretty into the rationality/LW side of things these days. 

I recently got my BS in Biomedical Science with a minor in Economics. Not much formal work experience, but I'm pretty good at getting stuff done on my own.

How others can help me

I don't live near an EA hub and I'm not that well connected. I would love to get people's insight on things, learn about promising opportunities, and talk strategy with other EAs.

How I can help others

I'm not an expert in anything really, but I know a little about a lot and would be happy to provide input where I can. Currently, I think a lot about how to make AI go well and I am happy to red-team your plans or brainstorm with you.


Just for the sake of feedback: this makes me personally less inclined to post the ideas and drafts I've been toying with, because it feels like they would be completely steamrolled by a flurry of posts from higher-status people, and it wouldn't really matter what I said.

I don't know who your target demographic is here, and it sounds like a "flurry of posts by high-status individuals" might have been your main intention anyway. Please note, though, that this doesn't necessarily help you very much if you are trying to cultivate more outsider perspectives.

In any case, you're probably right that this will lead to more discussion, and I am interested to see how it shakes out. I hope you'll write up a review post or something to summarize how the event went, because it's going to be hard to follow that many posts on different topics and the corresponding discussion each one generates.

I am very unclear on why research that involves game theory simulations seems dangerous to you. I think I'm ignorant of something leading you to this conclusion. Would you be willing to explain your reasoning or send me a link to something so I can better understand where you're coming from?

Could you expound on this or maybe point me in the right direction to learn why this might be? 

I tend to agree with the intuition that s-risks are unlikely because they occupy a small part of possibility space and nobody is really aiming for them. I can see a risk that systems trained to produce eudaimonia will instead produce -1 x eudaimonia, but I can't see how that justifies thinking that an astronomically bad outcome is more likely than an astronomically good one. Surely a random sign flip is less likely than not.

Sure thing! I don't think it'll be all that polished or comprehensive since it is mostly intended to help me straighten out my reasoning, but I would be more than happy to share it. 

Thank you for the survey info! I was favorably surprised by some of those results.

Thank you so much! This is exactly the sort of thing I am looking for. I'm glad there is high quality work like this being done to advance strategic clarity surrounding TAI and I appreciate you sharing your draft.

I hadn't heard about Ayuda Efectiva, but it looks like a great introductory resource and I'll definitely send it to her. Reaching out to those groups might also be a good idea. I appreciate the help!

Hey everybody!

One of my friends is interested in learning more about EA, and I am trying to find good resources to recommend to her. The thing is, her English is only so-so; her preferred language is Spanish. I found a couple of websites that give brief overviews of some EA ideas, but I am having a hard time finding comprehensive EA texts in Spanish.

Does anyone know of any EA resources in Spanish that could be helpful?