I'm an undergraduate at University College Dublin studying computational social science. I'm also an organizer for EA Ireland and do research at SoGive!
Contact me for any reason through Twitter, email, or LinkedIn :)
Yes, this is true and very important. We should by no means lose sight of existential risk as a guiding principle! I think the best framing to use will vary a lot case by case, and often the one you outline will be the better option. Thanks for the feedback!
Oh, I like this idea! And love WaitButWhy.
This is a good point, and I thought about it when writing the post. Trying to be persuasive does carry the risk of mischaracterizing things flatteringly or worsening epistemics, and we must be careful to avoid that. But I don't think it's inevitable in any attempt at persuasion, such that we shouldn't even try! I'm sure someone smarter than me could come up with better examples than the ones I presented. (For instance, the example about using visualizations seems pretty harmless—maybe attempts to be persuasive should look more like that one than the rest of the examples?)
Hm, yeah, I see where you're coming from. Changed the phrasing.
No, that's not what I mean. I mean we should use other examples of the form "you ask an AI to do X, and the AI accomplishes X by doing Y, but Y is bad and not what you intended," where Y is not as bad as an extinction event.
Much of SoGive's methodology is outlined on this blog, which I think is pretty accessible for beginners (though some parts may be out of date).
That's a good idea! I think the post you saw might have been this one.
Do you work with Kat Woods? She mentioned that some people on her team had already done some work on this and has been meaning to put me in touch with them.
That's amazing! Yes, I definitely think we can work together. Do you have an email or similar where I can reach out to discuss further?
I think many of these benefits could be achieved by local EA groups working together on a high-impact project (maybe like those in Impact CoLabs?). Some people in my local EA group have started doing AI research together, and that seems to be going pretty well. I worry that EA groups doing community service in an official EA capacity may muddy the waters about what effective altruism stands for.