Ines

Pursuing an undergraduate degree

Bio

Hello! I'm an undergraduate at University College Dublin studying computational social science. I'm also an organizer for EA Ireland and do research at SoGive. This summer break, I'm in Berkeley helping organize the Stanford Existential Risks Initiative!

Contact me for any reason through Twitter, email, or LinkedIn :)

Comments (25)

I've added you to a list of relevant people :)

Ability to include a poll when you make a question post, à la Twitter! I know this feature has been suggested before, in response to which Aaron Gertler made the Effective Altruism Polls Facebook group, but that group seems to have plateaued at 578 members after 2.5 years. Response rates on the forum would probably be much higher.

I think a bottleneck to this is often that having the explicit goal of making the members of your EA group become friends can feel inorganic and artificial. The activities you suggest seem like a good way of doing this without it feeling forced, and I'll probably be using some of these ideas for EA Ireland. Thanks for writing up this wholesome post!

Is there a newsletter or somewhere to subscribe for updates?

Yes, this is true and very important. We should by no means lose sight of existential risks as a guiding principle! I think the best framing to use will vary a lot case by case, and often the one you outline will be the better option. Thanks for the feedback!

Oh, I like this idea! And love WaitButWhy.

This is a good point, and I thought about it when writing the post: trying to be persuasive does carry the risk of mischaracterizing things in a flattering way or worsening epistemics, and we must be careful not to do this. But I don't think this is doomed to happen with any attempt at being persuasive, such that we shouldn't even try! I'm sure someone smarter than me could come up with better examples than the ones I presented. (For instance, the example about using visualizations seems pretty harmless; maybe attempts to be persuasive should look more like that one than like the rest of the examples?)

Hm, yeah, I see where you're coming from. Changed the phrasing.

No, that's not what I mean. I mean we should use other examples of the form "you ask an AI to do X, and the AI accomplishes X by doing Y, but Y is bad and not what you intended," where Y is not as bad as an extinction event.
