I hope you've smiled today :)
I really want to experience and learn about as much of the world as I can, and I pride myself on working to become a sort of modern-day renaissance man, a bridge builder between very different people if you will. A few things not commonly seen in the same person: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university.
Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works toward making the world a little better than it was.
Below are some random interests of mine. I'm happy to connect over any of them, and over anything EA; please feel free to book a time whenever is open on my Calendly.
I've done some RA work in AI Policy now, and I'd be eager to continue that in a more permanent position (or at least a longer funded period), so any help bettering myself (e.g. how can I do research better?) or finding a position like that would be much appreciated. Otherwise, I'm on the lookout for good opportunities in the EA Community Building or General Longtermism Research space, so again, any help upskilling or breaking into those spaces would be wonderful.
Of much lower importance, I'm still not sure which cause area I'd like to go into, so if you have any information on the following, especially regarding a career in it, I'd love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.
I don't have domain expertise by any means, but I have thought a good bit about AI policy and next best steps, which I'd be happy to share (e.g. how bad is risk from AI misinformation, really?). Beyond EA-related things, I have deep knowledge in philosophy, psychology, and meditation, and can potentially help with questions related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization and help others on that journey (as well as connect with those trying to find work).
Yeah, Oscar captured this pretty well. You say that Giving What We Can is trying to change social norms, but how well is that really achieved on the EA Forum, where maybe 70% or more of users are already familiar with the pledge?
I support the aspect of creating a community around it, but I just don't really feel that from seeing emojis in other people's EA Forum profiles. I think you'd focus on other things if creating a community among givers were your goal; to me, this likely just pressures those who haven't pledged, for whatever reason, into taking it, which might not be the right decision.
I agree that signaling your support for good social norms is a positive thing, though, and I feel differently when this is used on LinkedIn, for example. I just don't think the abstract benefits you point to actually cash out when the orange emoji is added to forum profiles.
I honestly don't like seeing it on the forum. It has a virtue-signal-y sort of feel to me, I guess because I see its potential for impact as someone who doesn't know about the pledge saying "oh, what's that orange thing all about?" and then reading up on it when they wouldn't have otherwise, and I doubt there are many people on the forum who fit that bill.
As a random data point, I'm only just getting into the AI Governance space, but I've found little engagement with (some) (of[1]) (the) (resources) I've shared, and I've sort of updated toward thinking this is either not the space for it or that I'm not yet knowledgeable enough about what would be valuable to others.
I was especially disappointed with this one: it was a project I worked on with a team for some time, and I still think it's quite promising, but it didn't receive the proportional engagement I would have hoped for. Given that I optimized parts of the project specifically for putting out this bit of research, I wouldn't do the same now and would have instead focused on other parts of the project.
This is a solid data point, so thanks for mentioning it. It's maybe worth noting that, as much as you and Emile may be "critical of EA," Emile was formerly quite friendly, and you and I are having this conversation on the forum.
I think you're likely both "more EA" than the average person, and definitely more EA than the average detractor I have in mind. What it means to "be EA" is amorphous and uncertain here, but many people who consider themselves EAs are also critical of it sometimes.
I'd be interested to see how much Timnit donates, or how much any of those who wrote the typical SBF articles do, but I highly doubt their numbers would look like those above.
This was an absolutely beautiful read; thank you so much for taking the time to write it. I recently put out something of my own with similar thoughts, and I just wanted to note how remarkable it was to read this afterward and find much of that wisdom already contained here.
Thanks Mel :)