Hi, I just wanted to say this is a great post! My background is in psychology and AI, so I am particularly excited about this piece and would love to talk more about some of your points, especially the key questions worth investigating that may inform AI governance strategy (my current focus!)
21st Talks, a growing existential risk and EA-focused media organization with a podcast and YouTube channel, is looking for a content admin/research assistant.
The role is not currently fully funded, but the content admin will receive royalties.
Responsibilities include taking the reins on strategizing about potential podcast guests and setting up interviews by writing to guests directly or to their PR teams.
We are also looking for someone who is able to meet once a week to go over upcoming content and plan monthly content calendars, along with so...
Thank you for this list! I research communication within EA, and a large factor in ineffective communication is the neglect of emotive responses when discussing these topics (I call the sum of these emotive responses the 'lean out effect').
All of these readings are super useful for my research, so thank you so much for striving to metabolize your experience into something that can help other community members. Feel free to reach out and schedule a 1-on-1, and thank you again!
In line with Khorton's answer: life coaching.
Especially from someone within Effective Altruism who understands one's long-term goals, having an individual to sit down with each week and talk through any and all issues is well worth the cost. When coupled with journaling, the issues that would otherwise impact one's EA work can be greatly reduced.
Even issues that seem small or insignificant on the surface can, if they recur, be extremely taxing on one's sustainable EA motivation. The life coaching widely accessed within EA needs to extend ...
Hi Max,
This may be a silly question, but I have been thinking quite a bit about exactly the points you brought up in your piece above, since a recent expansion of my EA communication organization has required me to take on my own assistant.
I am curious: would you recommend a part-time assistant for other people involved in EA who recognize that their work could become significantly more effective with one? I am thinking of how often I see responses like "So sorry for the long response, I want to prioritize this and ...
Hey! I am interested in EA communication and idea rendering. At Cornell, I have been brainstorming some work with a Zettelkasten-style knowledge system, using software that visually plots and generates connections between ideas and notes.
Perhaps the system I have started to design could be useful for this. Feel free to reach out if you're interested! I'll DM you my email!
Hi! Congratulations on the new position!
I was curious: Metaculus and similar sites have been really useful recently for many EAs. Would there be any way to create a similar system here on the EA Forum to do crowdsourced elicitation of possible ethical outcomes? It might supercharge the Forum's ability to make rational comments!
A wonderful piece, Avital. I am eager to write a full response after finals subside, but until then I just wanted to say that it is fantastic. It gave context and responses to several major claims that I have been seeking more clarity on for quite a while. Without a doubt, it is where I'll be sending folks who are curious about defenses of longtermism from now on. Brava!
Hi! This is a super interesting idea and I am incredibly grateful you're doing it!
I am a science communicator for existential risk and effective altruism, and my audience is made up mostly of undergraduates who are interested in EA and x-risk. I would absolutely love to let my audience know about your internship board once it is live, and to help you all out in any way I can to get information out there about it!
Furthermore, I would be super happy to have the heads of your project on the podcast for a brief chat about the obstacles young people face in getting into the field, for upcoming videos on the topic!
Very interesting idea! I may be preaching to the choir, but a physical space not only to work in but also to gather socially (officially and otherwise) is a key feature of robust community building, so I am super eager to see how it turns out!
I am a student at Cornell up in Ithaca but travel to NYC once or twice a month, and I would love to be able to stop by somewhere where a ton of EAs are working and have a good chat over a cup of coffee! I have had the chance to meet with folks from other New York State ...
Very, very fair point, Sawyer! There's a lot left to be desired in existing AI risk communications, especially to the public and policymakers, so any refinements are very welcome in my book. Great post!