Epistemic status: Just a thought that I have, nothing too rigorous
The reason longtermism is so enticing (to me at least) is that the existence of so many future lives hangs in the balance right now. Bringing 10^52 people (or whatever the real number turns out to be) into existence just seems like a very good deed to me.
This hinges on the belief that utility scales linearly with the number of QALYs, so that twice as many people are twice as morally valuable. My belief in this was recently shaken by the following thought experiment:
You are a traveling EA on a trip to St. Petersburg. In a dark alley, you meet a Demon with the ability to create Universes and a serious gambling addiction....
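The tension can be made concrete with a quick calculation. Below is a minimal sketch, assuming a specific form of the demon's wager (the details here, including the probability `p`, are my own illustrative assumptions, not from the post): at each round the demon offers to double the universe's population with probability `p`, or destroy it otherwise. Under linear utility, each offer has positive expected value, so expected value maximization says to keep gambling forever, even as the chance of keeping any universe at all goes to zero.

```python
# Illustrative St. Petersburg-style wager (hypothetical parameters):
# each accepted round doubles the population with probability p,
# or destroys the universe with probability 1 - p.

def expected_population(rounds: int, p: float = 0.51) -> float:
    """Expected population (in units of the starting population)
    after accepting `rounds` consecutive double-or-nothing offers.
    Each accepted round multiplies the expectation by 2 * p."""
    return (2 * p) ** rounds

def survival_probability(rounds: int, p: float = 0.51) -> float:
    """Probability that the universe still exists after `rounds` offers."""
    return p ** rounds

# With p = 0.51, expected population grows without bound,
# while the probability of any universe surviving shrinks to zero.
for n in (1, 10, 100):
    print(n, expected_population(n), survival_probability(n))
```

The point of the example: linear utility in population makes "always take the bet" the expected-value-maximizing policy, which is exactly the intuition the thought experiment is meant to strain.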
Most of us have a problem with motivation. Effective Altruism suggests that we can do substantial good by acting altruistically. Yet it is hard for an individual to motivate themselves to act as well as reason requires.
A normative value judgement about how best to act is a separate matter from the psychological question of motivation. Effective Altruism answers the normative question of what to do, but has perhaps given less attention to motivation.
Effective Altruism reasons from a perspective that values sentient welfare impartially and suggests that it is normatively good for individuals to make substantial donations to effective charities, be vegan, and direct their careers towards the common good. We believe these actions are normatively good from the point of view of the...
UCLA EA ran an AI timelines retreat for community members interested in pursuing AI safety as a career. Attendees sought to form inside views on the future of AI based on an object-level analysis of current AI capabilities.
We highly recommend that other university groups hold similar small (fewer than 15 person) object-level retreats. We tentatively recommend other organizers hold AI timelines retreats specifically, with caveats discussed below.
Most people in the world do not take AI risk seriously. On the other hand, some prominent members of our community believe we have virtually no chance of surviving this century due to misaligned AI. These are wild-seeming takes with massive implications. We think that assessing AI risk should be a serious and thoughtful endeavor. We sought...
The waste of human resources caused by poor selection procedures should pain the professional conscience of I–O psychologists.
Short Summary: I think that EA organizations can do hiring a lot better. If you are involved in hiring, you should do these three things: avoid unstructured interviews, train your interviewers to do structured interviews, and have a clear idea of what your criteria are before you start to review and filter applicants. Feel free to jump to the “Easy steps you should implement” section if you don’t want to go through all the details.
If we view a hiring process as an attempt to predict which applicant will be successful in a role, then we want to be able to predict this as accurately as we can. That is what this...
This year I’ve started using three remote personal/executive assistants for my work projects. They have been awesome and super useful, so I thought I would write a guide to help others get started with remote assistants.
If working with a remote assistant doesn’t work out for you, I think you’ll lose around £300 and 12 hours of your time over one month. But if it works well, you have a lot to gain: I estimate my assistants save me around 20-30 hours a month.
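Using the post's own figures (£300 and 12 hours lost if the trial fails, 20-30 hours saved per month if it works), a back-of-the-envelope break-even check looks like this; the annualization is my own framing, not the author's:

```python
# Rough cost/benefit check using the estimates from the post:
# downside of a failed one-month trial vs. annual upside if it works.

def net_hours_saved_per_year(monthly_hours_saved: float,
                             setup_hours: float = 12) -> float:
    """Hours saved over a year, net of the one-off setup/trial time."""
    return monthly_hours_saved * 12 - setup_hours

print(net_hours_saved_per_year(20))  # low estimate: 228 hours/year
print(net_hours_saved_per_year(30))  # high estimate: 348 hours/year
```

Even at the low end, the annual time saved dwarfs the 12-hour downside of a failed trial, which is the post's implicit argument for just trying it.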
I have previously encountered EAs whose beliefs about EA communication seem jaded to me. These beliefs are either “Trying to make EA seem less weird is an unimportant distraction, and we shouldn’t concern ourselves with it” or “Sounding weird is an inherent property of EA/EA cause areas, and making it seem less weird is not tractable, or at least not without compromising important aspects of the movement.” I would like to challenge both of these views.
As Peter Wildeford explains in this LessWrong post: