Thanks for writing this! I'm glad it exists: it highlights a premise in EA ("some ways of doing good are much better than others") that a lot of people (myself included) accept without very careful consideration.
Having said that, I'm not sure I believe this more generally because of the reasoning you give: “well if it’s true even there [in global health] where we can measure carefully, it’s probably more true in the general case”. I think this is part of my belief, but the other part is that just d...
Thank you for sharing your experience, Andy! I am truly sorry for your loss. I thought this was a really well-written post, and I really appreciate your reference to signs and connecting the dots. Framing a career change in these terms is not often done in the EA community, but it feels more real and accurate, and therefore relatable.
Thanks for writing this and sharing your reflections! One additional demographic that EA VP might be able to do more to reach is older, mid-career or late-career professionals.
As someone who often feels overwhelmed by all there is to learn in Effective Altruism (and outside of EA), I appreciate this post!
Thanks Ines for this thoughtful answer! It makes me want to emphasize the aptitude-building approach more at my group.
This makes a lot of sense and thanks for sharing that post! It's certainly true that my role is to help individuals and as such it's important to recognize their individuality and other priorities.
I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines' response discusses. But maybe these problems are urgent and require direct work soon, in which case I can see your point about the high levels of specialization.
Hi Misha. Thanks for your answer. I was wondering why you believe top EA cause areas can't make use of people with a wide range of backgrounds and preferences. It seems to me like many of the top causes require various backgrounds. For example, reducing existential risk seems to require people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc.
So let's be more specific: current existential risk reduction focuses primarily on AI risk and biosecurity. Contributing to these fields requires quite a bit of specialization and a high level of interest in AI or biotechnology; this is the first filter. Let's look at hypothetical positions DeepMind can hire for: they can absorb a lot of research scientists, some policy/strategy specialists, and a few general writers/communication specialists. DM probably doesn't hire many, if any, people majoring in business and management, nursing, education, criminal ju...
This is a good option. I hadn't really considered this. And I agree that we definitely shouldn't try to deceive anyone.
I believe this primarily because of arguments in So Good They Can't Ignore You by Cal Newport, which suggest that applying skills we excel at is what makes work enjoyable, rather than a passion for a specific job or cause. But I also think community and purpose are super important for happiness, and most top EA causes seem to provide both.
Thanks for writing this. I really like the idea. One thought is that this is a great activity for local EA groups to do and maybe an organizer with a particularly nice voice can lead it. At the group I help organize at Vanderbilt, there seems to be a lot of desire for activities that focus more on the altruism and feeling behind EA.
Thanks for writing this, Ashley! I really think this is important.
An idea I had is to have a series of weekend workshops that combine the content from the readings with exercises and opportunities for discussion. Maybe this could be split into three parts (e.g., I. The EA Mindset, II. Longtermism, III. EA in the World / Putting It into Practice).
If a workshop were hosted each weekend, this might give students the ability to attend when they are available and go at their own pace. It could also allow for deeper engagement by having a full day of thinking about t...
Based on this Choose-a-Provider page, there seem to be a few cheaper day 2 tests (less than £10). This one costs £1.99 but is in Park Royal, which is an hour away by public transport, or this one is in Battersea, London, and is 45 minutes away by public transport. It seems like they get booked up fast, though, and offer less support than the Randox one.
A (possibly wrong) sense I have about being an elected politician is that, because you are beholden to your constituents, it may be difficult to act independently and support the policies that have the best consequences for society (as these may conflict with either your constituents' perceptions or their immediate interests). Did you find that this was true, or were there examples of this?
Another related question regards representing future generations. I feel like a democratic process encourages short-term policies for various reasons, like constituents' i...
Yes, 100%. This is one of the areas where believing EA things directly conflicts with holding elected office: you value all lives and experiences equally, but you're suppose...
Re 1. That makes a lot of sense now. My intuition still leans towards trajectory change interacting with XRR, for the reason that maybe the best way to reduce x-risks that arise 500+ years from now is to focus on changing the trajectory of humanity (e.g., stronger institutions, cultural shifts, etc.). But I do think your model is valuable for illustrating the intuition you mentioned: that it seems easier to create a positive future via XRR than via trajectory change that aims to increase quality.
Re 2, 3. I think that is reasonable, and maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.
Hey Alex. Really interesting post! To have a go at your last question, my intuition is that the spillover effects of GPR on increasing the probability of the future cannot be neglected. I suppose my view differs in that where you define "patient longtermist work" as GPR and distinct from XRR, I don't see that it has to be. For example, I may believe that XRR is the more impactful cause in the long run, but just believe that I should wait a couple hundred years before putting my resources towards this. Or we should figure out if we are living...
Hello! I'm here because of my interest in moral philosophy and global priorities research. If anyone is aware of one, I'd be curious to read a history of bioethics and its impact on research.