All of Anjay F's Comments + Replies

Hello! I'm here because of my interest in moral philosophy and global priorities research. I'd be curious to read a history of bioethics and its impact on research, if anyone is aware of one.

3
Elika
1y
Welcome! I have a bioethics textbook I can give you :)
1
tjohnson
2y
Hi, Anjay! I think this page might be of use to you regarding the history of bioethics. When you say you're after papers on the impact of bioethics on research, do you mean its impact on scientific research, the kinds of bioethics research that are out there, or how research is conducted through, e.g., policy guidance?

Thanks for writing this! It highlights a premise in EA ("some ways of doing good are much better than others") that a lot of people (myself included) accept without very careful consideration, so I'm glad it's been written.

Having said that, I am not sure that I believe this more generally because of the reasoning that you give: “well if it’s true even there [in global health], where we can measure carefully, it’s probably more true in the general case”. I think this is part of my belief, but the other part is that just d... (read more)

Thank you for sharing your experience, Andy! I am truly sorry for your loss. I thought this was a really well-written post, and I really appreciate your reference to signs and connecting the dots. Framing a career change in these terms is not often done in the EA community, but it feels more real and accurate, and therefore relatable.

Thanks for writing this and sharing your reflections! One additional demographic that EA VP might be able to do more to reach is older, mid-career and late-career professionals.

5
Elika
2y
You are super right and my hunch is that bringing in older professionals is extremely valuable from an impact perspective. 

Hi Tom! Thanks for writing this post. Just curious: would you consider donating to cost-effective climate charities (e.g. ones recommended by Effective Environmentalism)? It seems like that could look better from an optics point of view and fit better with longtermism, depending on your views.

As someone who often feels overwhelmed by all there is to learn in Effective Altruism (and outside of EA), I appreciate this post!

This makes a lot of sense, and thanks for sharing that post! It's certainly true that my role is to help individuals, and as such it's important to recognize their individuality and other priorities.

I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines' response discusses, but maybe these problems are urgent and require direct work soon, in which case I can see what you are saying about the high levels of specialization.

4
Misha_Yagudin
2y
Agree; moving into "EA-approved" direct work later in your career while initially doing skill- or network-building is also a good option for some. I would actually think that if someone can achieve a lot in a conventional career, e.g., attaining some local prominence (either as a goal in itself or as preparation to move into a more directly "EA" role), that's great. My thinking here was especially influenced by an article about the neoliberalism community. (The urgency of some problems, most prominently AI risk, might indeed be a decisive factor under some worldviews held in the community. I guess most people should plan their careers as makes the most sense to them under their own worldviews, but I can imagine changing my mind here. I should acknowledge that I think short timelines and existential risk concerns are "psychoactive," and people should be carefully exposed to them to avoid various failure modes.)

Hi Misha, thanks for your answer. I was wondering why you believe the top EA cause areas are not capable of utilizing people with a wide range of backgrounds and preferences. It seems to me that many of the top causes require various backgrounds. For example, reducing existential risk seems to require people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc.

So let's be more specific: current existential risk reduction focuses primarily on AI risk and biosecurity. Contributing to these fields requires quite a bit of specialization and high levels of interest in AI or biotechnology — this is the first filter. Let's look at the hypothetical positions DeepMind can hire for: they can absorb a lot of research scientists, some policy/strategy specialists, and a few general writers/communication specialists. DM probably doesn't hire many, if any, people majoring in business and management, nursing, education, criminal ju... (read more)

Answer by Anjay F · Dec 28, 2021 · 10

I believe this primarily due to the arguments in So Good They Can't Ignore You by Cal Newport, which suggest that applying skills we excel at is what leads to enjoyable work, as opposed to a passion for a specific job or cause. But it's also because I think community and purpose are super important for happiness, and most top EA causes seem to provide both.

Thanks for writing this. I really like the idea. One thought is that this would be a great activity for local EA groups to do, and maybe an organizer with a particularly nice voice could lead it. At the group I help organize at Vanderbilt, there seems to be a lot of desire for activities that focus more on the altruism and feeling behind EA.

Thanks for writing this, Ashley! I really think this is important. 

An idea I had is to have a series of weekend workshops that combine the content from the readings with exercises and opportunities for discussion. Maybe this could be split into three parts (e.g.: I. The EA Mindset, II. Longtermism, III. EA in the World / Putting It into Practice).

If a workshop were hosted each weekend, this might give students the ability to attend when they're available and move at their own pace. It could also allow for deeper engagement by having a full day of thinking about t... (read more)

6
ashleylin
2y
Thanks Anjay! I think this idea seems promising and definitely worth trying. Some potential pitfalls I’d probably want to design around:
* Students find it difficult to commit to a thing for three consecutive weekends. (not sure how to fix this)
* Students are super hyped during the 3-week period, and quickly lose interest after workshops end. Helping students set post-workshop goals/commitments, connecting them to peers and mentors for follow-up 1-on-1s, plugging them into projects, etc. could address this.
* Students forget what happens in between the weeks. I think your idea of mid-week discussions and socials could be helpful here, as was Chana's suggestion for a “crash course” review.
1
Miranda_Zhang
2y
Ooh I like this a lot! Almost like blending the IF discussion with workshops. I'm going to edit my long comment to include this as an alternative about which I'm excited.
2
ChanaMessinger
2y
I had this idea too but worried that you'd need to have gone to previous ones to engage in the later ones. Maybe there's an optional crash course intro in each one? (Doesn't make me super excited, but maybe)

Based on this Choose-a-Provider page, there seem to be a few cheaper day 2 tests (less than £10). This one costs £1.99 but is in Park Royal, which is an hour away by public transport, and this one is in Battersea, London, which is 45 minutes away by public transport. It seems like they get booked up fast, though, and have less support than the Randox one.

1
Charles He
2y
Just so you know, these £10 tests are basically not real prices; they're part of a bait and switch. See: https://www.bbc.com/news/business-58300897 Basically, the government made a website, but management/governance is hard, so the website has been gamed and spammed to the degree that everything is noise. As another datapoint, there's no way these prices are economical. BBC press coverage seems to have improved things, but it's not really resolved.

A (possibly wrong) sense I have about being an elected politician is that, because you are beholden to your constituents, it may be difficult to act independently and support the policies that have the best consequences for society (as these may conflict with either your constituents' perceptions or their immediate interests). Did you find that this was true, or were there examples of this?

Another related question regards representing future generations. I feel like a democratic process encourages short-term policies for various reasons like constituents' i... (read more)

A (possibly wrong) sense I have about being an elected politician is that, because you are beholden to your constituents, it may be difficult to act independently and support the policies that have the best consequences for society (as these may conflict with either your constituents' perceptions or their immediate interests). Did you find that this was true, or were there examples of this?

Yes, 100%. This is one of the areas where believing EA things directly conflicts with holding elected office: you value all lives and experiences equally, but you're suppose... (read more)

Re 1. That makes a lot of sense now. My intuition still leans towards trajectory change interacting with XRR, for the reason that maybe the best ways to reduce x-risks that appear after 500+ years are to focus on changing the trajectory of humanity (e.g. stronger institutions, cultural shifts, etc.). But I do think that your model is valuable for illustrating the intuition you mentioned: that it seems easier to create a positive future via XRR than via trajectory change that aims to increase quality.

Re 2, 3. I think that is reasonable; maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.

Hey Alex, really interesting post! To have a go at your last question: my intuition is that the spillover effects of GPR on increasing the probability of a good future cannot be neglected. I suppose my view differs in that, where you define "patient longtermist work" as GPR and distinct from XRR, I don't see that it has to be. For example, I might believe that XRR is the more impactful cause in the long run but think that I should wait a couple hundred years before putting my resources towards it. Or we should figure out if we are living... (read more)

2
Alex HT
4y
1. I think I've conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years' time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (e.g. if there were a lock-in event coming soon), and XRR could be patient. (Side note: there are so many possible longtermist strategies! Any combination of (Patient, Urgent) × (Broad, Narrow) × (Trajectory Change, XRR) is a distinct strategy, giving 2 × 2 × 2 = 8 in total. This is interesting, as people often conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR, but there are actually at least six other strategies.)
2. This model completely neglects meta strategic work along the lines of 'are we at the hinge of history?' and 'should we work on XRR or something else?'. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out either as increasing the probability of technological maturity or as improving the quality of the future, so I'm not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
3. I had s-risks in mind when I caveated it as 'safely' reaching technological maturity, and was including s-risk reduction in XRR. But I'm not sure that's the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality is large and negative. So it seems that s-risks are more like 'quality increasing' than 'probability increasing'. The argument for them being 'probability increasing' is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience).