77 karma · Joined Jan 2022




Hi, it would be good if you could elaborate on the point of malaria being an environmental (rather than a vaccine) issue, as your title suggests, perhaps by summarising some relevant research. I have absolutely zero knowledge of this, but I'm not sure whether cleanliness is the issue, or something more specific like stagnant water. Some cited references/sources would also be useful for further reading. :)

I only skimmed through, so you might have mentioned this point already.

It seemed to me that, rather than understanding religious ideologies better in themselves, your intention is ultimately to answer questions that help with cause prioritisation, like:

  • what beings matter
  • which values matter (e.g. suffering, happiness, QALYs)

These are questions that some EAs are already working on.

Religious ideologies may help to ask other relevant questions and provide some plausible hypotheses, but valid answers would still need to be based on empirical evidence.

I think that's a much more tractable endeavour than arguing, from a theological angle, about which religion/belief is correct as an aim in itself.

Answer by Axby · Jun 24, 2023

You may want to check out: https://effectivethesis.org/thesis-topics/

You will need to scope it down to something more manageable for 4k words. You may also be able to apply for coaching with them. All the best!

Hi! I was wondering if you have any advice/thoughts on the value of a Security Studies master's (e.g. Georgetown's) for a foreign national who is interested in working on US policy (such as through a think tank)?

An initial thought is that since "security" positions are usually subject to security clearance requirements, a foreign national would find it difficult to approach X-risk-related US policy from a security policy angle. Instead, we would need to approach US policy from a tech policy or health policy angle. A security studies degree, as opposed to an IR/MPP, might make applying for such positions more challenging.

Answer by Axby · May 14, 2023

My impression is that organisations working on S-risks (what you seem to be concerned about) tend to focus on preventing suffering of digital beings/minds (e.g. CLR). If you agree with this focus, technical computer science knowledge of how digital beings might come to "suffer" seems useful; philosophical understanding of what constitutes suffering would probably be useful too. 

I would like to caveat that regardless of the cause you are interested in, almost all organisations need some expertise in strategy, management, operations, comms, etc. If your interests lie in such topics, you might do better studying them. You might also want to consider taking advice from 80,000 Hours!

Hi Geoffrey, thanks for the question! I have amended the post to respond to your question based on what I got from the CSF staff :)

My personal view is that it is worth spending some more time to think about cause prioritisation, while at the same time building up career capital (see: https://80000hours.org/articles/career-capital/) that is robustly useful across a range of likely options. One way to narrow down your option set is to think about the type of work which you prefer: policy vs. research vs. operational vs. technical work. If you prefer policy/strategy/managerial kinds of work, a graduate degree in AI or medicine may not be required.

You might be able to get some work experience in one such type of work with your history degree, while thinking about cause prioritisation on the side. If possible, you may also want to try to find a job that is medicine/public health or AI-related, so you can also build up some knowledge of these fields while assessing your own personal interest. Grad school is one of the most useful ways to make a "career switch", so I think it would be wise not to rush the decision.

(unless you think AGI would be developed in the next 2 years or something and you should therefore aim for impact straightaway)

Answer by Axby · Apr 29, 2023

  • Circulating meeting materials 1-3 days prior, to facilitate deeper discussions
  • Being very clear about "what is the ask" at the start and end of every email/paper/deck of slides. Is it just for awareness, or is the recipient supposed to discuss something proposed, give suggestions, or endorse it?
  • Identifying actionable follow-ups from the notes taken at every meeting/conference/seminar/paper
Answer by Axby · Apr 24, 2023

Great question! I recently wrote a (draft) post that kind of answers this, basically arguing that even though an AGI, once developed, would converge on some instrumental goals, it would likely face powerful competing motivations against disempowering humanity or causing extinction. Note that my argument only applies to AGI 'misalignment/accident' risks, and doesn't rule out AGI misuse risks.

Would love to hear your view on the post!
