I have been involved with EA since 2015; I took the GWWC pledge in 2016 and founded and led a German EA student group in 2016/17.

In 2017, I began a BA degree in Philosophy, Politics and Economics at the University of Oxford, where I became actively involved in the EA Oxford university group, which I led as Co-President for two years.

I have completed EA community building internships with EAF (2017), CEA (2018) and Charity Entrepreneurship (2019).

Between 2019 and 2021, William MacAskill, James Aung, Richard Yetter Chappell and I created Utilitarianism.net, an introductory online textbook on utilitarianism.

I am currently studying for a master's degree in security studies at Georgetown University in Washington, DC.

How can we make Our World in Data more useful to the EA community?

Create a page on biological weapons. This could include, for instance,

  1. An overview of offensive BW programs over time (when they were started, stopped, funding, staffing, etc.; perhaps with a separate section on the Soviet BW program)
  2. An overview of different international treaties relating to BW, including timelines and membership over time (i.e., the Geneva Protocol, the Biological Weapons Convention (BWC), Australia Group, UN Security Council Resolution 1540)
  3. Submissions of Confidence-Building Measures in the BWC over time (including as a percentage of the # of BWC States Parties and split in publicly-accessible and restricted-access) 
  4. A graph that visually compares the funding and # of staff in international organizations for the bioweapons regime compared to chemical and nuclear weapons (e.g., the BWC Implementation Support Unit compared to the OPCW for chemical and the IAEA and CTBTO PrepCom for nuclear)
  5. (Perhaps include an overview on the global proliferation of high-biosafety labs, e.g. see Global Biolabs)
  6. (Perhaps include a section on how technological advancements may affect the BW threat, e.g., include a graph on the Carlson curve (Moore's law but for DNA sequencing))
One-year masters degrees related to biosecurity?

For many people interested in but not yet fully committed to biosecurity, it may make more sense to choose a more general master's program in international affairs/security and then concentrate on biosecurity/biodefense to the extent possible within their program.

Some of the best master's programs to consider to this end:

  1. Georgetown University: MA in Security Studies (Washington, DC; 2 years) 
  2. Johns Hopkins University: MA in International Relations (Washington, DC; 2 years)
  3. Stanford University: Master's in International Policy (2 years)
  4. King's College London: variety of master's programs in the War Studies Department (London) (1 year)
  5. Sciences Po: Master in International Security (Paris; 2 years; can be combined with the KCL degree as a dual degree)
  6. ETH Zurich: MSc program in Science, Technology and Policy (Zurich)

(Note that some of these may offer little room to focus on biosecurity specifically, though they may offer other useful courses, e.g. on AI, other emerging technologies, and great power conflict)

One-year masters degrees related to biosecurity?

Georgetown University offers a 2-semester MSc in "Biohazardous Threat Agents & Emerging Infectious Diseases". Course description from the website: "a one year program designed to provide students with a solid foundation in the concepts of biological risk, disease threat, and mitigation strategies. The curriculum covers classic biological threat agents, global health security, emerging diseases, technologies, CBRN risk mitigation, and CBRN security."

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

Website traffic was initially low (i.e. 21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (i.e. 40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks and rise up the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net.

Towards a Weaker Longtermism

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment.

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems highly implausible that such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.

It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.

Towards a Weaker Longtermism

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes.  I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.
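The additive-versus-multiplicative point can be made concrete with a toy calculation. The value functions and numbers below are purely illustrative assumptions (not taken from the essay): they just show that under an additive model the optimal budget split is all-in on the more cost-effective cause, while under a multiplicative model the optimum is an even split.

```python
# Toy comparison of budget allocation between two causes, X and Y,
# under additive vs. multiplicative impact models.
# All functions and numbers are illustrative assumptions.

def additive_value(x, y, a=3.0, b=1.0):
    # Impacts simply add; a > b means cause X is more
    # cost-effective per dollar than cause Y.
    return a * x + b * y

def multiplicative_value(x, y):
    # Impacts multiply: each cause amplifies the other.
    return x * y

def best_split(value, budget=100):
    # Brute-force search over integer splits of the budget;
    # returns (total value, amount given to cause X).
    return max((value(x, budget - x), x) for x in range(budget + 1))

_, x_additive = best_split(additive_value)        # optimal: everything to X
_, x_multiplicative = best_split(multiplicative_value)  # optimal: even split
print(x_additive, x_multiplicative)  # 100 50
```

Under the additive model, every marginal dollar does more in cause X, so the search puts the whole budget there; under the multiplicative model, concentrating the budget drives the product toward zero, so the even split wins.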

Towards a Weaker Longtermism

Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).

Towards a Weaker Longtermism

No, we probably don't. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).

Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:

Occasionally there are even claims [among effective altruists] to the effect that "shaping the far future is 10^30 times more important than working on present-day issues," based on a naive comparison of the number of lives that exist now to the number that might exist in the future.

I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.

It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (…) Note that these are arguments about ex ante expected value, not necessarily actual impact. (…) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.

Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:

When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.

[PR FAQ] Sharing readership data with Forum authors

Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.
