Darius_Meissner

Having been involved with EA since 2015, I took the GWWC pledge in 2016 and founded and led a German EA student group in 2016/17.

In 2017, I began a BA degree in Philosophy, Politics and Economics at the University of Oxford, where I became actively involved in the EA Oxford university group, which I led as Co-President for two years.

I have completed EA community building internships with EAF (2017), CEA (2018) and Charity Entrepreneurship (2019).

Between 2019 and 2021, William MacAskill, James Aung, Richard Yetter Chappell and I created Utilitarianism.net, an introductory online textbook on utilitarianism.

I am currently studying for a master's degree in security studies at Georgetown University in Washington, DC.


Comments

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

Website traffic was initially low (21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks and rise in the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net.

Towards a Weaker Longtermism

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment.

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems highly implausible that something like such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.

It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.

Towards a Weaker Longtermism

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes.  I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.
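The additive-versus-multiplicative distinction in that TL;DR can be sketched with a toy budget allocation (all impact curves and numbers hypothetical, chosen only to illustrate the shape of the argument):

```python
# Toy comparison of additive vs multiplicative cause interaction
# (hypothetical impact curves, not real cost-effectiveness data).

def additive_total(dollars_a, dollars_b):
    # Additive case: impacts simply sum. If cause A yields more per
    # dollar, the whole budget should go to A.
    return 2.0 * dollars_a + 1.0 * dollars_b

def multiplicative_total(dollars_a, dollars_b):
    # Multiplicative case with diminishing returns (sqrt curves):
    # each cause amplifies the other, so spreading effort wins.
    return (dollars_a ** 0.5) * (dollars_b ** 0.5)

budget = 100
print(additive_total(budget, 0))     # all-in on A maximizes the sum
print(multiplicative_total(50, 50))  # even split: ~50
print(multiplicative_total(90, 10))  # lopsided split: only ~30
```

Under the multiplicative model, the even split beats the lopsided one, matching the essay's conclusion that multiplicative interactions favor supporting multiple causes.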
 

Towards a Weaker Longtermism

Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).

Towards a Weaker Longtermism

No, we probably don't. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).

Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:

Occasionally there are even claims [among effective altruists] to the effect that "shaping the far future is 10^30 times more important than working on present-day issues," based on a naive comparison of the number of lives that exist now to the number that might exist in the future.

I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.

It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (…) Note that these are arguments about ex ante expected value, not necessarily actual impact. (…) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.
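Tomasik's point can be illustrated with a toy expected-value calculation (all numbers hypothetical): even if one intervention's direct impact were 1000x another's, shared flow-through effects on the far future compress the ratio of total expected impact to something far below 10^30x.

```python
# Toy expected-value model (hypothetical numbers, per dollar donated).
direct_a = 1000.0   # intervention A: large direct near-term impact
direct_b = 1.0      # intervention B: modest direct near-term impact

# Both actions ripple into the far future. Assume A's flow-through
# effect is 10x B's -- a large difference, but not an astronomical one.
flow_a = 1e6
flow_b = 1e5

total_a = direct_a + flow_a
total_b = direct_b + flow_b
print(total_a / total_b)   # roughly 10x, nowhere near 10^30x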

Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:

When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.

[PR FAQ] Sharing readership data with Forum authors

Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.

Towards a Weaker Longtermism

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, the same points apply in this context that Brian Tomasik made in his essay "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness" (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/).

Writing about my job: Research Fellow, FHI

I really appreciated the many useful links you included in this post and would like to encourage others to strive to do the same when writing EA Forum articles.

How to reach out to orgs en masse?

Happy to have you here, Linda! It sounds like you have some really important skills to offer, and I hope you find great opportunities to apply them.

AMA: The new Open Philanthropy Technology Policy Fellowship

The listed application documents include a "Short essay (≤500 words)" without further details. Can you say more about what this entails and what you are looking for?
