Having been involved with EA since 2015, I took the GWWC pledge in 2016 and founded and led a German EA student group in 2016/17.

In 2017, I began a BA degree in Philosophy, Politics and Economics at the University of Oxford, where I got actively involved in the EA Oxford university group, which I led as Co-President for two years.

In the past, I have completed EA community building internships with EAF (2017), CEA (2018) and Charity Entrepreneurship (2019).

In 2019/2020, William MacAskill, James Aung and I created Utilitarianism.net, an introductory online textbook on utilitarianism.

I went vegan and got involved in the animal rights community in 2013, before encountering EA. Since then I have come to accept longtermism and consequently changed my cause area focus to prioritise work on existential risks and trajectory changes to positively impact the lives of future generations.

For my detailed EA origin story, see: https://forum.effectivealtruism.org/posts/FA794RppcqrNcEgTC/why-are-you-here-an-origin-stories-thread#YGfD5dmhSEJe5CPHT


Some concerns with classical utilitarianism

Hi Nil, thanks for linking to utilitarianism.net. Unfortunately, the website is temporarily unavailable under the .net domain due to a technical problem.  You can, however, still access the full website via this link: https://utilitarianism.squarespace.com/

Some thoughts on EA outreach to high schoolers

Brief meta comment: I would generally recommend being very cautious about (and mostly avoiding) language like "converting" others to EA, as in your sentence "Younger people might be easier to convert (...)". This type of language seems fairly easy to avoid, while using it may make many people feel uncomfortable and even pose reputational risks for the community.

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism

Thank you for your comment!

There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. (...) I'd prefer you to disambiguate between versions of utilitarianism which aggregate over humans, and those who aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)?

My impression is that the major utilitarian academics were rather united in extending equal moral consideration to non-human animals (in line with technicalities' comment). I'm not aware of any influential attempts to promote a version of utilitarianism that explicitly does not include the wellbeing of non-human animals (though, for example, a preference utilitarian may give different weight to some non-human animals than a hedonistic utilitarian would). In the future, I hope we'll be able to add more content to the website on the link between utilitarianism and anti-speciesism, with the intention of bridging the inferential distance to which you rightly point.

Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you're linking it to effective altruism websites, or use effective altruism examples?

In the section on effective altruism on the website, we already explicitly disambiguate between EA and utilitarianism. I don't currently see the need to e.g. add a disclaimer when we link to GiveWell's website on Utilitarianism.net, but we do include disclaimers when we link to one of the organisations co-founded by Will (e.g. "Note that Professor William MacAskill, coauthor of this website, is a cofounder of 80,000 Hours.")

Also, what's up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?

We hope to produce a longer article on how the Veil of Ignorance argument relates to utilitarianism at some point. We currently include a footnote on the website, saying that "This [Veil of Ignorance] argument was originally proposed by Harsanyi, though nowadays it is more often associated with John Rawls, who arrived at a different conclusion." For what it's worth, Harsanyi's version of the argument seems more plausible than Rawls' version. Will commented on this matter in his first appearance on the 80,000 Hours Podcast, saying that "I do think he [Rawls] was mistaken. I think that Rawls’s Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it’s a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit."

The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don't, what's the point?).

Historically, one of the major criticisms of utilitarianism was that it supposedly required us to calculate the expected consequences of our actions all the time, which would indeed be impractical. However, this is not true, since it conflates using utilitarianism as a decision procedure and as a criterion of rightness. The section on multi-level utilitarianism aims to clarify this point. Of course, multi-level utilitarianism does still permit attempting to calculate the expected consequences of one's actions in certain situations, but it makes it clear that doing so all the time is not necessary.

For more information on this topic, I recommend Amanda Askell's EA Forum post "Act utilitarianism: criterion of rightness vs. decision procedure".

We should choose between moral theories based on the scale of the problem

I like the general thrust of your argument and would like to point out that within moral philosophy there is already an (in my view) satisfactory way to incorporate judgements associated with deontology and virtue ethics within a utilitarian framework—by going from “single-level utilitarianism” to “multi-level utilitarianism“:

I'm currently writing a text on this topic and will copy an excerpt here:

"Utilitarians believe that their moral theory is the appropriate standard of moral rightness, in that it specifies what makes an act (or rule, policy, etc) right or wrong. However, as Henry Sidgwick noted, “it is not necessary that the end which gives the criterion of rightness should always be the end at which we consciously aim”.

Most, if not all, utilitarians discourage the use of utilitarianism as a decision procedure to guide all their everyday actions. Using utilitarianism as a decision procedure means always calculating the expected consequences of our day-to-day actions in an attempt to deliberately try to promote overall wellbeing. For example, we might pick what breakfast cereal to buy at the grocery store by trying to determine which one best contributes to overall wellbeing. To try and do so would be to follow single-level utilitarianism, which treats the utilitarian theory as both a standard of moral rightness and a decision procedure. But using such a decision procedure for all our decisions is a bad and fruitless idea, which explains why almost no one ever defended it. Jeremy Bentham rejected it, writing that “it is not to be expected that this process [of calculating expected consequences] should be strictly pursued previously to every moral judgment.” Deliberately calculating the expected consequences of our actions is error-prone and takes a lot of time. Thus, we have reason to think that following single-level utilitarianism would itself not lead to the best consequences, which is why the theory is often criticized as “self-defeating”.

For these reasons, many advocates of utilitarianism have instead argued for multi-level utilitarianism, which is defined as follows:

Multi-level utilitarianism is the view that, in most situations, individuals should follow tried-and-tested heuristics rather than trying to calculate which action will produce the most wellbeing.

Multi-level utilitarianism implies that we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill etc.—knowing that this will lead to the best outcomes overall. To this end, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws will save time and usually lead to good outcomes, in part because they are based on society’s experience of what promotes individual wellbeing. The fact that honesty, integrity, keeping promises and sticking to the law have generally good consequences explains why in practice utilitarians value such things highly and use them to guide their everyday actions."

Keeping everyone motivated: a case for effective careers outside of the highest impact EA organizations

Thanks for writing this up! I really appreciated how you describe the problem of the competitive hiring landscape within the EA community, and especially that you connected this to a potentially increased risk of value drift for community members who grow frustrated after not being hired by their preferred employers within the community. I agree that this presents a major challenge for the EA community as a whole and would like to see more proposed solutions.

Having said all that, I also have two quibbles with your proposed solutions:

First, the EAs in academia who are in the best positions to be able to 'steer their fields' in the future are probably the ones who need this type of advice the least, because they would seem to be in the best position to be hired within the EA community. Of course, if they are in such a special position within their academic field, it might be more impactful for them to stay in academia (depending on their field) regardless of whether they could get a job at an EA org.

Second, I have found it difficult to understand from your two points about local EA groups what you wish they would change about their strategy. You advise them to work on "creating a nice and welcoming environment, where members want to come back to in regular intervals for years". However, this seems like standard local group advice to me that most (all?) local groups aspire to implement anyway. (Note that this advice anyway does not really apply to EA university groups, which by their very nature mostly attract students on a fairly short-term basis (~1-3 years).)

I would be interested in your specific recommendations for how local groups could achieve this goal of long-term member engagement. Thanks!

Ask Me Anything!

I'm surprised by how much low-hanging fruit is still left in editing Wikipedia to make more people aware of (and provide them with a more sophisticated understanding of) important ideas that are relevant to EA. I've been adding and improving Wikipedia content on the side for two years now, with a clear focus on articles that are related to altruism.

In my experience, editing Wikipedia is really i) easy and ii) fun; iii) there are many content gaps left to fill; and iv) it exposes the content you write to a much larger audience (sometimes several orders of magnitude larger) than writing for a private blog or the EA Forum would. Against this background, I'm surprised that more knowledgeable EAs don't contribute to Wikipedia (feel free to reach out to me if you would potentially like to do just that).

A word of caution: the quality control on Wikipedia is fairly strong and it is generally disliked if people make edits that come across as ideologically-motivated marketing rather than as useful information. For this reason, I aspire to genuinely improve the quality of the article with all the edits I make, though my choice of articles to edit is informed by my altruistic values.

A useful resource on this topic is Brian Tomasik's "The Value of Wikipedia Contributions in Social Sciences".

[I'm collaborating with Will on creating the content for utilitarianism.net, but this comment is written in my private capacity]

Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift

Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:

"Choose effective altruist endeavors that also grant you selfish benefits.  There are a number of standard human motivators.  Status, friends, mates, money, fame.  When these things are on the line work actually gets done.  Without these things it’s a lot harder.  If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you.  If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it.  If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want.  We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want.  Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized.  If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose."

Is value drift net-positive, net-negative, or neither?

How bad (or possibly good) value drift and lifestyle drift are will depend on your definition of these phenomena, as you acknowledge yourself. The way I conceptualise them in the EA Forum article I wrote on the topic ('Concrete Ways to Reduce Risks of Value Drift') makes them strongly net-negative. In the post I (briefly) make the case that reducing risks of value drift and lifestyle drift may be an altruistic top priority.

Here's how I think about the topic:

I use the terms value drift and lifestyle drift in a broad sense to mean internal or external changes that would lead you to lose most of the expected altruistic value of your life. Value drift is internal; it describes changes to your value system or motivation. Lifestyle drift is external; the term captures changes in your life circumstances leading to difficulties implementing your values. Internally, value drift could occur by ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or losing the motivation to work on the highest-priority cause areas or interventions (losing the ‘E’ in EA). Externally, lifestyle drift could occur (as described in Joey's post) by giving up a substantial fraction of your effectively altruistic resources for non-effectively altruistic purposes, thus reducing your capacity to do good. Concretely, this could involve deciding to spend a lot of money on buying a (larger) house, having a (fancier) wedding, traveling around the world (more frequently or expensively), etc. Quoting myself:

Of course, changing your cause area or intervention to something that is equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to decide to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top-priorities in life it might not constitute value drift. (...)
Most of the potential value of EAs lies in the mid- to long-term, when more and more people in the community take up highly effective career paths and build their professional expertise to reach their ‘peak productivity’ (likely in their 40s). If value drift is common, then many of the people currently active in the community will cease to be interested in doing the most good long before they reach this point. This is why, speaking for myself, losing my altruistic motivation in the future would equal a small moral tragedy to my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.
How to Get the Maximum Value Out of Effective Altruism Conferences

I'd guess it is common for people to underweight the expected value (EV) of attending EA Globals, because they focus on the predictable and easy-to-measure benefits of doing so. However, the EV of attending these conferences (according to my intuitive model) is dominated by 'Black Swan'-like benefits (i.e. low-probability, hard-to-predict, disproportionately-high-impact benefits). For this reason, it may be the case that even if (suppose) most EA Global attendees got little value out of the conference, there will likely be a few individuals reaping very large benefits that justify the whole event for everyone else.

These underappreciated benefits of attending EA Globals likely include: 1) starting a causal chain that will (eventually) result in a job or internship, 2) finding co-founders for highly valuable projects, 3) making new connections (or deepening existing ones) that will (eventually) provide you with substantial support (e.g. financial, advisory, emotional) or vice versa, 4) changing your mind about an empirical or philosophical crucial consideration that radically alters your priorities (e.g. by changing which cause area to focus on, or which interventions to prioritise).

To account for these potential Black Swan-like benefits when thinking about the opportunity cost of attending events such as EA Global, I deliberately attempt to follow the heuristic of asking myself: "Is this event more likely to give rise to Black Swan-like benefits compared to the best alternative use of my time?". I prioritise events that have 'Black Swan'-generating circumstances (e.g. meeting new people and organisations working on important topics, having opportunities to reflect on major life choices and philosophical beliefs, meeting smart and well-informed people who have major disagreements with my views).

Why are you here? An origin stories thread.

Given how incredibly positive the influence of EA on my own life has been, this post is a fantastic opportunity for me to say ‘thank you’. Thanks to all of you for your contributions to building such an awesome community around (the) ‘one thing you’ll never regret’ – altruism (I got this quote from Ben Todd). I have never before met a group of people this smart, caring and dedicated to improving the world, and I am deeply, deeply grateful that I can be a part of this.

I remember that elementary school was the first time I was confronted with other students believing in what they referred to as ‘GOD’. Having grown up in a secular family myself, I was at first confused by their belief, and then started debating them. This went on to the point where one day I screamed insults at the sky to prove that there was no one up there listening and no lightning would strike to pulverize me. My identity started to grow, and after reading the Wikipedia article on atheism in early middle school, ‘agnostic-atheist’ was the first of a number of ‘-isms’ that I added to my identity over the years (though, as I will describe, some of these ‘-isms’ were only temporary). Unsurprisingly, when I encountered the writings and speeches of Richard Dawkins in my teens, I quickly became a staunch fan (let it be pointed out that I am more critical nowadays about his communication style and some of his content).

I can attribute my early political socialization to attending summer camps and weekend seminars of a socialist youth organisation in Germany in middle school. There, for the first time, I met people who really cared about improving the world, and I learned about social problems such as racism, sexism, homophobia, and – the mother of all problems, from the socialist perspective – capitalism. Furthering this process of ideological adaptation, I learned that the supposed solutions for these and other social problems were creating a socialist, communist or possibly anarchist world-order – if need be, by means of violent revolution. In hindsight, it’s interesting for me to look back and see that this belief in a violent revolution required an element of consequentialist thinking (along with very twisted empirical beliefs largely grounded in Marxism): to create a better society for the rest of all time, we might need to make sacrifices today and fight. I always had a great time with the other young socialists, made friends, had my first kiss, went to various left-wing protests and sat around campfires where we sang old socialist workers’ songs. (A note on the songs: I remember how powerful and determined they would make me feel in my identity as a social-ist, connected to a cause that was larger than myself and celebrating those ‘partisans’ who were killed fighting (violently) in socialist revolutions. Hopefully, this was a lasting lesson with regard to methods of ideological indoctrination). The most long-lasting and positive effect this part of my life had on my personality was in igniting a strong dedication to improving the world – I had found my ultimate and main goal in life (provided and hoping that won’t change again).

During my last lesson in ethics class in middle school, we (around 30 omnivore students) debated the ethics of eating animals. The (to me at the time) surprising conclusion we reached was that, in the absence of an existential necessity for humans to eat meat to survive, it was ethically wrong to raise, harm and slaughter animals. On this day, I decided to try vegetarianism. I began to look into the issues of animal farming, animal ethics, vegetarianism and veganism, and I was shocked by the tremendous suffering endured by billions of non-human animals around the world, and to which I had contributed my whole life. Greedy for knowledge, I read as much as I could about these topics. It still took me a year to decide to be vegan henceforth. I read Peter Singer’s ‘Animal Liberation’ only after I went vegan, but it certainly increased my motivational drive to dedicate my life to reducing the suffering of non-human animals – what I then perceived as the most pressing ethical problem in the world (+ the book was my first real exposure to utilitarian thought). Throughout my high school years, I would write articles about veganism for our school’s student magazine, organise public screenings of the animal-rights movie ‘Earthlings’, distribute brochures of animal rights organisations, debate other students on the ethics of eating meat and supply our school’s cafeteria with plant-based milk alternatives. Later, as part of my high school graduation exams I wrote a 40-page philosophical treatise on animal ethics.

In high school I also learned about environmental degradation – caused, of course, by evil multinationals and, ultimately, capitalism – and started caring about environmental preservation (considering myself an environmental-ist). Reasoning that changing only my own consumer behaviour would have limited effects, once again I started taking actions to affect the behaviour of others. For instance, I started a shop from my room in the boarding school, reselling environmentally-friendly products, such as recycled toilet paper, to other students (I would sell the goods at the market price, without making a profit). I also decided that after my graduation from school, I would take a gap year and go to India to volunteer for a small environmental non-profit organisation. (Perhaps unsurprisingly, in hindsight I don't think that my work as a volunteer had a big impact).

And then I attended the single most transformational event of my life: an introductory talk on effective altruism, brilliantly presented by the EA Max Kocher, who at the time interned with the predecessor organisation of what would later become the Effective Altruism Foundation. I was immediately attracted by the EA perspective on reducing animal suffering (though I remember finding the ‘risks to the far-future from emerging technologies’ part of the presentation weird). Previously, I had read a lot of stuff online written by vegans and animal rights activists, but somehow I had never come across a group of people who were thinking as rationally and strategically about achieving their ethical goals as EAs. Once again, I became greedy for knowledge, and – in reading many EA articles, books, listening to podcasts and watching talks – felt like a whole new world was opening up to me. A world that I couldn’t get enough of. And in the process of engaging with EA, I encountered a great many arguments that challenged some of my dearly held beliefs – many of which I subsequently abandoned.

Some of the major ways I changed my mind through EA include:

  • I got convinced that what ultimately counts morally are the conscious experiences of sentient beings, and thus stopped caring about ‘the environment’ for its own sake. Learning about the prevalence and magnitude of the suffering of animals living in the wild, I left behind my beliefs in environmental preservation, the protection of species over individuals, and the intrinsic importance of biodiversity.

  • The most important normative change I underwent is growing closer to hedonistic utilitarianism, and totalism in population ethics. In parallel to this process, I engaged more with arguments like Bostrom’s astronomical waste argument, and ultimately accepted the long-term value hypothesis. That said, keeping in mind epistemic modesty and the wide divergence in favoured moral theories among moral philosophers, I do attempt to take moral uncertainty seriously.

  • The most important change in my empirical worldview came with learning more about the benefits and achievements of market economies and the tremendous historical failures of its so-called socialist and communist alternatives. I stopped attributing everything that was going wrong in the world to ‘capitalism’ and adopted (what I now think of as) a much more nuanced view on the costs and benefits of adopting particular economic policies.

  • Relatedly, I became much more uncertain with regards to many political questions, due to giving up many of my former tribe-determined answers to policy questions. In particular, I have reduced my certainty in policies with strong factual disagreement among relevant experts.

After having engaged with EA intensely, though passively, for more than a year in India, upon my return to Germany I was aching to get active and finally meet other EAs in person. Subsequently, I completed two internships with EAF in Berlin, then started and led an EA university chapter at the University of Bayreuth, before ultimately transitioning to the University of Oxford, where I am now one of the co-presidents of EA Oxford.

The philosophy and community behind effective altruism have transformed my life in a myriad of beneficial ways. I am excited about all the achievements of EA since its inception and look forward to contributing to its future success!
