Dario Citrini

Pursuing an undergraduate degree
291 · Joined Jul 2021



Hey everyone, excited to be part of this! :) I was born, grew up, and currently live and study in Zurich, Switzerland. I've just completed my fourth semester studying political science (major) and philosophy (minor) at the University of Zurich, and I'm honoured to have recently become a member of the Swiss Study Foundation. I currently plan to pursue a master's in PPE (philosophy, politics, economics), political science, philosophy, or futures studies, and am generally interested in interdisciplinary research on "the big questions", especially regarding suffering, the future, and uncertainty.

The plight of badly-off humans and other animals has been a constant emotional involvement and intellectual interest of mine for many years, but only comparatively recently have I embarked on my path to study how I could apply my compassion more systematically, in a way that helps me be more effective in making this world a better place. I've been fascinated by the philosophy of and social movement around EA since early 2020 and am excited about the profound impact EA has had on my life ever since.

I occasionally post stuff I find interesting and important on Facebook, and I have a profile on the EA Hub and a blog.

Hundreds of intellectual interests, some broad and some more specific, have accumulated over the last few years. Among the issues I have been most excited about (and have become anywhere from barely to moderately versed in) and/or would most like to investigate in the next few years are the following, grouped into three broad clusters:

the philosophy, politics, economics, and science of (emerging) technologies and the long-term future of Earth-originating life:

– s-risks and x-risks
– governance incl. ethics of artificial intelligence (AI), AI value alignment, and technical AI safety
– game theory of cooperation and conflict in the context of AI
– non-human sentience & sapience and moral circle expansion
– governance incl. ethics of biotechnologies, esp. transhumanism
– governance incl. ethics of outer space, esp. space colonisation
– cluelessness about and forecasting the long-term future
– the intersection of science fiction, technology, natural and social sciences, and philosophy
– futurology, progress studies, and macrostrategy
– longtermism(s) in theory and practice

the philosophy, politics, economics, and science of belief formation, identity, and decision-making:

– decision theory and game theory
– decision-theoretic fanaticism, risk aversion, and bounded rationality
– formal epistemology and Bayesianism
– social epistemology, communication, and cognitive biases
– institutional decision-making, international relations, and global governance
– incentive structures, collective action problems, and complexity science
– egoism & altruism and dark tetrad traits, esp. re leadership
– the intersection of evolutionary psychology, moral psychology, and moral epistemology
– moral agency and moral patiency in humans and non-humans
– philosophy and psychology of self and human nature

(more) topics in moral philosophy:

– ethical issues in effective altruism and global priorities research
– moral uncertainty
– value theory
– suffering-focused ethics
– population ethics
– ethics of the future
– risk ethics
– consequentialist alternatives to utilitarianism
– scope-sensitive alternatives to consequentialism
– metaethics

How others can help me

[to do]

How I can help others

[to do]


Wow, this provided me with a lot of food for thought. For me personally, this was definitely one of the most intriguing and relatable things I've ever read on the EA Forum.

So many of these topics seem really interesting to me personally that your statement "It was designed primarily for economics graduate students considering careers in global priorities research." made me wonder: Is there something similar for people with a less robust background in economics? Maybe for economics (or political science) undergraduate students? :)

One major difference is between people who care about multiple things and people who have decided that their life’s work should be world improvement.

Does the first rule out the second? And does the second rule out the first?

Oh wow, thank you for this elaborate response!
FWIW, I don't think no. 2 is a big negative, if it's a net negative at all.

Does that answer your question, does it raise more?

Yes, it answers my question. And no, it didn't raise more, at least for the moment.

This sounds super cool! Reading the full post, I got the (maybe unqualified) impression that a lot of thought went into making this robust and making it work well.

Not only has CEEALAR’s hotel successfully been running for almost four years by now. In addition, they were facing significant hurdles we don’t expect to impinge on our own project: [...]

Reading this makes me optimistic about both the future of CEEALAR's hotel and the Berlin hub. But I also wonder whether there are factors or hurdles that CEEALAR hasn't faced but that you expect the Berlin hub might face. What do you think?

I'm super excited about this! This newsletter is very valuable for me: I often find myself saving a link or two for later (that I might perhaps get around to someday) when reading an EA forum post, but here, there were five resources you linked to that I've just scheduled time to read / check out.

Also, I found the interview really interesting – fanaticism in decision theory and ethics is one of my key uncertainties re "putting ethics into practice, knowing about longtermism" and global priorities research. To make things even more... frightening(?), I'm not sure how much taking moral uncertainty into account could help against fanaticism. Fanaticism is certainly a thorny issue, so I'm glad there seems to be more and more research being done on that front.

One more point of feedback: In a comment on the March 2022 edition, someone mentioned they think it's too long for a newsletter. I personally think otherwise, so consider this one vote *against* trying to make the newsletter shorter. :)


I am very new to AI governance and this post helped me a lot in getting a better sense of "what's out there", thank you! Now, what I'm about to say isn't meant so much as "I felt this was lacking in your post" but more as simply "reading this made me wonder about something": What about AI governance focused on s-risks instead of only/mostly x-risks? The London-based Center on Long-Term Risk (CLR) conducts pertinent work on the foundational end of the spectrum (see their priority areas). Which other organisations are (at least partly) working on AI governance focused on s-risks?

I really like this list and think it will be helpful for me!
Do you have thoughts on the relative importance of these various heuristics? Maybe something like a heuristic for which heuristics are most important for one's situation?

Also, you wrote:

Scale, number helped - do something that impacts many people positively
Scale, degree helped - do something that impacts people to a great positive degree

I'd like to point out that "people" doesn't quite capture who EA is trying to help (considering that we strive to do what's impartially good, we arguably ought to reject speciesism and substratism, thus also taking into consideration minds that are non-human and/or digital and/or "???").
I'm not sure what the best term to use is (it also depends on the situation in which you use it), but "sentient beings" / "sentient minds" / "moral patients" seem like terms that better capture what EA as a community is concerned with.

I really liked your clear outline of your position, and this definitely contained some food for thought that I found to be nicely presented. That being said, I am still much more agnostic re which position to take (esp. after reading some of the comments here) than you seem to be. You wrote:

Third, degrowthers argue that technological innovations do not allow for a sufficient decoupling between GDP and environmental impacts. But they neglect that a decoupling between economic wealth (GDP) and well-being is less realistic.

Maybe this is misguided, but why not attempt to pursue both in a twin strategy?
What if both decouplings are insufficient on their own but sufficient when combined?
This also ties into a concept I've come across recently: agrowth.
Quoted from "The new theory of economic 'agrowth' contributes to the viability of climate policies":

"One can be concerned or critical about economic growth without resorting to an anti-growth position," states the author [Jeroen van den Bergh]. He goes on to highlight that an "agrowth" strategy will allow us to scan a wider space for policies that improve welfare and environmental conditions. Policy selection will not be constrained by the goal of economic growth. "One does not need to assume that unemployment, inequity and environmental challenges are solved by unconditional pro- or zero/negative growth. Social and environmental policies sometimes restrain and at other times stimulate growth, depending on contextual factors. An "agrowth" strategy is precautionary as it makes society less sensitive to potential scenarios in which climate policy constrains economic growth. Hence, it will reduce resistance to such policy," he indicates.

In a practical sense, van den Bergh states that it is necessary to combat the social belief -- widespread among policy circles and politics -- that growth has to be prioritized, and stresses the need for a debate in politics and wider society about stepping outside the futile framing of pro- versus anti-growth. "Realizing there is a third way can help to overcome current polarization and weaken political resistance against a serious climate policy."
