Recent Discussion

Quite wonderfully, there has been a proliferation of research questions EAs have identified as potentially worth pursuing, and now even a proliferation of collections of such questions. So like a good little EA, I’ve gone meta: this post is a collection of all such collections I’m aware of. I hope this can serve as a central directory to all of those other useful resources, and thereby help interested EAs find questions they can investigate to help inform our whole community’s efforts to do good better.

Some things to note:

  • It may be best to just engage with one set of questions that are relevant to your skills, interests, or plans, and ignore the rest of this post.
  • It’s possible that some of these questions are no longer “open”.
  • I’ve included some...
Vael Gates: Got my post up :). Also, "Artificial Intelligence and Global Security Initiative Research Agenda - Centre for a New American Security, no date" was published in July 2017, according to the embedded PDF in that link!

Thanks for the heads up - I've now added a link to your doc and changed the date for the CNAS agenda :)

tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios which lock us into a world at least as bad as now that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.

I thought it might be interesting to other EAs to hear why I don’t feel very on board with longtermism, since longtermism is important to a lot of people in the community.

This post is about the worldview called longtermism. It does not describe a position on...

Good points, thanks :) I agree with everything here.

One view on how we impact the future is to ask how we would want to construct it, assuming we had direct control over it. I think this view lends more support to the points you make, and it’s where population ethics feels much murkier to me.

However, there are some things that we might be able to put some credence on that we’d expect future people to value. For example, I think it’s more likely than not that future people would value their own welfare. So while it’s not an argument for preventing x-risk...

MichaelStJules: Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options.

  1. If you can bring a good life into existence or none, it is at least permissible to choose none, and under basically any asymmetry that doesn't lead to principled antinatalism (on which basically all but perfect lives are bad), it's permissible to choose either.
  2. If you can bring a good life into existence or none, it is at least permissible to choose none, and under a non-antinatalist asymmetry, it's permissible to choose either.
  3. If you can bring a good life into existence, a flourishing life into existence, or none, it is at least permissible to choose none, and under a wide view of the asymmetry (adopted basically to solve the nonidentity problem), it is not permissible to bring the merely good life into existence. Under a non-antinatalist asymmetry (which can be wide or narrow), it is permissible to bring the flourishing life into existence. Under a narrow (not wide) non-antinatalist asymmetry, all three options are permissible.

If you accept transitivity and the independence of irrelevant alternatives, instead of having the flourishing life better than none, you could have a principled antinatalism: meh life < good life < flourishing life ≤ none, although this doesn't follow.
MichaelStJules: I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it weren't uncommon for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only mattering conditional on existence, not as a reason to bring them into existence (compared to non-existence, not necessarily compared to another person being born, if we give up the independence of irrelevant alternatives or transitivity).

I also think a view of preference satisfaction that assigns positive value to the creation and satisfaction of new preferences is perverse in a way, since it allows you to ignore a person's existing preferences if you can create and satisfy a sufficiently strong preference in them, even against their wishes.

Sorry, I should have been more explicit. You wrote "In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled", but we can also have values that would go frustrated for a very long time if we don't go extinct, including even in a future that looks mostly utopian. I also think it's likely the future will contain misery.

That's fair. From the paper: It is worth noting that this still doesn't tell us how much greater the difference between total extinction and a utopian future is compared to an 80% loss of life in a utopian future. Furthermore, people are being asked to assume the future will be utopian ("a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier th...

tl;dr: I'm looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I've collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates!


Broader Vision

I'm a social scientist, and I want to contribute to reducing long-term risks from AI. I'm excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science. 

To that end, I'm hoping to:

  1. collect and grow a list of research questions that would be interesting for social scientists of various subfields and valuable to AI safety
  2. work with undergraduate students / collaborators on these

I believe that counterfactuals are socially constructed to an extent, so it might be useful for someone from a social science background to investigate this - at least if you think there's value in MIRI's research agenda.

Dear juggler, I saw you grabbing one ball and throwing it up in the air. It seemed easy, you knew how and when it would come back to your hand. You learnt how to deal with that one ball so many centuries ago. What do you call your ball? Food? Shelter? 

But you are not comfortable in your comfort zone, are you? Once you mastered that ball, it was time for a second one; you have at least two hands, and coordination. Agriculture? That sounded doable; it just required a little bit of focus, but it could be done. And so your game began, faster and faster every second. Then a third one: life expectancy. Then a fourth: peace. Five: institutions. Six: income. Seven: wellbeing. And you...

In many ways, cryptocurrencies are at the moment mostly used for speculation, and maybe that's the reason they don't get much attention within the EA community (apart from being relatively more accepted as a source of donations compared to other communities).
I think that might be an oversight.

So far, I would say cryptocurrency has mainly been transformative in three ways:

1) Allowing (pseudo-)anonymous transactions world-wide, which allows for more safety when conducting transactions in a context that could lead to persecution (buying high-quality drugs online seems to be the most widespread one)
2) Smart individuals (or lucky early adopters) can become, or have become, quite rich by utilizing the enormous growth of the crypto market, which is certainly transformative for the individuals in question
3) It has created a new eco-system in...

This post raises good points. I think crypto is a very neglected cause area with enormous upside potential, especially for developing countries. There's much, much more to the crypto industry than just Bitcoin as a 'store of value', or crypto trading as a way to make money.

There are tens of thousands of smart people working on blockchain technologies and protocols that could offer a huge range of EA-adjacent use cases, such as:

  • much faster, cheaper remittances
  • protection of savings against hyperinflation caused by irresponsible central banks
  • secure economic identity
...


Authors: Dan Stein (Co-Founder at Giving Green, Chief Economist at IDinsight), Kim Huynh (Climate Scientist at Giving Green)

Editor: Emily Thai (Manager at Giving Green)


Climate change activism focused on US federal policy can potentially reduce levels of greenhouse gases (GHGs) in the atmosphere by impacting the likelihood of climate bills passing in the House and Senate, or by affecting executive or regulatory policy. We developed a simple cost-effectiveness analysis (CEA) model that assesses activism’s contribution to reducing GHG emissions. In this model, we focused on activism’s potential impact on two types of bills: a bipartisan bill and a progressive-influenced bill passed along party lines. After testing various scenarios in our CEA (ranging from Very Pessimistic to Optimistic), we found that donating to climate change activist groups could be highly cost-effective...
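The core logic of a CEA like this can be sketched in a few lines: a donation shifts the probability that a bill passes, and that probability shift is multiplied by the emissions reduction the bill would deliver. The sketch below is only illustrative and is not Giving Green's actual model; the function, its proportionality assumption, and every number in it are hypothetical placeholders.

```python
# Illustrative sketch of a simple cost-effectiveness analysis (CEA) for
# climate activism. All parameters and numbers are hypothetical, not
# Giving Green's actual figures or methodology.

def expected_tons_averted(donation_usd,
                          budget_usd,
                          prob_increase_per_budget,
                          bill_emissions_reduction_tons):
    """Expected tons of CO2e averted by a marginal donation.

    donation_usd: size of the donation being evaluated
    budget_usd: total budget of the activist group
    prob_increase_per_budget: estimated increase in the probability that
        the bill passes, attributable to the group's entire budget
    bill_emissions_reduction_tons: emissions reduction if the bill passes
    """
    # Strong simplifying assumption: the donation shifts the passage
    # probability in proportion to its share of the group's budget.
    delta_prob = prob_increase_per_budget * (donation_usd / budget_usd)
    return delta_prob * bill_emissions_reduction_tons

# Hypothetical pessimistic scenario: a group with a $1M budget raises the
# chance of a 1-gigaton bill passing by 0.1 percentage points.
tons = expected_tons_averted(
    donation_usd=1_000,
    budget_usd=1_000_000,
    prob_increase_per_budget=0.001,
    bill_emissions_reduction_tons=1e9,
)
print(f"Expected tons averted: {tons:,.0f}")   # about 1,000 tons
print(f"Cost per ton: ${1_000 / tons:.2f}")    # about $1.00/ton
```

A real model would layer scenario weights, counterfactual discounts, and separate estimates for the bipartisan and party-line bills on top of this basic expected-value calculation.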


I am an EA from Pōneke/Wellington, Aotearoa/New Zealand. I know that some of my EA friends here struggle to motivate themselves. The bloke from Replacing Guilt says this isn't a big deal, but I haven't gotten far enough through it yet to let go of my intuition that you just need better guilt. Today, I want to offer you a way to supercharge the guilt you feel about even the smallest behaviours in your life.

The idea behind the PlayPumps Productivity System is that even your most micro-level struggles can become issues of life or death. Users purchase milli-lives, each representing one thousandth of the high end of the expected cost to save a life via the GiveWell Maximum Impact Fund. If the user succeeds in their goal,...

At least for me, daily accountability—and having to write a reflection if you fail to meet your goals—is a much greater incentive than the threat of donating to PlayPumps a few months down the line.

How has ethics evolved over time? What does it mean to be an ethical person in the modern landscape?

Effective Altruists of Berkeley are honored to host Professor Dacher Keltner of UC Berkeley’s Greater Good Science Center to discuss the evolution of ethics and the work the Greater Good Science Center is performing for the greater good.

RSVP Here:

Snacks provided. Located at Social Sciences Building, Room 56.

In How Asia Works (2014), Joe Studwell distills his research into the economics of nine countries—Japan, South Korea, Taiwan, Indonesia, Malaysia, Thailand, the Philippines, Vietnam, and China—to understand what led to an economic boom in some countries, while others were unable to achieve the same results.

Make sure to join the #reading-group channel on the EA NYC Slack to discuss throughout the month. Stick around since that's where we decide on the books for future months!
Join the Slack at

If you're new to effective altruism, you're more than welcome to join us!
Here are a couple of great introductions:

See also EA NYC's code of conduct - we are committed to the safety and well-being of our members: