Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.



Hi All,

I wanted to make a post about this but I just signed up so unfortunately do not have the reputation needed yet. So if anyone finds this worthy enough for a post, you are welcome to make one.

In short I think it would be beneficial for EA to get its own Stack Exchange and we can make this happen by casting a vote on the existing proposal:

The longer argument for this:

If you have done anything related to programming, you are probably familiar with Stack Exchange. It is a community website where peers answer each other's questions, and both questions and answers can be voted on. This format might sound very similar to this forum and Reddit (which also has a quality EA page). However, I think it would be valuable for EA to get its own Stack Exchange site for the following reasons:

  • Its format makes it easy to ask short questions and preserves common general questions (this forum and Reddit seem better suited to lengthy discussions)
  • Older questions are easily found on Stack Exchange, and search engines index them quite well
  • Stack Exchange has a huge community, so EA could get some free promotion by having its own site
  • EA organisation sites have good FAQs about general topics for newcomers, but in my opinion nothing compares to a crowd-sourced FAQ, for which Stack Exchange has the ideal format

I hope you found this convincing and we will soon give birth to the EA Stack Exchange page.

Would there be enough activity to justify an SE? Seems like an area where we might quickly run out of questions but want to spend a lot of time finding better answers to old questions, which I'm not sure fits the SE format.

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals that are intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. The study linked here seems useful in this area (and it seems possible that this approach could be used for issues on the other side of the political spectrum).

Nice link! I think there's worthwhile research to be done here to get a more textured ITN.

On Impact—Here's a small example of x-risk (nuclear threat coming from inside the White House):

On Neglectedness—Thus far it seems highly neglected, at least at a system level. The project linked here is one of the only ones I know of in the space (though the founder is not contributing much time to it).

On Tractability—I have no clue. Many of these "bottom up"/individual-level solution spaces seem difficult and organic (though we would pattern match from the spread of the EA movement).

  1. There's a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I'm tempted by pushing an EA mindset "outcome-izing/RCT-ing" the efforts in the space. So even if it doesn't score highly on Neglectedness, we could attempt to move the solutions towards more cost-effective/consequentialist solutions.
  2. This is highly related to the movement that Tristan Harris (who was at EAGlobal) is pushing.
  3. I feel like we need to differentiate between the "political-level" and the "community-level".
  4. I'm tempted to think about this from the "communities connect with communities" perspective, i.e. the EA community is the "starting node/community", and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by the n-dimensional space seen here).
  5. Another version of this could be "scale the CFAR community".
  6. I think this could be related to Land Use Reform, and to how we construct empathetic communities with a variety of people. (Again, see Nicky Case.)

Thanks for the Nicky Case links

I've been thinking about this as well lately, specifically in terms of reducing hatred and prejudice (racism, sexism, etc.). For example, this is anecdotal, but one (black) man named Daryl Davis says that he has gotten more than 200 KKK members to disavow the group simply by approaching and befriending them. Over time they would realize that their views were unfounded, and gave up their KKK membership of their own volition. This is an interview with Davis, and I think there is also a documentary about him.

This is a great Vox article about a study on ways to reduce people's biases. The article title is about reducing racism, though the study discussed is about views on transgender people. It suggests that just a 10-minute, open conversation can significantly reduce people's biases, and that these changes persist.

And lastly, another anecdotal story: Derek Black, the godson of David Duke and the son of another very prominent figure in the alt-right, ended up leaving the alt-right after a group of diverse college classmates befriended him, and he slowly abandoned his previous views over the course of months.

While two of these links are to anecdotal stories, I think they are important in showing that even those with really extreme prejudice (KKK members and a young alt-right leader!) can let go of their prejudices when approached in the right way.

It definitely seems like an intervention that would require lots of grassroots, individual action, and I suspect it could be very hard to measure its benefits: the number of lives lost to this kind of prejudice and polarization is pretty low (at least in the US), and the other benefits that would arise are hard to measure. If someone else has good estimates on how impactful this would be, I'd love to hear them! Regardless, I'm very excited to see some interventions in reducing prejudice and hatred that do seem to actually work, though more study into this is definitely necessary!

I don't think the rise of polarization in the US over the last decade is driven by a rise in racism or sexism. Activism to reduce either of them might be valuable, but I don't think it solves the issue of polarization.

I bet a more neglected aspect of polarization is the degree to which the left (which I identify with) literally hates the right for being bigots, or seeming bigots (agree with Christian Kleineidam below). This is literally the same mechanism of prejudice and hatred, with the same damaging polarization, but for different reasons.

There's much more energy going into addressing alt-right polarization than the not-even-radical left (many of my friends profess hatred of Trump voters qua Trump voters, which gives me the same pit-of-the-stomach feeling as seeing blatant racism). Hence, addressing the left is probably more neglected (I'm unsure how you'd quantify this, but it seems pretty evident).

The trouble I find is that the left's prejudice and hatred seem more complex and harder to fix. In some ways, the bigots are easier to flip toward reason (anecdotes about befriending racists, families changing when their kids come out, etc.). Have you ever tried to demonstrate to a passionate liberal that maybe they've gone too far in writing off massive swaths of society as bigots? Just bringing it up literally challenges the friendship, in my experience.

I think polarization is incredibly bad and there are neglected areas, but neglectedness seems to be outweighed by intractability.

Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum:

I have found it helpful, in talking about donating large percentages of salary, to be able to point out how many people make similar sacrifices. One comparison that has been made is with being vegetarian, but this is hard to compare, and still covers only a few percent of people. More common is people taking a 10% pay cut for the positive impact of their job, or donating 10% of their free time (which I estimate at roughly 40 hours per week for someone with a full-time job; see comments here). I tried to get some rough estimates of the rates of these behaviors, but has anyone else done it more rigorously, or would anyone like to?

You could look at the forthcoming 2017 EA Survey data, or try looking at the past 2015 EA Survey and 2014 EA Survey.

Thanks, but I was referring to the rates of taking a lower salary (e.g. to nonprofit or government), etc in the general population. I am talking to people who are outside of EA at this point and not sure about committing to donating 10%.

One last one.

I'm writing more on my blog about my approach to intelligence augmentation

I'll be coding and thinking about how to judge its impact this week (a lot of it depends on things like hard vs. soft takeoff, possibilities of singletons, and other crucial considerations). I'm also up for spending a few hours helping people with IA- or autonomy-based EA work, if anyone needs it.

I've written up my outline of the ITN argument for improving autonomy.

I'd like feedback please!

I've been reading Superforecasting, and my takeaway is that to make good predictions about the world you need a multiplicity of viewpoints, and you need to quantify and break down your estimates Fermi-style.

So my question is: have there been any collective attempts at model building for prediction purposes? Trying to get all the hedgehogs together with their big ideas and synthesize them into a collective fox-y model?

I know there are prediction markets, but you don't know what information a price has already synthesized, so it is hard to bet on them if you only have a small piece of information and don't think you know better than the market as a whole.

It would seem that if we could share a pool of predictive power between us we could make better decisions about how to intervene in the world.
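As a quick illustration of what "synthesizing hedgehogs into a fox-y model" could mean mechanically, here is a minimal sketch (not any existing tool; the pooling method, `extremize` parameter, and numbers are all illustrative assumptions) of combining several forecasters' probabilities via the geometric mean of odds:

```python
import math

def pool_geometric_odds(probs, extremize=1.0):
    """Pool probability estimates via the geometric mean of odds.

    Setting extremize > 1 pushes the pooled forecast away from 0.5,
    which the forecast-aggregation literature suggests often improves
    group calibration.
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = extremize * sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))

# Three "hedgehogs" with different models of the same event:
pooled = pool_geometric_odds([0.6, 0.7, 0.8])
print(round(pooled, 3))  # roughly 0.707
```

The point is just that a shared pool of predictions needs an explicit aggregation rule; a real system would also weight forecasters by track record.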

I think that the Good Judgment Project (founded by Philip Tetlock, the author of Superforecasting) is trying to build this with their experiments.

I'd not thought to look at it, I assumed it was/stayed an IARPA thing and so focused on world affairs. Thanks!

It looks like it has become a for-profit endeavour now with an open component.

From the looks of it there are no ways to submit questions and you can't see the models of the world used to make the predictions, so I'm not sure if charities (or people investing in charities) can gain much value from it.

We would want questions of the form: if intervention Y occurs what is the expected magnitude of outcome Z.

I'm not sure how best to tackle this.

Great question. Gnosis and Augur are building decentralized prediction markets on the Ethereum blockchain. Their goal is to "match the global liquidity pool to the global knowledge pool."

I've asked them how they're thinking about synthesizing hedgehogs into a collective fox-y model (and then segmenting the data by hedgehog type).

But yeah, I think they will allow you to do what you want above: "Questions of the form: if intervention Y occurs what is the expected magnitude of outcome Z."

I like both of them, but I'm wondering: why wait so long? Isn't there a way some group (maybe us) could build 10% of the kind of prediction market that gets us 90% of what we actually need? I need to think about this more, but waiting for Gnosis and Augur to mature seems risky. Unless de-risking that bet means joining both projects to accelerate their advent.

That's an interesting point about prediction markets. We individuals tend to invest in the stock market even when we know the market as a whole is wiser than us as individuals, because on the whole the market goes up, and anyways there are ways to track overall market performance. For prediction markets, I suppose there would need to be similar incentives somehow, otherwise every individual who doesn't have special information would be better off predicting what the overall market predicts, which doesn't help.

I'm guessing I just don't understand how prediction markets work. Hoping someone will correct me.
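For anyone else puzzling over the mechanics: many prediction markets use an automated market maker such as Hanson's logarithmic market scoring rule (LMSR), which always quotes a price, so you only trade when you disagree with it. A minimal sketch (the liquidity parameter and trade sizes are illustrative assumptions, not any particular platform's settings):

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Implied YES probability under Hanson's log market scoring rule.

    q_yes/q_no are outstanding YES/NO shares; b sets liquidity, i.e.
    how much buying it takes to move the price.
    """
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# Before any trades the market sits at 0.5. An informed trader who
# buys 50 YES shares moves the implied probability up, revealing
# their information; someone who merely agrees with the current
# price has no profitable trade, so "copying the market" adds
# nothing -- which matches the worry above.
assert lmsr_price(0, 0) == 0.5
assert lmsr_price(50, 0) > lmsr_price(0, 0)
```

So unlike the stock market, an uninformed participant shouldn't expect positive returns; the subsidy for information comes from the market maker's (bounded) expected loss.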

An idea I've had for a while: making an Effective Altruism/DGB board game might be a high-impact project.

The reasons for this are rough, but sensible, I think.

1: Games can teach mindsets and viewpoints of the world that other media cannot, and since much of EA is counterintuitive, a game could be a great learning tool.

2: It can serve the same purpose as a documentary (i.e. an EA awareness tool).

3: It could be fun to whip out at EA hangouts and play with people new to EA; related to the 1st point.

4: Board games are having a golden age right now, with more people buying them than ever, and marketing/releasing a board game is radically cheaper than in the past, as far as I can tell.

What are some reasons not to pursue this project?


1: Making a game takes a long time, and...

2: Terrible career capital (as far as i can tell)

So unless you have a lot of game design experience, or can persuade a fellow game designer to do it, it's very much not worth your time. 80,000 Hours and CEA may be able to do something with this project, but otherwise I'm drawing a blank.

I have made a rough sketch of how a game like this would work, but it's not very good because I am not a game designer.


I have a fully-formed EA board game that I debuted at EA Global in San Francisco a couple of weeks ago. EAs seem to really like it! You can see over one hundred of the game's cards here.

The way the game works is that every player has a random private morality that they want to satisfy (e.g. preference utilitarianism, hedonism, sadism, nihilism), and all players also want to collaboratively achieve normative good (accumulating 1,000 human QALYs, 10,000 animal QALYs, and 10 x-risk points). Players get QALYs and x-risk points by donating to charities and answering trivia questions.

The coolest part of the game is the reincarnation mechanic: every player has a randomly chosen income taken from the real-world global distribution of wealth. Players also unlock animal reincarnation mode after stumbling upon the bad giant pit of suffering (the modal outcome of unlocking animal reincarnation is to be stuck as a chicken until the pit of suffering is destroyed, or until a friendly human acquires a V(eg*n) card.)

I'm also thinking about turning the game into an app or computer game, but I'll probably need an experienced coder to help me with that.

Cool idea. Although I think domain-specific board games might be more intuitive and vivid for most people -- e.g. a set on x-risks (one on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict between evaluators, charities, and donors, a modified 'Game of Life' based on 80,000 Hours principles, etc.

The link doesn't work sadly, but it sounds cool!

Message me on Facebook or by email.

I fixed the link.

Not sure if it's just me but the board_setup.jpg wouldn't load. I'm not sure why, so I'm not expecting a fix, just FYI. Cards look fun though!

I'm super into this! I'd be happy to check out your rough sketch. A couple thoughts:

  1. I think we should not bucket all of our time into one general time bucket. In fact, some of our time needs to be "fun creative working time". E.g. sometimes I work on EA things, and sometimes I make music. "Designing an EA board game" could be part of that fun bucket.
  2. A game like Pandemic could be a good starting point for designing the game (or we could work with its designers). Essentially, use Pandemic as the MVP game for this, then expand to other cause areas (or to EA as a whole). Also, see 80,000 Hours' most recent podcast on pandemics (the concept, not the board game :)
  3. Here's my favorite piece on game design (by Magic: The Gathering's head designer).
  4. My instinct is that this should be a collaborative game (or, as William MacAskill would say, a "shared aims community").

There is a THINK module with an EA board game attached

Drawdown, a book on possible climate change solutions, seems EA-relevant. It is interesting that it only allows peer-reviewed data/models, and it systematically surveys all the solutions the authors could find.

I contacted the authors with some questions a few months back, because their website included some apparently interesting info but with inadequate explanation of how they had defined things, and it looked like the numbers didn't stack up (though I couldn't be sure, because things weren't defined clearly enough).

They didn't reply.

I've also contacted them and they didn't reply. It's a bit unclear how they arrived at their rankings - there's not much explanation given.

Thanks, good to know, but a bit dispiriting.

I was also interested in this book - I've ordered a copy and I'm excited for it to arrive! The news that they haven't replied to questions about the data is disappointing but I think there is still value in the book. Particularly, on the "solutions" page on the site, they state: "The list is comprised primarily of “no regrets” solutions—actions that make sense to take regardless of their climate impact since they have intrinsic benefits to communities and economies."

Considering some of the solutions that actively make lives better (such as educating girls, or more effective farming practices) as well as reduce emissions could be a good way for EA to approach climate change. Considering these combined benefits could help us assess the effectiveness of interventions on multiple scales, such as QALYs saved as well as emissions reduced. This could make global warming solutions more attractive across various branches of EA, since many of the solutions overlap with other cause areas, and considering the benefits to both causes might lead us to realize that some interventions are more effective than we previously thought.


I would love some feedback on what I'm calling an "EA Idea Sounding Board"

I'm thinking of a call-in show and/or a message board, where EAs suggest ideas to someone with experience in the EA landscape, perhaps an advisor at 80,000 Hours. It might go something like this:

An 80,000 Hours advisor takes calls from EAs who pitch their ideas for anything EA-related: an idea for a donation drive, for a new cause area, for a startup. The advisor hears out the idea, then reframes and refines it to show both how it is promising and in what ways it seems to miss the mark. At the end, they work out an assessment and plan. The overall assessment might fall into categories like:

  • "Back to the Drawing Board" (rethink/research these major limitations: _)
  • "Worth Pursuing" (you're onto something here; develop these parts: _)
  • "Ready for Prime Time" (well researched/planned; concrete next steps: _)

These calls could be recorded with the option to be edited and distributed as a podcast episode. I see lots of potential value:

  • Showcase good ideas
  • Demonstrate rationality in progress
  • Highlight the pitfalls of cognitive biases
  • Demonstrate the general principles/values of EA
  • Offer updates on the latest EA topics
  • Enhance networking

Not every call would be broadcast, only the very best ones. Part of the incentive for developing a good idea is to see if you can generate an interesting enough discussion to get published. However, maybe the best part is that even weak ideas might be very valuable to publish because they would demonstrate loose thinking and be good examples of constructive criticism.

Alternatively, this could be distributed on a message board or forum of some kind. Perhaps after the discussion with the 80,000 Hours advisor, the person pitching the idea would write up a summary of the dialog, highlighting the original idea, the general principles, the cognitive biases or weak elements, the strengths of the idea, and the final assessment. This summary could be posted for the community to review. To streamline this, a template could be created ahead of time which forms the basis of the idea discussion. After the discussion, the template is edited to reflect the content of the discussion and made ready for posting. Certain tags could be added, such as a request for someone to weigh in if they know of research in this area, or someone to fund the idea. The 80,000 Hours advisor could affix an overall rating of how promising they think this idea is and what needs to be done to make it more promising.


Rough summary % of US population

Giving 10%+: Time: 7%; Pay cut: 20%; Donate: 6%

Giving 20%+: Time: 2%; Pay cut: 10%; Donate: 0.6%

This is based on 40 hours per week of free time. The takeaway (which I believe is robust despite the uncertainty explained below) is that people are much more willing to take a big pay cut than to donate a similar percentage of their money. So if we could get people over the psychological barrier, we might be able to convince 20% of people to be EAs.

Furthermore, at least in the US, donating 10% of your pretax (adjusted gross) income is a smaller economic hit than taking a 10% pay cut, because not all income is taxed. This would mean even more than 20% of people take a pay cut that is equivalent to donating 10%.

Volunteering source: bins are not perfect.

Pay cut source, government employment source, nonprofit employment source. Of course there can be other differences in employment like job security, benefits, and hours. However, this is not accounting for people who choose a lower paying field for the impact, so it gives some idea.

Giving source: percentages are for religious giving, which is ~half of giving in US, so I doubled the percentages to get the people giving that amount to any charity: rough.

I'm interested in quantifying the impact of blockchain and cryptocurrency from an ITN perspective. My instinct is that the technology could be powerful from a "root cause incentive" perspective, from a "breaking game theory" perspective, and from a "change how money works" perspective. I'll have a fuller post about this soon, but here are some of my initial thoughts on the subject:


I'd be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It's clearly not neglected!)

Impact seems solid, and it is relatively neglected (at least with regard to charities). I think tractability is where blockchains might fall down. It seems easy to do an ICO, but less easy to get people interested in being part of the incentive system.

To estimate the tractability, I would back up a level. What blockchain (at least as you are using the phrase) is about is alternative incentive mechanisms to fiat money. There have been lots of alternative currencies in the past. There has probably been economic work on these, which might give you an outside view on how likely it is that a currency will gain traction in general. You can also look for research on whether there have been any charity-based currencies, and how successful they have been.

You can then update that estimate with the improvements that crypto brings (smart contracts/distributed ledgers), but also the risks: having to hard-fork the currency due to smart contract errors, and having to make sure the people you want to participate in the system have enough crypto/computer-security knowledge to keep their wallets safe (which may or may not mean using exchanges). I think the risks (currently) make me consider it intractable, but that might just mean we should look for tractable ways of mitigating those risks.

I'm interested in reading your analysis!

I've started blogging regularly. Today I asked whether the push for AI safety needs more of a public movement behind it in light of the letter on AI weapons from Elon Musk and others. Read it and let me know what your thoughts are, as I may act on your answers!

What do you think about reducing transport cost by increasing passenger density as a cause area? Reducing transport cost seems very important for the economy: it facilitates business, migration, tourism, etc. For example, international tourism is a significant industry in many poor countries. Although increasing passenger density may make flights less safe (via economy-class syndrome, slower evacuation, etc.), I think lower flight cost could actually reduce transport deaths, since air is the safest way to travel by micromorts per passenger-kilometer, and many people choose more dangerous modes of travel (car, bus, train, ship) because of the cost. Of course, the payload of any airplane is limited, so there may be limits on passenger density. To reduce overall density, it is possible to sell less dense seats as well (e.g. standing seat, super-economy, economy, premium economy, partial-recline business, full-recline business, first, etc.). Also, to reduce payload, it is possible to disallow checked baggage and even to put a cabin in the lower deck. Although my focus here is airplanes, this could be applied to buses, trains, ships, etc. as well.

I would like to suggest several ways to increase passenger density:

1) Reducing seat pitch: it is possible to reduce seat pitch to as short as 28 inches.

2) Standing seats: similar to 1), but much more radical. It is possible to reduce seat pitch even further by adopting standing seats, although regulations might not allow this.

3) Reducing seat width: it is possible to reduce seat width to as narrow as 16 inches (ibid.). This would allow a 13-seats-abreast configuration on the A380 main deck (248-inch main-deck width, 16 inches per seat, 20 inches per aisle).

4) Reducing wide-body aircraft to a single aisle: this is not feasible right now due to regulations such as 14 CFR 25.817, which limits single-aisle aircraft to 6 seats abreast.

5) Lower-deck cabins: the lower (cargo) decks of airplanes are almost as big as the passenger deck, or in the case of the A380, almost as big as the upper deck. Some planes already have crew rest areas or lavatories on the lower deck.
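The 13-abreast figure checks out arithmetically. A back-of-the-envelope sketch using the numbers quoted above (these are the comment's assumptions, not certified cabin limits):

```python
# Figures quoted in the comment above: 248-inch A380 main-deck
# width, 16-inch seats, 20-inch aisles, twin-aisle layout.
CABIN_WIDTH_IN = 248
SEAT_WIDTH_IN = 16
AISLE_WIDTH_IN = 20
NUM_AISLES = 2

# Width left for seats after the aisles, divided by seat width.
seats_abreast = (CABIN_WIDTH_IN - NUM_AISLES * AISLE_WIDTH_IN) // SEAT_WIDTH_IN
print(seats_abreast)  # 13 -> 13 * 16 + 2 * 20 = 248, exactly the cabin width
```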

(re-post from EA Entrepreneurship FB group)

I've so far not seen it, but is anyone taking a broader axiological approach to AI alignment rather than a decision-theory-specific approach? Obviously the decision-theory approach is a more bounded problem that is likely easier to solve, since it is a special case dealing with processes we can apply decision theory to, but I wonder if we might gain insights and better intuitions from studying more general cases.