Comments

Buck's Shortform

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would have better epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.

----

I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like trying actively to step outside the current frame you’re using, and looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people because they’re annoying and stupid.

Altruists often seem to me to do better than people who value epistemics terminally; I think this is because valuing epistemics instrumentally has some attractive properties compared to valuing it terminally. One reason this is better is that you’re less likely to stop being rational when rationality stops being fun. For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them.

Another reason is that if you’re an altruist, you find yourself interested in various subjects that aren’t the subjects you would have learned about for fun--you have less opportunity to only ever think in the ways you think by default. I think it might be healthy that altruists are forced by the world to learn subjects that are further from their predispositions.

----

I think it’s indeed true that altruistic people sometimes end up mindkilled. But I think that truth-seeking enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often start to really hate opinions that they regularly hear bad arguments for, and they end up rationalizing their way into dumb contrarian takes.

I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.

----

I think that EAs would have much worse epistemics if EA wasn’t pretty strongly tied to the rationalist community; I’d be pretty worried about weakening those ties. I think my claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, rather than substantially worse.

If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to try to apply the same style of reasoning within causes, because the differences in effectiveness between jobs within a cause are much smaller, and so personal fit ends up dominating the estimate of which job will be better.
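To make that contrast concrete, here's a toy calculation; the multipliers below are entirely made up for illustration and don't come from the post:

```python
# Toy numbers (all made up) illustrating why personal fit matters much more
# within a cause than between causes.

# Between causes: cost-effectiveness can differ by orders of magnitude, so even
# a strong personal-fit multiplier doesn't change which cause comes out ahead.
cause_a_effectiveness, cause_b_effectiveness = 100.0, 1.0
fit_in_a, fit_in_b = 1.0, 3.0  # suppose you're a much better fit for cause B
print(cause_a_effectiveness * fit_in_a, cause_b_effectiveness * fit_in_b)  # 100.0 3.0

# Within a cause: jobs differ in effectiveness by much smaller factors, so the
# same personal-fit multiplier flips the ranking.
job_x_effectiveness, job_y_effectiveness = 2.0, 1.0
fit_in_x, fit_in_y = 1.0, 3.0  # you're a much better fit for job Y
print(job_x_effectiveness * fit_in_x, job_y_effectiveness * fit_in_y)  # 2.0 3.0
```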

Where are you donating in 2020 and why?

I'd be curious to hear why you think that these charities are excellent; e.g. I'd be curious about your reply to the arguments here.

Thoughts on whether we're living at the most influential time in history

Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I first read the edit, I misunderstood it in a way that made it seem like it didn't address my concern. My apologies.

Thoughts on whether we're living at the most influential time in history

How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong? 

This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over which model to use to answer the "are we at HoH" question, each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in that model). It seems like your outside view model assigns approximately zero probability to HoH, and so if now is the HoH, it's probably because we shouldn't be using your model, rather than because we're in the tiny proportion of worlds in your model where now is HoH.

I think this distinction is important because it seems to me that the probability of HoH given your beliefs should be almost entirely determined by the prior and HoH-likelihood of models other than the one you proposed--if your central model is the outside-view model you proposed, and you're 80% confident in that, then I suspect that the majority of your credence on HoH should come from the other 20% of your prior, and so the question of how much your outside-view model updates based on evidence doesn't seem likely to be very important.
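Here's a toy version of that calculation; the 80/20 split comes from the paragraph above, but the per-model probabilities of HoH are numbers I've made up purely for illustration:

```python
# Toy mixture-of-models calculation: overall P(HoH) = sum over models of
# P(model) * P(HoH | model). All numbers except the 80/20 split are made up.

models = {
    # model name: (prior credence in the model, P(HoH | model))
    "outside-view model": (0.80, 1e-6),  # assigns roughly zero probability to HoH
    "everything else":    (0.20, 0.01),  # even a modest HoH-likelihood dominates
}

p_hoh = sum(p_m * p_hoh_given_m for p_m, p_hoh_given_m in models.values())
print(f"overall P(HoH) = {p_hoh:.6f}")  # ~0.002

for name, (p_m, p_hoh_given_m) in models.items():
    print(f"{name}: {p_m * p_hoh_given_m / p_hoh:.1%} of the total credence in HoH")
# Nearly all of the credence in HoH comes from the 20% prior on other models,
# so how much the outside-view model itself updates on evidence matters little.
```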

Thoughts on whether we're living at the most influential time in history

Hmm, interesting. It seems to me that your priors force you to assign basically zero probability to the "naive longtermist" story, where we're in a time of perils and, if we can get through it, x-risk drops to basically zero and there are no more good ways to affect similarly enormous amounts of value. (This is just me musing.)

Thoughts on whether we're living at the most influential time in history

Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.

Thoughts on whether we're living at the most influential time in history

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.


This does make a lot more sense than what you wrote in your post. 

Do you agree that, as written, the argument in your EA Forum post is quite flawed? If so, I think you should edit it to more clearly indicate that it was a mistake, given that people are still linking to it.

Thoughts on whether we're living at the most influential time in history

The comment I'd be most interested in from you is whether you agree that your argument forces you to believe either that x-risk is almost surely zero, or that we are almost surely not going to have a long future.
