All of atucker's Comments + Replies

1.1) There's some weak wisdom of nature prior that blasting one of your neurotransmitter pathways for a short period is unlikely to be helpful.

I think that the wisdom of nature prior would say that we shouldn't expect blasting a neurotransmitter pathway to be evolutionarily adaptive on average. But if we know why something wouldn't be adaptive, then the prior doesn't seem to apply. This prior would argue against claims like "X increases human capital", but not claims like "X increases altruism", since there's a clear... (read more)

I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research that it does, or from moving money to the things that are already known about.

In the early days of EA I think that GiveWell's quality was a major factor in getting people to donate, but I think that the EA movement is large enough now that growth isn't necessarily related to rigor -- the largest charities (like the Salvation Army or the YMCA) don't seem to be particular... (read more)

But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists.

That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising s... (read more)

1
kbog
7y
Okay, so there's some optimal balance to be had (there are always ways you can be more rigorous and less growth-oriented, towards a very unreasonable extreme). And we're trying to find the right point, so we can err on either side if we're not careful. I agree that dishonesty is very bad, but I'm just a bit worried that if we all start treating errors on one side like a large controversy then we're going to miss the occasions where we err on the other side, and then go a little too far, because we get really strong and socially damning feedback on one side, and nothing on the other side.

To be perfectly blunt and honest, it's a blog post with some anecdotes. That's fine for saying that there's a problem to be investigated, but not for making conclusions about particular causal mechanisms. We don't have an idea of how these people's motivations changed (maybe they'd have the exact same plans before having come into their positions, maybe they become more fair and careful the more experience and power they get).

Anyway the reason I said that was just to defend the idea that obtaining power can be good overall. Not that there are no such problems associated with it.

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through, and that it encourages you to be slightly evil more than you have to be.

I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do ... (read more)

1
kbog
7y
True. But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists.

Maybe, but this is common folk wisdom where you should demand more applicable psychological evidence, instead of assuming that it's actually true to a significant degree. Especially among the atypical subset of the population which is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

I think that people shouldn't donate at least 10% of their income if they think that doing so interferes with the best way for them to do good, but I don't think that the current pledge or FAQ supports breaking it for that reason.

Coming to the conclusion that donating >=10% of one's income is not the best way to do good does not seem like a normal interpretation of "serious unforeseen circumstances".

A version of the pledge that I would be more interested in would be one that's largely the same, but has a clause to the effect that I can stop donating if I stop thinking that it's the best way to do good, and have engaged with people in good faith in coming to that decision.

2
Benjamin_Todd
7y
I'm sympathetic to this, and didn't fulfill the pledge for several years early in CEA when we paid ourselves very little (initially only £15k pa!). However, I'm now fulfilling it and intend to make up the years when I didn't.

Something that surprised me from the Superforecasting book is that just having a registry of predictions helps, even when those predictions aren't part of a prediction market.

Maybe a prediction market is overkill right now? I think that registering predictions could be valuable even without the critical mass necessary for the market to have much liquidity. It seems that the advantage of prediction markets is in incentivizing people to try to participate and do well, but if we're just trying to track predictions that EAs are already trying to make then that might be enoug... (read more)
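For concreteness, here's a minimal sketch of what a bare-bones registry might look like, with no market mechanism at all. The class names and the Brier-score summary are just illustrative choices for the example, not a reference to any existing EA tool:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class Prediction:
    claim: str                      # e.g. "Org X hits its fundraising target this year"
    probability: float              # forecaster's stated credence that the claim is true
    made_on: date
    outcome: Optional[bool] = None  # filled in once the claim resolves


@dataclass
class Registry:
    """A public log of predictions -- no trading, no payouts, just a record."""
    predictions: List[Prediction] = field(default_factory=list)

    def register(self, claim: str, probability: float) -> Prediction:
        p = Prediction(claim, probability, date.today())
        self.predictions.append(p)
        return p

    def resolve(self, prediction: Prediction, outcome: bool) -> None:
        prediction.outcome = outcome

    def brier_score(self) -> float:
        """Mean squared error over resolved predictions; lower is better,
        and always guessing 50% scores 0.25."""
        resolved = [p for p in self.predictions if p.outcome is not None]
        if not resolved:
            raise ValueError("no resolved predictions yet")
        return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)
```

Even something this simple would let people check their calibration over time, which seems like most of what the registry point from Superforecasting is getting at.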

I really liked Larks' comment, but I'd like to add that this also incentivizes research teams to operate in secret. Many AI projects (and some biotech) are currently privately funded rather than government funded, and so they could profit by not publicizing their efforts.

1
Owen Cotton-Barratt
10y
This is true, although I think the number of researchers who would be happy to work on something illegally would be quite a lot lower than those happy to work on something legally. A similar effect I'm more worried about is pushing the research over to less safety-conscious regimes. But I'm not certain about the size of this effect; good regulation in one country is often copied, and this is an area where international agreements might be possible (and international law might provide some support, although it is untested: see pages 113-122 of this report in a geoengineering context).

My other point was that EA isn't new, but that we don't recognize earlier attempts because they weren't using evidence in a way that we would recognize.

I also think that x-risk was basically not something that many people would worry about until after WWII. Prior to WWII there was not much talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.

I agree with your points about there being disagreement about EA, but I don't think that they fully explain why people didn't come up with it earlier.

I think that there are two things going on here -- one is that the idea of thinking critically about how to improve other people's lives without much consideration of who they are or where they live, and then acting on the result of that thinking, isn't actually new, and the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of eff... (read more)

2
Katja_Grace
10y
The kinds of evidence available for some EA interventions, e.g. existential risk ones, doesn't seem different in kind to the evidence probably available earlier in history. Even in the best cases, EAs often have to lean on a combination of more rigorous evidence and some not very rigorous or evidenced guesses about how indirect effects work out etc. So if the more rigorous evidence available were substantially less rigorous than it is, I think I would expect things to look pretty much the same, with us just having lower standards - e.g. only being willing to trust certain people's reports of how interventions were going. So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.