Isaac Dunn
248 karma · Joined Jul 2020
Comments (33)

Sounds excellent! Roughly how large is large?

Thanks for the reply!

If I understand correctly, you think that people in EA do care about the sign of their impact, but that in practice their actions don't align with this and they might end up having a large impact of unknown sign?

That's certainly a reasonable view to hold, but given that you seem to agree that people are trying to have a positive impact, I don't see how using phrases like "expected value" or "positive impact" instead of just "impact" would help.

In your example, SBF seems to be talking about quickly making grants that have positive expected value, and he uses the phrase "expected value" three times.

I think when people talk about impact, it's implicit that they mean positive impact. I haven't seen anything that makes me think that someone in EA doesn't care about the sign of their impact, although I'd certainly be interested in any evidence of that.

When someone learns about effective altruism, they might realise how large a difference they can make. They might also realise how much greater a difference a more diligent/thoughtful/selfless/smart/skilled version of themselves could make, and they might start to feel guilty about not doing more or being better.

Does Kristin have any advice for people who are new to effective altruism about how best to reduce these feelings? (Or advice on how we could communicate about effective altruism in a way that might prevent these problems?)

Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn't update my beliefs much, and I should ask for their reasons. Ideally, they'd have compelling reasons for their beliefs.

That said, I think I might be slightly more optimistic than you about how useful forecasting is. I think that my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.

I agree we should be skeptical! (Although I am open to believing such events are possible if there seem to be good reasons to think so.)

But while the intractability stuff is kind of interesting, I don't think it actually says much about how skeptical we should be of different claims in practice.

I agree that we should be especially careful not to fool ourselves that we have worked out a way to positively affect the future. But I'm overall not convinced by this argument. (Thanks for writing it, though!)

I can't quite crisply say why I'm not convinced. But as a start, why is this argument restricted just to longtermist EA? Wouldn't these problems, if they exist, also make it intractable to say whether (for example) the outcome intended by a near-term-focused intervention has positive probability? The argument seems to prove too much.

Thanks for explaining, I hadn't realised that, and it makes it much more attractive to follow your advice!

You mention that reliance on Google is bad - I'd be interested to hear more about why you think that's true. (I agree that EA relies on Google services a lot.)

It seems that if we can trust Google, then the in-transit encryption that Gmail provides is good enough.

It seems that it is not possible to do what you're suggesting using (say) Gmail's web interface or phone app. I expect that giving up the features these provide would be a noticeable ongoing cost for me - for example, I expect Thunderbird to do much worse than Gmail at automatically categorising my incoming mail. Does that seem right?

Also, the specific upsides you mentioned don't seem that compelling to me, and the general argument of "you don't know why it might be useful, but it might be" applies to too many things to be worth the costs.
