
vin

49 karma · Joined Sep 2019

Posts: 1
Comments: 8

Thanks for writing this! When I read the title, I first thought the article would argue that other cause areas are also important, despite some people acting as if AI makes them unimportant. I'm glad I clicked on it anyway!

vin · 7mo

Thanks for doing this!

I originally also got the top posts from the AI Alignment Forum, but they were all cross-posted on LessWrong? Is that always true? Anyone know?

Yes, everything gets cross-posted!

Answer by vin · Aug 20, 2023

My guess is:

  • AI safety people would argue that if the world is destroyed, this improved happiness doesn't buy us much
  • Animal welfare people would argue that there is much lower-hanging fruit for improving animals' lives, so focusing on humans isn't the best we can do
  • Global health/well-being people tend towards less speculative, less "weird" interventions (?)

I still think there are probably a lot of people who could get excited about this topic, and it might be the right time to start pitching it to EAs.

(Also, side note: maybe you're already aware of it, but Sasha Chapin is basically researching enlightenment: https://sashachapin.substack.com/p/hey-why-arent-we-doing-more-research )

I really appreciate your transparency about how you allocate funding! Thank you for this post!

vin · 1y

Two worries:

  1. Doesn't applying to many funds at once take up more grantmaker time, of which there is already too little?
  2. Doesn't this lead to some kind of funder bystander effect? I.e., funders thinking "better not fund this; lots of other funders know about it, so better fund the people who applied only to us and are less likely to get other funding counterfactually"

I really like the idea! Maybe it's just me, but I'd much prefer it if the content were visible without an account. (This actually stopped me from engaging with it.)

That's a good point: the upset person in the conversation might be prone to being taken less seriously, even by themselves, especially if their reasons are hard to describe but not necessarily wrong.

Looking back at these situations through this lens, I actually think at one point I didn't take myself seriously enough.

If my reasons are fuzzy and I'm upset, it is tempting to conclude that I'm just being silly. A better framing is to view negative emotions as a kind of pointer that says: "Hey, there is still some unresolved issue with this topic. There may actually be a good reason why I have this emotion. Let's investigate where it comes from."

For the non-offended person, I think it already helps a lot to keep in the back of your mind the possibility that a topic may be emotional. For example, many people aren't aware that privacy can be an emotional topic for some.