Hey, I like your progressive pledge tool. How hard would it be to include places outside the US? And more currencies?
I sometimes check this site for cost-of-living comparisons around the world; it's not perfect, but it gives you some idea, at least for big cities:
https://www.numbeo.com/cost-of-living/
At the same time, the good thing about 10% is that it's a much stronger Schelling point than a progressive schedule, so I suppose it's better for signaling.
For me it's even more than what you say. I was thinking that even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think that for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or longtermism.
Strongly agree.
For me, the discussions of impartiality (on the first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k narrowing its focus to transformative AI and alienating people who don't agree with that worldview.
Somehow I still stuck around.
But I think many of the things EA proposes don't require people to buy the whole package, and we're missing out on impact by leading with the strong philosophical stuff.
Non-American here.
I read that sentence as rhetorical, like "doing whatever is necessary", and I don't see it implying that "defending America" is necessarily even good.
However, if your reading is the right one, then I find it off-putting as well.
I would appreciate @Mjreard clarifying what the intent behind that was.
Okay. Thank you for your patience. I understand your point, and agree with the formal argument.
However, I still disagree. I don't know how to explain why without using some maths.
Let A be a subset of B, both sets of actions. Let G be the set of actions that we ought to do.
Existential generalization is something like:
If ∃x (x ∈ A ∩ G), then ∃x (x ∈ B ∩ G).
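In case it helps, here is a minimal sketch of that valid inference in Lean (assuming Mathlib; the set names match the definitions above):

```lean
import Mathlib.Data.Set.Basic

-- Given A ⊆ B: if some action in A is one we ought to do,
-- then some action in B is one we ought to do.
example {α : Type} {A B G : Set α} (hAB : A ⊆ B) :
    (∃ x, x ∈ A ∧ x ∈ G) → (∃ x, x ∈ B ∧ x ∈ G) :=
  fun ⟨x, hxA, hxG⟩ => ⟨x, hAB hxA, hxG⟩

-- Note the generic reading "we ought to do B-type actions", i.e.
-- ∀ x ∈ B, x ∈ G, does not follow from these premises.
```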
But this is not how I would expect readers to understand "we ought to build more confined animal feeding operations" in your abstract. That reads like a general recommendation, or even an unqualified universal statement, not like an existential claim.
And let me add: even if the formal argument is airtight in your examples, it doesn't sound as obvious in your original case (to my intuition, it sounds obviously wrong there). This suggests that the same words mean different things in the different contexts, at least as I'm reading them.
Hi, welcome to the EA Forum. It's nice to see philosophical ideas that don't come from the dominant tradition here.
Your argument rests on the premise that every human has liangzhi but large models don't.
I'm skeptical of that premise, because the innate sense of right and wrong can be culture-dependent, and there are people with neurological and psychological conditions who don't have that same experience.
How does that fit into your worldview?