Clara Torres Latorre 🔸

Postdoc @ CSIC
310 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
88

Hey, I like your progressive pledge tool. How hard would it be to include places outside the US? And more currencies?

I sometimes check this site for cost-of-living comparisons around the world; it's not perfect, but it gives you some idea, at least for big cities:
https://www.numbeo.com/cost-of-living/
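To be concrete about what I'm imagining, here's a minimal sketch of a location-aware version. The bracket schedule, conversion factors, and function name are all made up for illustration; this isn't your tool's actual API:

```python
# Hypothetical sketch of a location-aware progressive pledge.
# Brackets and rates are invented for illustration, not a recommendation.

BRACKETS_USD = [          # (annual income threshold in USD, marginal pledge rate)
    (30_000, 0.01),
    (60_000, 0.05),
    (float("inf"), 0.10),
]

# Rough conversion factors into USD; in practice you'd want
# purchasing-power-adjusted rates rather than raw exchange rates.
USD_PER_UNIT = {"USD": 1.0, "EUR": 1.1, "GBP": 1.3}

def suggested_pledge(income: float, currency: str = "USD") -> float:
    """Return the suggested annual pledge in the local currency."""
    rate_to_usd = USD_PER_UNIT[currency]
    income_usd = income * rate_to_usd
    pledge_usd, lower = 0.0, 0.0
    for upper, rate in BRACKETS_USD:
        if income_usd <= lower:
            break
        taxable = min(income_usd, upper) - lower   # income falling in this bracket
        pledge_usd += taxable * rate
        lower = upper
    return pledge_usd / rate_to_usd  # convert back to local currency

print(round(suggested_pledge(40_000, "EUR"), 2))  # -> 909.09
```

The arithmetic is the easy part; the hard part is presumably sourcing decent cost-of-living or purchasing-power data per location, which is why I find sites like Numbeo relevant.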
 
At the same time, the good thing about 10% is that it is a way stronger Schelling point than a progressive tax, so I suppose it's better for signaling.

For me it's even more than what you say. I was thinking that even for most people working on AI or bio risk, the threats usually feel quite real on a scale of decades, and they could be personally affected. The numbers may change, but I think for most people working in EA cause areas, their work is well justified without appealing to impartiality (radical empathy would be enough, and it's less demanding) or longtermism.

Strongly agree.

For me, the discussion of impartiality (first day of the intro program) and longtermism (which isn't necessary for many of the suggested action points) were moments of doubt. So was 80k narrowing its focus to transformative AI and alienating people who don't share that worldview.

Somehow I stuck around anyway.

But I think many of the things EA proposes don't need people to buy the whole package, and we are missing out on impact by leading with strong philosophical stuff.

Non-American here.

I read that sentence as rhetorical, like "doing whatever is necessary", and I don't see it implying that "defending America" is necessarily even good.

However, if your reading is the right one, then I find it off-putting as well.

I would appreciate @Mjreard clarifying what the intent behind that was.

At least the 80k pivot to a narrow focus on AI seems to back this point.

Talking to an LLM is extremely sensitive to how you frame things, and to your conversation history and config files.

Not clear that what worked for you would work in general.

Yes. I'm one of those possible people. I'm happy to have reached mutual understanding.

Okay. Thank you for your patience. I understand your point, and agree with the formal argument.

However, I still disagree. I don't know how to explain why without using some maths.

Let A be a subset of B, both sets of actions. Let G be the set of actions that we ought to do.

Existential generalization is something like:

If there exists x in A ∩ G, then there exists x in B ∩ G.

But this is not how I would expect readers to understand "we ought to build more confined animal feeding operations" in your abstract. It reads like a general recommendation, or even an unqualified/universal statement, not like an existential one.
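To spell out the contrast in the notation above (my formalization, reading B as the set of CAFO-building actions):

```latex
% Existential reading (what the formal argument supports):
\exists x \, (x \in B \land x \in G)
% "Some CAFO-building action is one we ought to do."

% Universal/generic reading (how I expect readers to take the abstract):
\forall x \, (x \in B \rightarrow x \in G)
% "CAFO-building actions are, as such, actions we ought to do."
```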

And let me add: even if the formal argument is airtight in your examples, it doesn't sound as obvious (to my intuition, it sounds obviously wrong) in your original case. This suggests that the same words mean different things in the different contexts, at least in how I'm reading them.

Thank you for spelling out your reasoning in such a transparent way. I think our disagreement is not a matter of stylistic preferences.

I believe the following is incorrect:

If [we should build more CAFOs of the kind in which animals have above 0 welfare], then [we should build more CAFOs].

Let me rephrase your argument as:

If [building CAFOs with net welfare > 0 is something we should do], then [building CAFOs is something we should do].

I believe for this to hold you would need to know that [CAFOs with net welfare < 0] are impossible, not just that [CAFOs with net welfare > 0] are possible.
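As a sketch of the gap (my notation, not yours: w(x) for the net welfare of a CAFO x, O(p) for "we ought to bring about p"):

```latex
% What the possibility premise licenses: a conditional on the good kind.
w(x) > 0 \;\Rightarrow\; O(\mathrm{build}\ x)

% What the abstract's sentence reads as: a generic obligation.
O(\mathrm{build\ more\ CAFOs})

% Getting from the first to the second needs the stronger premise that
% negative-welfare CAFOs are ruled out:
\forall x \, \bigl( w(x) > 0 \bigr)
% not merely the possibility premise:
\exists x \, \bigl( w(x) > 0 \bigr)
```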

Hi Vera,

In principle, I agree with the meta point you make here. I think it's fine not to state every premise in the abstract and the conclusion, if it's something that is argued for.

I also agree that "net positive welfare is possible in CAFOs" is not an assumption, but a premise that is argued for (and I find the arguments sound).

However, I still think the abstract as it stands now is saying something different, namely, that [maximizing aggregate welfare] => [we should build more CAFOs]. 

AFAIK, this would be the logical conclusion from aggregationism if we assume that [animals in CAFOs have net positive lives], not merely that [it is possible that animals in CAFOs have net positive lives].
