Clara Torres Latorre 🔸

Postdoc @ CSIC
343 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
114

I think the devil is in the details. The principles are fine but what really matters is the operationalization. Hard to tell without more information about the program.

How so?

As I read it, this post compares against donations to GiveWell, not donations to AI safety research.

I think there is a huge difference between:

  • Being hired by an EA (TM) org
  • Doing something counterfactually impactful

If you were hired by an EA org as paid staff, you only get credit for the margin by which you are better than the next possible hire.

On the other hand, if you have a normal job and donate, all the donations are counterfactual.

Similarly, if you do unpaid work the bar is much lower, and would be something akin to "are the coordination costs worth it?".

BTW, earning to give is still valuable (if you can donate over $50,000 a year).

This is still an extremely high bar.

Hi,

I care instrumentally, because it may negatively impact the welfare of other people.

And yes, I agree that this is totally compatible with rejecting egalitarianism and prioritarianism, but it's not so obvious.

I was trying to illustrate why I think many people endorse some sort of egalitarianism and have thoughts like "inequality bad", which are easy to confuse with "inequality intrinsically bad".

I have another intuition for egalitarianism: the distribution of power.

Most resources in our world can be traded for influence/power, such as money, time and materials.

Therefore, in the real scenarios that guide our intuitions, inequality is associated with concentration of power.

To put it in a caricatured example: I don't care if TechnoBro 3000 celebrates his birthday in the asteroid belt with his 10^30 gold-plated robot friends, but I do care if he can buy the elections of Democratistan.

This is not a rebuttal of the narrow definition of egalitarianism, but is close enough to work as an intuition pump if we are not being very theoretical.

Maybe because "P(doom) ranges from 10% to 99%" excludes many people who state a lower P(doom) or refuse to state a number.

Maybe because "Will MacAskill sits at 10–20%, calling himself 'optimistic today' — but notes this is among the lowest estimates in serious circles" implies that people who state a lower number are not serious.

Those were my reasons for considering a downvote. In the end I didn't, because by then the post was already in the negatives.

There is a huge selection bias coming into play here, where people that appear in AI safety podcasts or use the expression P(doom) have self-selected for higher numbers than people that don't, and this is not addressed in the post.

Questions:

How does your estimate of $800 per DALY compare to other, more established interventions such as insecticide-treated bednets and vitamin A supplementation?

Do you have any cost effectiveness numbers for screening?

Oh, I understand.

Unfortunately my German is not that good, and I'm worried an AI translation would cause some of the problems I mentioned earlier.

Yes, I meant in terms of style, but also in the sense that AI adds filler and obfuscates how ideas relate to each other.

Can you link the paper here?
