Clara Torres Latorre 🔸

Postdoc @ CSIC
145 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation (2)

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments (41)

I like your post, especially the vibe of it.

At the same time, I have a hard time understanding what "quit EA" even means:

Stop saying you're EA? I guess that's fine.

Stop trying to improve the world using reason and evidence? Very sad. If so, probably reread this post 50 times; I hope it convinces you otherwise.

The claim that 99% of karma-weighted tagged posts are about AI seems wrong.

If you check the top 4 posts of all time, the 1st and 3rd are about FTX, the 2nd about earning to give, and the 4th about health, totalling > 2k karma.

You might want to check for bugs.

I started, and then realised how complicated it is to choose a set of variables and weights to make sense of "how privileged am I" or "how lucky am I".

I have an MVP (but ran out of free LLM assistance), and right now the biggest downside is that if I include several variables, the results tend to be far from the top. And I don't know what to do about this.

For instance, let's say that for "healthcare access", having good public coverage puts you in the top 10% bracket (number made up). Then, if you pick the 95th percentile as the reference point for that variable, any weighted average including it will fall some distance short of the top.

So a plain weighted average of the different questions is not good enough, I guess. A small sketch of the issue is below.
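To make the problem concrete, here is a minimal sketch (my own made-up numbers, not the actual MVP) of the weighted-average approach, showing how a single capped variable keeps the combined score away from the top no matter how strong the other variables are:

```python
# Minimal sketch: combine several percentile estimates with a weighted
# average. If any variable is capped below 100, the combined score can
# never reach the top.

def combined_percentile(estimates: dict) -> float:
    """estimates maps variable name -> (percentile, weight)."""
    total_weight = sum(w for _, w in estimates.values())
    return sum(p * w for p, w in estimates.values()) / total_weight

# Made-up numbers: strong income and education are dragged down because
# "healthcare access" is capped at the 95th percentile.
example = {
    "income (PPP-adjusted)": (98.0, 0.5),
    "education": (99.0, 0.3),
    "healthcare access": (95.0, 0.2),  # good public coverage = top bracket, capped at 95
}

print(combined_percentile(example))  # 97.7 -- short of 100 by construction
```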

We can discuss and workshop it if you want.

I love the sentiment of the post, and tried it myself.

I think a prompt like this makes answers less extreme than they actually are, because it gives a vibes-based answer instead of a model-based one. I would be surprised if you are not in the top 1% globally.

I would really enjoy something like this but more model-based, like the GWWC calculator. Does anyone know of something similar? Should I vibe code it and then ask for feedback here?
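For what "more model-based" could mean, here is a hypothetical sketch in the spirit of the GWWC calculator: place an answer in a reference distribution instead of asking an LLM for a guess. The sample incomes below are invented placeholders; a real tool would use survey data.

```python
# Hypothetical sketch of a model-based percentile estimate: look up where
# a value falls in a sorted reference sample (empirical CDF), rather than
# relying on a vibes-based LLM answer. Reference data is made up.
from bisect import bisect_right

# Made-up, sorted sample of annual incomes (PPP-adjusted USD) standing in
# for the real distribution data a proper tool would use.
reference_incomes = sorted([800, 1500, 2500, 4000, 6500, 9000, 14000,
                            22000, 35000, 60000, 120000])

def income_percentile(income: float) -> float:
    """Share of the reference sample earning at most `income`."""
    return 100 * bisect_right(reference_incomes, income) / len(reference_incomes)

print(income_percentile(50000))  # ~81.8% with this toy sample
```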

I tried this myself and I got "you're about 10-15% globally", which I think is a big underestimate.

For context, my PPP-adjusted income is in the top 2%, I have a PhD (top 1% globally? less?), and I live alone in an urban area.

Asking further, a big factor pushing the estimate down is that I rent the place I live in instead of owning it (which, don't get me started on that from a personal finance perspective, but it shouldn't make that big a gap, I guess?).

I don't identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.

I agree with you that there's a problem with safetywashing, conflicts of interest, and bad epistemic practices in mainstream EA AI safety discourse.

My problem with this post is that the arguments are presented in a "wake up, I'm right and you are wrong" way, directed at a group that includes people who have never thought about what you're talking about and people who already agree with you.

I also agree that the truth sometimes irritates, but that doesn't mean I should trust something more just because it irritates.

I think there is a problem with the polls all showing the same title.

fixed

I feel lumped in with them because you use the second person plural. It's not a glitch; it's a direct consequence of how you write.

What I'm saying is: maybe you're right about the pause agenda, I don't know.

But if you come to a group of people saying "you are just wrong", that is not engaging, and I end up feeling irritated instead of considering your case.

There are many different people in EA with different takes.

By claiming "you are just wrong" in the second person plural, you are making it harder for people who are not in the "want to build AI" camp to engage with your object-level arguments.

Why don't you defend your point?

I imagine that people who are not already part of the AI safety memeplex could find them convincing. Why not engage with them?

Btw I'm undecided on what the right marginal actions are wrt AI and am trying to form my inside view.

I'm in academia and my plan A is to pivot my research focus to something impactful.

Time will tell, though; I'm open to considering other options if they arise.
