2335 karma · Joined Mar 2017


Was community director of EA Netherlands; had to quit due to long COVID.

I have a background in philosophy, risk analysis, and moral psychology. I also did some x-risk research.


Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.

This suggests people's expected x-risk levels are really small ('extreme levels of caution'), which isn't what people actually believe.

I think "if you believe the probability that a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'. It's not at all a fringe moral position.

Although I agree with pretty much all he writes, I feel like a crucial piece on the FTX case is missing: it's not only the failure of some individuals to exercise reasonable humility and abide by common sense virtues. It's also a failure by the community, its infrastructure, and processes to identify and correct this problem.

(The section on SBF starts with "When EAs Have Too Much Confidence".)

What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?

So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)

  1. Values safety, but values personal status & power more
  2. Values safety, but believes he needs to be in control of everything & has a messiah complex
  3. Doesn't really care about safety, it was all empty talk
  4. Something else

I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.

Good luck!

Quick thoughts:

  • I think another URL (e.g. "forum.animaladvocacy.com") would be more accessible?
  • A pinned post on the AA Forum explaining the initiative and what FAST is might be helpful

That is, unless you specifically want to keep it for FAST members.

Not that we can do much about it, but I find the idea of Trump being president in a time that we're getting closer and closer to AGI pretty terrifying.

A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.

I know! I wanted to tag Jan Willem van Putten but didn't know how to do that (on mobile)

I was surprised to see that the Finance position is volunteer. That seems out of line with the responsibilities?
