Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
How does one robustly set oneself up, during one's studies and early career, to contribute meaningfully to making transformative AI go well?
How can we increase the global capacity of people working on the most pressing problems?
Community building and setting up new (university) groups.
If OP disagrees, they should practice reasoning transparency by clarifying their views
OP believes in reasoning transparency, but their reasoning has not been transparent
Regardless of what Open Phil ends up doing, I'd really appreciate it if they at least did this :)
Agreed. In a pinned comment, he elaborates on why he went for the optimistic tone:
honestly, when I began this project, I was preparing to make a doomer-style "final warning" video for humanity. but over the last two years of research and editing, my mindset has flipped. it will take a truly apocalyptic event to stop us, and we are more than capable of avoiding those scenarios and eventually reaching transcendent futures. pessimism is everywhere, and to some degree it is understandable. but the case for being optimistic is strong... and being optimistic puts us on the right footing for the upcoming centuries. what say the people??
It seems melodysheep went for a more passive "it's plausible the future will be amazing, so let's hope for that" framing rather than a more active "a great, a terrible, or a nonexistent future are all possible, so let's do what we can to avoid the latter two" framing. A bit of a shame, since it's this call to action where the impact is to be found.
As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you're coming from! I think many of the concerns here boil down to group culture and social dynamics that can arise irrespective of which cause areas people in the group end up focusing on.
You could imagine two communities whose members in practice work on very similar things, but whose cultures couldn't be further apart:
I think it's possible for some groups to embody the culture of the latter example more, and to do so without necessarily focusing any less on longtermism and AI safety.
Wouldn't this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between long/neartermism might have led to tensions, but I'm happy that at least there are still conferences, groups and meet-ups where these different people are talking to each other!
There might be an important trade-off here, and it's not clear to me what direction makes more sense.
How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!
One thing I still wonder: How do big donors like Moskovitz and Tuna, and what they want, factor into all this?
The board must have thought things through in detail before pulling the trigger, so I'm still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info that they have and we don't.
If not, all of this indeed seems like a very questionable move.