
Jelle Donders

Founder/organizer @ EA Eindhoven and EA Tilburg
598 karma · Joined Dec 2021 · Working (0-5 years) · Pursuing a graduate degree (e.g. Master's) · 5175 Loon op Zand, the Netherlands
www.eaeindhoven.nl

Bio


Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups. 

BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.

How others can help me

How does one robustly set oneself up during one's studies and early career to contribute meaningfully to making transformative AI go well?

How can we grow the global capacity for people to work on the most pressing problems?

How I can help others

Community building and setting up new (university) groups.

Comments
48

Sounds good overall. 1% each for priorities, community building, and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest in global mental health in NL. I think the focus on entrepreneurship is great!

Hard to say, but his behavior (and the accounts from other people) seems most consistent with 1.

For clarity, it's on Saturday, not Friday! :)

The board must have thought things through in detail before pulling the trigger, so I'm still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don't.

If not, all of this indeed seems like a very questionable move.

If OP disagrees, they should practice reasoning transparency by clarifying their views.

 

OP believes in reasoning transparency, but their reasoning has not been transparent.

Regardless of what Open Phil ends up doing, I'd really appreciate it if they at least did this :)

I've shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.

Agreed. In a pinned comment of his he elaborates on why he went for the optimistic tone: 

honestly, when I began this project, I was preparing to make a doomer-style "final warning" video for humanity. but over the last two years of research and editing, my mindset has flipped. it will take a truly apocalyptic event to stop us, and we are more than capable of avoiding those scenarios and eventually reaching transcendent futures. pessimism is everywhere, and to some degree it is understandable. but the case for being optimistic is strong... and being optimistic puts us on the right footing for the upcoming centuries. what say the people??

It seems melodysheep went for a more passive "it's plausible the future will be amazing, so let's hope for that" framing over a more active "a great, terrible, or nonexistent future are all possible, so let's do what we can to avoid the latter two" framing. A bit of a shame, since it's this call to action where the impact is to be found.

As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you're coming from! I think many of the concerns here boil down to group culture and social dynamics that could arise irrespective of what cause areas people in the group end up focusing on.

You could imagine two communities whose members in practice work on very similar things, but whose culture couldn't be further apart:

  • An intellectually isolated community where the utmost importance of longtermism/AI safety is seen as self-evident. There are social dynamics that discourage certain beliefs and questions, including about those social dynamics themselves. Comes across as groupthinky/culty to anyone who isn't immediately on board.
  • An epistemically humble community that tries to figure out which projects would be most impactful for improving the world, a large fraction of whose members have tentatively concluded that AI safety appears very pressing and have subsequently decided to work on this cause area. People are aware of the tower of assumptions underlying this conclusion. The social dynamics of the group can be openly discussed. Comes across as truth-seeking.

I think it's possible for some groups to embody the culture of the latter example more, and to do so without necessarily focusing any less on longtermism and AI safety.

Wouldn't this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between longtermism and neartermism might have led to tensions, but I'm happy that at least there are still conferences, groups, and meet-ups where these different people are still talking to each other!

There might be an important trade-off here, and it's not clear to me what direction makes more sense.
