
Jelle Donders

Founder/organizer @ EA Eindhoven and EA Tilburg
509 karma · Joined Dec 2021 · Working (0-5 years) · Pursuing a graduate degree (e.g. Master's) · 5175 Loon op Zand, Netherlands
www.eaeindhoven.nl

Bio

Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups. 

BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.

How others can help me

How does one robustly set oneself up during one's studies and early career to contribute meaningfully to making transformative AI go well?

How can we increase the global capacity for people to work on the most pressing problems?

How I can help others

Community building and setting up new (university) groups.

Comments
42

Agreed. In a pinned comment of his, he elaborates on why he went for the optimistic tone:

honestly, when I began this project, I was preparing to make a doomer-style "final warning" video for humanity. but over the last two years of research and editing, my mindset has flipped. it will take a truly apocalyptic event to stop us, and we are more than capable of avoiding those scenarios and eventually reaching transcendent futures. pessimism is everywhere, and to some degree it is understandable. but the case for being optimistic is strong... and being optimistic puts us on the right footing for the upcoming centuries. what say the people??

It seems melodysheep went for a more passive "it's plausible the future will be amazing, so let's hope for that" framing over a more active "a great, a terrible, and a nonexistent future are all possible, so let's do what we can to avoid the latter two" framing. A bit of a shame, since it's this call to action where the impact is to be found.

As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you're coming from! I think many of the concerns here boil down to group culture and social dynamics that could arise irrespective of which cause areas people in the group end up focusing on.

You could imagine two communities whose members in practice work on very similar things, but whose culture couldn't be further apart:

  • An intellectually isolated community where the utmost importance of longtermism/AI safety is treated as self-evident. Social dynamics discourage certain beliefs and questions, including questions about those dynamics themselves. Comes across as groupthinky/culty to anyone who isn't immediately on board.
  • An epistemically humble community that tries to figure out which projects would be most impactful for improving the world, a large fraction of whose members have tentatively concluded that AI safety is very pressing and have subsequently decided to work on this cause area. People are aware of the tower of assumptions underlying this conclusion. The group's social dynamics can be openly discussed. Comes across as truth-seeking.

I think it's possible for some groups to embody the culture of the latter example more, and to do so without necessarily focusing any less on longtermism and AI safety.

Wouldn't this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between longtermism and neartermism might have led to tensions, but I'm happy that at least there are still conferences, groups and meet-ups where these different people are talking to each other!

There might be an important trade-off here, and it's not clear to me what direction makes more sense.

Here's the EAG London talk that Toby gave on this topic (maybe link it in the post?).

How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!

One thing I still wonder: how do big donors like Moskovitz and Tuna, and what they want, factor into all this?

Somewhat sceptical of this, mainly because of the first 2 counterarguments mentioned:

  • In my view, a surprisingly large fraction of people now doing valuable x-risk work originally came in from EA (though also a lot of people have come in via the rationality community), compared to how many I would have expected, even given the historical strong emphasis on EA recruiting. 
  • We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.

Focusing on the underlying search for what is most impactful seems a lot more robust than focusing on the main opportunity this search currently nets. An EA/longtermist is likely to take x-risk seriously as long as it is indeed a top priority, but you can't flip this. Without any impact-driven meta framework, the ability of people working on the world's most pressing problems to update on what is most impactful to work on (arguably the core of what makes EA 'work') would decline.

An "x-risk first" frame could quickly become more culty/dogmatic and less epistemically rigorous, especially if it's paired with a lower resolution understanding of the arguments and assumptions for taking x-risk reduction (especially) seriously, less comparison with and dialogue between different cause areas, and less of a drive for keeping your eyes and ears open for impactful opportunities outside of the thing you're currently working on, all of which seems hard to avoid.

It definitely makes sense to give x-risk reduction a prominent place in EA/longtermist outreach, and I think it's important to emphasize that you don't need to "buy into EA" to take a cause area seriously and contribute to it. We should probably also build more bridges to communities that form natural allies. But I think this can (and should) be done while maintaining strong reasoning transparency about what we actually care about and how x-risk reduction fits in our chain of reasoning. A fundamental shift in framing seems quite rash.

EDIT: 

More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.

Agreed that more experimentation would be welcome though!

I really want to create an environment in my EA groups that's high in what is labelled "psychological safety" here, but it's hard to make this known and felt by others, especially in larger groups. The best I've got is to explicitly state the kind of environment I would like to create, but I feel like there's more I could do. Any suggestions?

What do the recent developments mean for AI safety career paths? I'm in the process of shifting my career plans toward 'trying to robustly set myself up for meaningfully contributing to making transformative AI go well' (whatever that means), but everything is developing so rapidly now and I'm not sure in what direction to update my plans, let alone develop a solid inside view on what the AI(S) ecosystem will look like and what kind of skillset and experience will be most needed several years down the line.

I'm mainly looking into governance and field building (which I'm already involved in) over technical alignment research, though I want to ask this question in a more general sense since I'm guessing it would be helpful for others as well.
