Jelle Donders

University group organizer working on community building projects in the Netherlands @ EA Eindhoven
Working (0-5 years experience)

Bio

Participation
8

Heya, after finishing my bachelor’s in Biomedical Engineering at TU/e I’m now working on various EA-related projects in the Netherlands. The main one for this academic year (2022-2023) will be EA Eindhoven, for which my co-organizer and I are part of CEA's University Group Accelerator Program. I’m a generalist at heart who’s fundamentally driven by a desire to understand how this endlessly fascinating and complex world of ours works, and how we can use this understanding to make the world we pass on a better place.

How others can help me

Advice and support on how we can increase the capacity for people to work on the most pressing problems in continental Europe. If counterfactual impact is what we care about, a lot of potential presumably lies in building up this capacity rather than redirecting everyone to existing opportunities in the US and UK.

How I can help others

Setting up new university groups, and brainstorming about what kinds of ambitious projects to embark on once you have a stable group going.

Comments
31

If that's your goal, I think you should try harder to understand why core org EAs currently don't agree with your suggestions, and try to address their cruxes. For this ToC, "upvotes on the EA Forum" is a useless metric; all you should care about is persuading a few people who have already thought about all of this a lot. I don't think that your post here is very well optimized for this ToC.

... I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more).

If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn't there more reasoning transparency on this? Besides being good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.

As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.

It will take a while to break all of this down, but in the meantime, thank you so much for posting this. This level of introspection is much appreciated.

Does anyone know of good resources for getting better at forecasting, rather than just practicing randomly? I'm really looking forward to this course getting released, but it's still in the works.

Well done! As EA continues to grow across the globe, a central directory with overviews for all active countries/regions might become increasingly valuable. The closest thing to this currently seems to be this.

Sounds like a reasonable decision to me, but I do wonder why the reasoning behind such large and not immediately obvious decisions isn't communicated publicly more often.

let decisions be guided less by what we think looks good, and more by what we think is good

Totally agree, as long as you give people the opportunity to figure out why you think it's good.

Anyway, thanks for clarifying!

Good point, I didn't make clear what I meant with the last sentence. Would this rephrasing make sense to you?

If people are finding out about "EA buying a castle" from Émile Torres or the New Yorker and we can't point to any kind of public statement or justification, then we're probably doing something wrong.

I also agree the content of some of these criticisms wouldn't change even if there were a public post, but I don't think the same applies to people's responses to them. If a reasonable person stumbles across Torres or the New Yorker criticizing EA for buying a castle, they would probably be a lot more forgiving towards EA if they can be pointed to a page on CEA's website that explains the decision, written before any of these criticisms, as opposed to finding a complete lack of records or acknowledgement on (C)EA's side.

In general, taking reasoning transparency more seriously seems like low-hanging fruit for making the communication from EA orgs to both the movement and the public at large more robust, though I might be missing something, in which case I'd love it if someone could point it out to me.

Regardless of whether there is an economic argument to be made for this decision, as Nathan Young and others are implying, clearly communicating and justifying large expenses seems worthwhile for the sake of transparency alone. If people are finding out about "EA buying a castle" from Émile Torres or the New Yorker (EDIT: and we can't point to any kind of public statement or justification), then we're probably doing something wrong.

A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

This has indeed always been the case, but I'm glad it is now pointed out so explicitly. The overgeneralization from “FTX/SBF did unethical stuff” to “EA people think the ends always justify the means” is very easy to make for people who are less familiar with EA. Perhaps even SBF fell for this kind of reasoning, though his motivations remain speculation for now.

It would probably be for the better to make the faulty nature of “ends justify the means” reasoning (or the distinction between naive and prudent utilitarianism) a core EA cultural norm that people can't miss.

Very glad you're emphasizing that last question! I can easily see the narrative shift from 'SBF/FTX did unethical stuff' to 'EA people think the ends always justify the means', even though shallow utilitarian calculus that ignores all second-order effects rarely holds up. For example, if it were normalized for doctors to kill patients whenever harvesting their organs could save more lives, the result would be a paranoid dystopia where everyone fears hospitals; even the purest of utilitarians shouldn't support that.

However, for someone less familiar with EA this overgeneralization is very easy to make, so I think we should be more explicit about refuting this type of reasoning.

Edit: Good to see stuff is being written about this!
