I’m Jordan, and I recently joined as the Content Coordinator on the EA Global team at CEA. I’d love to hear from the community about what content you’d like to see at future conferences. You can see upcoming conference dates here.
How do we usually select content?
Traditionally, our content selection focuses on:
- Informing attendees about important developments in relevant fields (e.g. founders discussing new organisations or projects, researchers sharing their findings)
- Diving deeper into key ideas with experts
- Teaching new skills relevant to EA work
Some recent sessions that were well-received included:
- Panel – When to shut down: Lessons from implementers on winding down projects
- Talk – Neela Saldanha: Improving policy take-up and implementation in scaling programs
- Workshop – Zac Hatfield Dodds: AI Safety under uncertainty
However, we recognise that conference content can (and perhaps should) fulfil many other roles, so your suggestions shouldn’t be constrained by how things have been done in the past.
What kinds of suggestions are we looking for?
We welcome suggestions in various forms:
- Specific speakers: Nominate people who you think would make great speakers (this can be yourself!).
- Topic proposals: Suggest topics that you believe deserve more attention.
- Session format ideas: Propose unique formats that could make sessions more engaging (e.g., discussion roundtables, workshops, debates).
To get an idea of what types of content we’ve had in the past, check out recordings from previous EA Global conferences.
We have limited content slots at our conferences, which means we can't promise to follow up on every suggestion. However, every suggestion helps us better understand what our attendees want to see and can provide jumping-off points for new ideas.
How to Submit Your Suggestions:
- Comment on this post and discuss your ideas with other forum users.
- Fill out this form or email speakers@eaglobal.org if you’d prefer not to post publicly.
Your input can help shape future EAGs to be even more impactful. I look forward to hearing your suggestions!
Nice, thanks for sharing! I'll actually give you a different answer than last time, after thinking about this a bit more (and maybe understanding your questions better). :)
> Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?
Not sure that's what you meant, but I don't think the effects of these decay in the sense of having a big short-term impact and a negligible long-term impact (this is known as the "ripple in a pond" objection to cluelessness [1]). I think their long-term impact is substantial, but that we just have no clue whether it's good or bad, because that depends on so many long-term factors that the people carrying out these short-term interventions ignore and/or can't possibly estimate in an informative, non-arbitrary way.
So I don't know how to respond to your first question, because it seems to implicitly assume something I find impossible, and goes against how causality works in our complex world (?).
> Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall.
Answering the second question:
1. Yes, one could argue that.
2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on which kinds of wellbeing matter. Maybe we're neglecting a crucial consideration, such as "there have been more people with cluster headaches as the population has increased, and these are so bad that they outweigh all the good stuff". Or maybe we're totally missing a similar kind of crucial consideration I can't think of.
3. Maybe most importantly, in the real world outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless, because I could ignore humans' impact on aliens and other non-humans.
And to develop on 1:
> Do we have examples where the posterior counterfactual effects were positive at first, but then became negative instead of decaying to 0?
- Some AI behaved very well at first and did great things, and then there was some distributional shift and it did bad things.
- Technological development arguably improved everyone's life at first, and then it led to things like the manufacture of torture instruments and widespread animal farming.
- Humans were incidentally reducing wild animal suffering by deforesting, but then they started becoming environmentalists and rewilding.
- Alice's life seemed wonderful at first, but she eventually came down with a severe chronic mental illness.
- Some pill helped people like Alice at first, but then made their lives worse.
- The Smokey Bear campaign reduced wildfires at first, and then it turned out to have increased them.
[1] See e.g. James Lenman's and Hilary Greaves' work on cluelessness for rejections of this argument.