I thought it was interesting that, in his recent posts about decentralising EA, Will MacAskill said he will avoid giving opening and closing speeches at EAG.

Currently, the process by which speakers are selected for EAG seems opaque to me, and most talks appear to be given by 'senior EAs' and 'EA leaders' with high social status in the community.

To reduce the risk of speakers being selected based on their social status in the community, I think accepted attendees should be able to submit blinded applications containing ideas for talks and workshops at EA conferences. The conference organisers would then select the talks and workshops they believe will provide the most value.

I think this could be a nice way to get more value from EA conferences, increase the diversity of speakers and workshop hosts, and reduce the impression that specific individuals are 'the face' of EA, thereby spreading out PR risk and reducing groupthink.

Comments

Isobel here - I work at CEA programming content for EA Globals (but not EAGxs, whose content is planned by the local organising teams). Thanks for this suggestion; I think it's an interesting idea and possibly worth trialling at a lower-stakes event (if there's a low-ish effort version of this) to see if it works.

My comment got quite long (sorry!) so here’s the TL;DR: 
We already have the content suggestion form as a way of openly soliciting EAG content, but in general we find the best content comes from speakers we proactively reach out to. Therefore, I'm sceptical that blind open applications would meaningfully improve content quality, but I think it could be worth trialling at a lower-stakes event.

Before I jump in, I thought it might be useful to explain the current process for selecting speakers at EAGs. (Posting this has been on my to-do list for a while now, so thank you for this push to finally write something up!)
 

How we select content:

In rough order of how much I use each mechanism to generate and select content for talks, workshops, etc. at EA Globals:

  • Calls with trusted advisors
    • These are people I've reached out to who I expect to have a good overview of specific cause areas because of their role (e.g. grantmaking within a certain field). They help me orientate to the current debates, problems, and questions within their domain, and sometimes suggest speakers or topics we should try to cover at EA Global.
  • Content advisory board
    • This is a group of 30 trusted advisers covering all major EA cause areas; consulting the board typically involves members adding speaker recommendations to a spreadsheet.
    • The board was used more frequently from 2019 to 2021. Although I have sent solicitations to it in 2023, most members are too busy to add names, and I find that having an in-depth conversation about the domain area helps me much more in evaluating which speakers to select.
  • Perusing the EA Forum, recent grants, relevant institutions, news articles, the internet, academia, etc. for interesting and useful content.
  • Content suggestion form
    • This is a form where anyone can suggest speakers (including themselves) for EAGs and EAGxs. We monitor it regularly, although due to time constraints we can only respond if we're interested in exploring a suggestion further.
    • I would strongly encourage you/anyone reading this post to submit suggestions through this form.

 

Other things I do:

  • I try not to place much weight on people reaching out to me, since I think someone's willingness to reach out and actively promote their work is uncorrelated with its importance or quality, and I don't want to end up platforming those who shout the loudest.
  • I also try to avoid consulting my CEA colleagues and other EA friends too frequently: although they have plenty of domain expertise across a variety of topics, I don't want to accidentally reinforce a norm of nepotism/"who you know over what you know". This is why I weigh the suggestions from trusted advisors most heavily, as they are people I have intentionally reached out to for their expertise. (Of course, they could be nepotistic/groupthinky in their suggestions too, and this is a good nudge that I should consider seeking out some trusted advisors less submerged in the EA ecosystem.) I do, however, listen to acquaintances' suggestions on types of content, possible topics, and new things to try, as this feels ~fine?
  • Once I have a topic/question in mind, I first try to solicit speakers from underrepresented groups who have expertise in that domain. We have a bar for the level of competence and expertise needed to go on stage at an EA Global, and we consider anyone above that bar, prioritising underrepresented groups first.
  • I track who has spoken at previous conferences, favouring new speakers over those who have spoken at previous EAGs. (I do also consider the quality of a speaker's previous talks here; e.g. if they give really good talks, we might be happy to have repeat speakers.) I also look at EAGx speakers to see if there are individuals who have given really great/informative talks whom I might want to book.
     

Response to the suggestion in the post:

1.) I want to push back a little on the idea that speakers are selected for being "'senior EAs' and 'EA leaders' with high social status in the community". We try to book speakers with expertise in areas that would be of interest/use to EAG attendees. That often correlates with being a senior/"respected" EA, but not always, and I am currently trying to book more speakers with less EA experience and more extensive experience outside the community. In these cases, their credentials and past experience are particularly important, especially as they have less EA context and so will find it harder to frame their pitch in "EA language".

2.) I worry a bit that this would lead to content where the pushiest wins. Lots of our best content comes from people we have proactively sought out, and they're probably not the kind of people who would submit applications to speak at EAG, either because a.) gaining EA credibility/status by speaking at EAG isn't that important to them, b.) they don't realise how useful their expertise might be, or c.) they're very busy doing good work and not submitting blind talk proposals. Whilst speaking at an EAG does have the benefit of some sort of visible endorsement from a central EA org (CEA, to be precise), I worry that making speaking opportunities into a competition would further push this dynamic. I would prefer the benefits of speaking at an EAG to be viewed primarily as transmitting useful/important information and questions to EAG attendees and those watching online afterwards.

3.) I agree with Jeff’s comment that whether a talk would be worth hosting depends on who would be giving it.

4.) I would be surprised if it increased the diversity of speakers in the traditional sense: in my experience, people from underrepresented groups seem less likely to nominate themselves to speak. I would also expect those with diversity of thought to be less likely to nominate themselves, as they are likely to be less well plugged into the EA network. I think both types of diversity are very important when programming content for EAGs, and I would welcome other suggestions for improving this.

Overall I think this is an interesting idea, and will suggest it to EAGx organisers, but am somewhat sceptical it would be useful for EAGs.

Blinded is a bit tricky: it's often the case that whether a talk would be worth attending depends on who would be giving it.

Yeah, on second thought, a lot of EAG talks derive their value from the speaker's personal experiences. I guess partial blinding might be feasible, where applicants can include details about their experiences if those details are going to come up during the talk.

There might be a slight misunderstanding here. I think the post is saying the selection process for talks should be blinded, not that you wouldn't know who is presenting when choosing what to attend during the event itself.

Whether a talk is worth hosting is an aggregation of whether it's worth attending, no?

I'm torn on this, because on the one hand I love the accessibility and de-biasing that come with this kind of blinding. On the other hand, I think the quality of the talks would go down, if due to nothing else than a sort of regression-to-the-mean scenario. I may be able to write a good proposal for a talk, but that doesn't mean I am an engaging and charismatic public speaker.

I think I'd be happier with blinding if it is for a journal submission or something in writing, but it is REALLY hard to judge how good a presentation/talk/workshop will be based on a piece of writing.

If I am very experienced in running workshops, then I'd want to refer to that in my proposal, but mentioning the previous workshops I've done would de-blind the process.

But I do think there are decent options the CEA events team could explore for adding more unconference aspects to EAGs and EAGxs, such as setting aside a certain number of spaces and time slots as "open", with a whiteboard where anyone can sign up for a time slot and a space to offer a workshop.

EDIT: I just read the other comments on this post and realized that I am basically just repeating what Nick Laing has already written. I guess I should have just upvoted that comment rather than writing out my own. Haha.

I think I'd be happier with blinding if it is for a journal submission or something in writing

Another important difference in this case is that the reviewer can evaluate the entire article as it would appear to the audience, while with a conference talk they only have the proposal.

Could the finalists be trialled at EAGx, local/university group events, or other medium-stakes venues to reduce the risks involved in deciding based on a written work product rather than the actual talk in question?

Yes, that could work. I think there are a variety of methods that could be used to assess/evaluate potential speakers.

Or the application process could initially only be used for a few slots rather than all EAG speaker slots, and CEA could see how it goes?

(I'm contracting for CEA's events team to work on EAGxNYC)

I like this idea - it'd be nice to hear from a wider range of people in the community, and it's a way to give more people a platform, which would be good for defusing fame in the community.

We're doing a non-blinded version of this for EAGxNYC - I think ~30% of applicants were people we wouldn't have thought to ask to present, which I think is good. BUT it is riskier and more costly as an event organiser to select them (you don't know if they're good speakers, you have to vet them and their work before deciding, etc.).

I love the idea of blind speaker selection in principle, but how do you then ensure you are selecting talks from people who are passably good at public speaking? You might get a really interesting outline submitted by someone who gives a really bad delivery or doesn't bother to rehearse.

When the organizers of EAGxBerkeley 2022 were selecting lightning talks, they had prospective speakers send in slides, and then if those were good enough, give the presentation over a video call to someone who was responsible for reviewing all of them and selecting the best presentations. The first part of the process can be anonymized, but the second part can't.

I like the idea, as long as (as is said below) the selection process is rigorous. For example, perhaps a handful of speaking/workshop slots could be left open to a competition of sorts, judged by the organisers well before the event. The winners could even be coached to make the talks even better (a bit like TED). I'm a little surprised something like this isn't happening already (maybe it is).

Because you need to see people present to judge a talk, I'm not sure blinding can easily work. Perhaps a couple of external speech/presentation experts could be brought in to judge - people completely unrelated to EA, so they wouldn't know anyone presenting. In any case, I would really hope EA types would be able to set aside a decent amount of their bias to judge this kind of thing.

I think it's important, though, to still have many big-name, high-status people presenting, even if their talks aren't necessarily as good. First, this gives the event more gravitas and helps the excitement and vibe of the event. Most of us, I'd imagine, want to see people we've read and heard before presenting on the big stage.

Again, why would anyone karma-downvote this post? Not clicking is fine, as is disagreeing (even strongly), but there's nothing bad faith about this - I don't get it...

Why just attendees?

My initial thought was to filter out applications from speakers who don’t bring an EA optimiser mindset, but on second thought, it might be good to have speakers from outside the EA bubble.

I advocate using an "Unconference" format such as Open Space, which would help with this. I mentioned it to CEA previously, and they use it in some other settings, as do other EA-minded conferences I have heard of.
