This post outlines the methods used by EA Wellington, New Zealand, for setting up a stall at our university's clubs week in early 2021. We hope it helps others easily set up a good stall at a university clubs event.
The stall (which cost very little to set up) was designed so people could vote on which cause, from a selection, they thought was most important. We had several clear tubes (made from cut-up cylindrical soda bottles), each assigned to a cause area, and people voted for one of them by placing a large piece of pasta in the corresponding tube (one piece per person).
The causes they could vote for were:
- Animal welfare
- Cancer research
- Global health and poverty
- Climate change
- AI risk
- Biorisk
This covers many of EA's core cause areas.
This setup seems like a good way to increase engagement, and it gets across the idea that our group/movement cares about a range of issues rather than just one. It also encourages curiosity about how one might reason about various causes. Voting is quick and very low effort, so people could easily engage with the activity, which often led to good further discussion after the initial engagement. We also expected people would see the activity from afar and come over to vote, which did happen. And if people seemed a bit interested, we could (nicely and appropriately) call them over to vote.
When someone seemed interested we would ask things like: “Which one of these areas do you think we should put our resources towards?” or “Would you like to vote?” or “What do you think?”. They then voted using a piece of pasta. During or after voting we would start a conversation with them, saying things like:
- “Have you heard of Effective Altruism?”
- “We’re part of a global social movement focused on ‘how can we do the most good?’”
- “We’re a group that tries to think about which problems are the most important to focus on, and then how to allocate our resources towards them.”
- Or we would comment on or ask about their choice of vote.
After this we would answer any questions, which often involved describing what our group does, how often we meet, and our upcoming events, and giving them some of our merch (listed below). Our group holds weekly discussions about EA topics, usually around a reading proposed by a club member. We also run career guidance events and social events, and occasionally host guest speakers or other event types. At the stall we would sometimes also talk about how impactful effective charities can be compared to others, which could be engaging for people who are philanthropically focused (one good take-away, even for those who might not be more broadly interested, is the idea that some charities are 100x or more as impactful as others).
When it seemed like the appropriate time, we’d ask whether they wanted to sign up to keep up to date with what we were doing. The signup form was a Google Form on a laptop, asking for name, email, student ID number, and whether they would be interested in hearing more about an introductory EA Fellowship we will be running this semester. We also encouraged people to like our Facebook page, which is kept up to date with our events.
The merch we handed out:
- Effective Altruism Wellington bookmarks
- Small EA branded stickers
- Poster for our large upcoming intro event
- Flyer showing upcoming events
Results and Musings
Many people came over just to vote and then walked away; we think they may otherwise not have engaged at all. In practice, this setup seemed pretty good at getting people’s attention: a lot of people seemed to see the activity from afar and then came over, especially when prompted to vote. People seemed to enjoy the activity, and they seemed to genuinely think and care about where to vote.
Having the people vote for causes often led to useful and targeted conversations. We were able to explicitly talk about cause prioritisation, and how if we have limited resources we should try to use them where they are needed most.
We were initially a bit worried about putting ‘AI risk’ as an option, as we thought it might put people off. But we are glad we did - it seemed to be a draw for many people who knew about AI risk (mainly computer science students). It didn’t seem to be off putting to people who didn’t know or care about AI risk.
A few people were openly negative about the stall, but these were only people who already knew a bit about EA and disliked it (which they told us), and who came over to ask about our motivations. Their usual line of argument was “who are you to tell people how to live their lives?”. Somewhat surprisingly, given the initially confrontational nature of these encounters, the conversations felt fairly productive and warmer by the end. Everyone else seemed to feel pretty positive about the stall.
Votes and signups:
- Day 1: Around 130 votes, and 37 sign ups
- Day 2: Around 130 votes, and 35 sign ups
- Day 3: Around 50 votes, and 8 sign ups
What did people vote for? Well... who cares, really (as our focus was engagement and signups)? But:
‘Global health and poverty’ and ‘Climate change’ were very popular options, each getting around 5 times as many votes as any of the others. ‘Animal welfare’ was usually just ahead of ‘AI risk’. ‘Biorisk’ and ‘Cancer research’ had very few votes. We did not tally exact vote numbers.
Climate change was the most popular option overall. The main campus generally had the most votes for ‘Climate change’, but the campus which catered only for law and commerce students (which also had far fewer students around) had far more votes for ‘Global health and poverty’ than for ‘Climate change’. A large portion (maybe >50%) of people who voted for climate change made comments as we talked to them about how it was linked to everything else, and especially linked to global health and poverty.
We believe this setup was extremely useful for engagement, and for getting across the most important EA ideas to those who might be interested in joining the club. We are very likely to use it again in future years: it is positive, cheap, engaging, and fun, leads to quite a few sign-ups (we think), and clearly gets the key messages across. Those who did not join were still engaged and prompted to think about these ideas, and may still have come away with some useful musings (e.g. about charity effectiveness, limited resource allocation, or the causes presented), which we think still matters. We are still very interested in getting a larger portion of people who voted to sign up in the future. We hope you found this post useful for your own stall setup.
Thanks for writing this up and sharing it! People might also be interested in this post on the same topic, and this guide.
Hey guys, awesome work from what I've heard. What was your response to "who are you to tell people how to live their lives?" and to pre-existing negative ideas about EA? It seems quite positive that you ended those conversations warmly, and it would be interesting to know more about what you said.
(feel free to just link if you were purely following something already written up, though)
I guess my approach was something like the following:
1. Treat it as an opportunity to make a connection with the person (not as though you're trying to debate or "convert" them).
2. Be curious about their experience. This includes asking questions to get more to the heart of what they feel is wrong with EA, letting them voice it, and repeating back to them some of what you heard so they feel understood.
3. Give my own personal account, rather than "trying to represent EA". Hopefully this humanizes me, and EA by association, in their eyes.
4. Look out for misconceptions about EA, or negative gut feelings, that might underlie their concerns, and focus somewhat on reframing those.
One interaction went something like the following, with person (P) and myself (M):
P: Why are you guys here?
M: We're here to try to figure out how to improve the world!
P: Oh I've heard of effective altruism before. But like, what gives you the right to tell people how to live their lives?
M: (sensing from tone, making eye contact, and sincerely curious) Oh are you concerned that an empirical approach to these questions might not be the best way to help people?
P: Well, like, EA just tells people what to do right?
M: I guess I think of it more like we only have limited resources, and IF WE WANT to help people we should think carefully about the OPPORTUNITY and use evidence to try to help them.
P: Hmm (body language eases).
M: Yeah... Like, I think it's important to make sure when we are trying to improve the world that what we're doing is actually going to help.
M: Yeah... would you like a lollipop?
P: Uh nah I think I have to go. Thanks.
M: No worries. Nice to meet you.
P: Yeah you too.
The above isn't revelatory. But it seemed to me that the person's conception shifted from EA as a kind of enemy entity, more towards seeing EAs as friendly people who it's possible to make personal connections with, and who want to make the world better using evidence.
In another example, someone came up, voted for AI, said something like "Oh I'm not really on board with this whole approach", and went to leave. We asked them something like "Oh really, what don't you like?" It turned out they knew HEAPS about EA, and our curiosity about their points of disagreement led to a really fun discussion, to the point where they said "well it's nice to see you guys on campus" near the end, and where we wanted to keep talking to them.
Cool and smart setup! I'm a bit surprised that cancer research and biorisk got very few votes (I thought cancer research would be a bit more popular among non-EAs).
Regarding biorisk, I think labelling it "Biosecurity and Pandemic Preparedness" would lead more people to consider voting for it, which seems better.