
This post is a follow-up to an original post about promoting EA at Microsoft.

TL;DR

Parth’s previous post laid out the rationale for promoting EA at large tech companies: “they employ hundreds of thousands of people, many of whom are financially well-off and want to help the world, but who are not aware of the effective altruism movement. Because there are already many EA folks working at these companies, and because many of these companies already have the culture and infrastructure to encourage/promote giving (ex. events, donation matching), this presents a huge opportunity to promote and build an EA mindset at these companies.”

As part of this, Microsoft EA partnered this year with One for the World to see what the results would be when a clear call to action was included in Microsoft's US Employee Giving Campaign (often called 'Give Month') presentations and outreach. The results were excellent: 13 people (22%) started a regular donation to GiveWell’s charities, at a recurring value of over $38k/year, and 25 attendees (43%) contributed at least once.

Previous years (2018 & 2019)

In previous years, Parth and the team had concentrated on awareness-raising: creating an internal microsite, putting up posters about effective giving, sending out promotional emails and setting up a mailing list. In 2019, they put on a talk with representatives from EA Community Projects and GiveWell, which had ~80 people in attendance.

Year 3: 2020

I reached out to the Microsoft team in early 2020 to ask if One for the World could be involved in this year’s events. We believe that One for the World’s very affordable donation amount (1% of income) has potential in corporate spaces, at least in part because it might be an easier idea to promote than the full Giving What We Can pledge. 

I have had issues before (and I know other EA organisations have had the same) with workplaces being very reluctant to let their employees be asked about giving - so requesting time at work to promote a 10% pledge seems like a tough sell. However, the 1% pledge seems so intuitively affordable that a lot of people are happy to bring it up with their colleagues. It helps, of course, that Microsoft has a whole month which explicitly licenses people to talk to their colleagues about causes, charities and giving.

My email came at a good time, as the team wanted to build on the momentum of the previous two years. We agreed that a good way to do this would be to include a specific call to action whose success we could measure (i.e. to donate) and, as part of this, to have a frictionless donation option that could be sent immediately to attendees during presentations (this was done via Benevity, Microsoft’s corporate giving site). We therefore agreed that my colleagues and I would be available for six talks, spread throughout October and covering east and west coast time zones, and that we would deliver a ‘Giving Lunch’ presentation on each occasion.

One for the World adapted the Giving Lunch idea from Ben Clifford at Tyve (arguably, we stole it with Ben’s permission, so hat-tip to Ben). It’s a simple concept: an ‘effective giving 101’ talk, delivered over a video call, with the incentive of a $10 donation to a charity of each attendee’s choice (funded by One for the World).

We have been experimenting with these presentations since last summer and they seem to be peculiarly effective at getting people to take action. Something about the presentation seems to get people to the point where they are willing to give some money after only ~45 minutes of engagement. We also work to keep them very interactive, with a ‘pick the most effective intervention’ quiz, in-call polls and lots of audience questions/discussions throughout, which we think is appropriate for a corporate setting (rather than anything confrontational or, conversely, just boring).

Usually, we do a pre- and post-survey with some attitudinal questions, but in this case we only delivered the post-survey, to make the sign-up process as seamless as possible. The post-survey does ask if people want to take the One for the World pledge, and we have done some limited email follow-up with those people, but the primary mechanism for recruiting donors was posting a link to Benevity’s One for the World page at the time of the talk.

Outcomes

  • We had ~58 people attend across six events, with audience sizes varying between 2 and 23. In each case, the talk was briefly introduced by someone from Microsoft, who then supported us by posting links in the video call chat. We made sure to give everybody a link to the One for the World page on Microsoft’s Benevity portal, but otherwise there wasn’t much effort at selling - it was just a talk about the principles of giving.
  • In each case, audience participation was lively, with excellent questions, comments and debate. One advantage of Giving Lunches is that they can be delivered to audiences of 1-30. Although the optimal number is 12-15 people, it wasn’t awkward to run the session for just two people in one instance.
  • By 30th November, 25 people (43%) who came to the events had made at least one donation to GiveWell’s charities. Of these, 13 (22%) set up a recurring donation, and one of the one-off donors gave $2k (doubled to $4k). The full figures are below, and in each case they include Microsoft’s 100% match (see the arithmetic check after this list of outcomes):
| Donor type | Number of donors | Month 1 value | Value in year 1 |
|---|---|---|---|
| Recurring | 13 | $3,227.26 | $38,727.12 |
| One time | 9 | $4,788.75 | $4,788.75 |
| Unspecified | 3 | $336.00 | $336.00 |
| Total | 25 | $8,352.01 | $43,851.87 |
  • We also saw some (weak) indicators of attitudinal change after the events, with 24 people saying they intended to start giving regularly to charity:
| Statement | Number of respondents |
|---|---|
| I already give regularly to charity | 33 |
| I do not intend to give regularly to charity | 1 |
| I am going to start giving regularly to charity | 24 |
| Total | 58 |

  • There is some confusing data in the responses, which speaks further to the general weakness of this evidence. Three people responded that they had ‘already taken the pledge’ but have not set up any donation, and 11 people responded that they intended to take the pledge but also have not set up a donation. We will continue to chase these donors but expect low conversion (they have already had a few reminders).
  • Free-text feedback to the question “Has the OFTW Giving Lunch changed your mind about anything (else) related to giving?” included several references to considering impact in giving, using charity evaluators more and starting to give more often.
  • Although we weren’t able to collect pre- and post-event responses to compare, people selected the following factors for the question “If you do intend to give to charity, regularly or otherwise, what factors will you consider? Please tick all that apply”:
| Factor | Number of respondents | % of respondents |
|---|---|---|
| Whether the charity operates in my local area/home country | 30 | 51.7% |
| What percentage of the charity's funds are spent on overheads | 31 | 53.4% |
| What data the charity has on its impact | 45 | 77.6% |
| How cost effective the charity is | 41 | 70.7% |
| The evidence base for the charity's method | 30 | 51.7% |
| Whether the charity uses a method that resonates with me, e.g. because of my work or personal experience | 28 | 48.3% |
| Whether the charity has affected or helped someone I know | 20 | 34.5% |

  • 47/58 (81%) of attendees asked for their $10 reward to be donated to a GiveWell charity.
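For readers who want to check the arithmetic, here is a minimal sketch (in Python) of how the totals in the donor table fit together. It assumes, as the table implies, that the year-1 value of the recurring donations is simply the month-1 value repeated for twelve months; all figures include Microsoft’s 100% match.

```python
# Arithmetic check of the donor table above (all values include
# Microsoft's 100% match).
recurring_month_1 = 3_227.26   # 13 recurring donors
one_time_total = 4_788.75      # 9 one-time donors
unspecified_total = 336.00     # 3 donors of unspecified type

# Assumes the month-1 recurring amount repeats for twelve months.
recurring_year_1 = recurring_month_1 * 12  # $38,727.12

month_1_total = recurring_month_1 + one_time_total + unspecified_total
year_1_total = recurring_year_1 + one_time_total + unspecified_total

print(f"Month 1 total: ${month_1_total:,.2f}")  # $8,352.01
print(f"Year 1 total:  ${year_1_total:,.2f}")   # $43,851.87
```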

Reflections

  • These results exceeded our expectations by a significant distance. The workshops are not a hard sell: there is very little promotion of One for the World, and I didn’t spend time saying “you should all give money to these charities” or “you should take the pledge”. Yet a really good number of people took action and a decent number set up recurring donations. Based on historic rates of churn, which are probably too pessimistic for payroll donations, we would expect to realise $116k from these donations over the next four years (an illustrative projection is sketched after this list). This is excellent ROI on a fairly small commitment of time and money (between 20:1 and 200:1, depending on how you measure and value people’s time). It also seems to reinforce Parth’s theory that there is a community of people predisposed to like the idea of effective giving - we just need opportunities to explain it to them.
  • Attendance was good and we haven’t had any negative feedback. This reinforces the idea that we can talk about giving without risking professional blowback. However, as mentioned above, Microsoft has a whole month that explicitly licenses people to talk to their colleagues about giving, which probably makes things significantly easier. Since Microsoft, we have done similar presentations at Bridgewater (which raised $21k and counting in one-off donations) and a separate event just for LinkedIn staff as part of a company training day, which didn’t see any significant take-up. We are also in discussions with people at Google, Facebook and Amazon about running similar events in the future. We will have more confidence in the format and the results once our sample size increases, and in particular we’re interested in seeing any variation between companies/contexts.
  • The average donation from the initial tranche of donors is fairly low. Amounts (before matching) vary from $10/month to $250/month. Comparing these to Microsoft’s typical salaries, we are sceptical that more than four attendees are now giving a full 1% of their income before matching. It’s difficult to say what effect the matching had on this - perhaps some people aimed to contribute an amount equal to 1% of their income once matching was factored in - but even then the amounts seem on the low side.
  • Parth and the team worked really hard to drive attendance. This seems to be a common theme across corporate talks - that colleagues need to ask colleagues to attend. It potentially caps the scale and repeatability of this outreach, unless we can ask new converts to invite people in future years (to avoid getting the same audience again). The team also did an excellent job of bringing in people who are not already EAs, which is critical.
  • The evidence of potential attitudinal change should be treated with caution - the survey was designed by me (I’m not a researcher) and we didn’t get pre-event results as a baseline. For whatever they are worth, though, the outcomes still look reasonably encouraging.
  • It is hard to say whether the outreach would be as effective if it were promoting either a) a larger pledge/commitment or b) different, less ‘mainstream’ cause areas. We would be very interested in hearing from other organisations in this space if you have tried this type of outreach.
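To illustrate how a multi-year projection like the $116k figure above can be constructed, here is a minimal sketch. The 20% annual churn rate is an assumption for illustration only - it is not necessarily the exact historic rate behind our estimate - so treat the output as a ballpark rather than a precise reproduction.

```python
# Illustrative four-year projection for the Microsoft donations.
# ASSUMPTION: a flat 20% annual churn rate on recurring donations;
# this is a placeholder, not the exact historic rate behind the
# ~$116k estimate above.
recurring_year_1 = 38_727.12         # annual recurring value, incl. match
one_off_total = 4_788.75 + 336.00    # one-time + unspecified donations
annual_churn = 0.20

projected_total = one_off_total
retention = 1.0
for year in range(4):
    projected_total += recurring_year_1 * retention
    retention *= 1 - annual_churn

print(f"Projected four-year total: ${projected_total:,.2f}")
# With a 20% churn assumption this comes out near $119k, in the same
# ballpark as the ~$116k figure quoted above.
```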

Next steps

  • We would like to test the concept further at other companies. If you work at a company where this might work, please reach out.
Comments

Nice work - I'm still struggling with how to involve the IT business. I know that people there are willing to donate, but factory farming isn't a sexy topic for them. Maybe some workshops on how they can help in different ways, like donating free tools, would be a better approach here. Anyway, thanks, that was inspiring.

Hi, great report - thanks a lot! I think this is a very inspiring case. Congrats on the results! I always mention your initiative during presentations to young professionals.

I will reach out to you to exchange experiences.

Thanks Jan - looking forward to hearing from you!

Haha, thanks for the hat tip! Delighted with this outcome! Well done!
