Gemma Paterson

Tax Technology - Associate Product Manager @ EY
644 karma · Joined Oct 2022 · Working (0-5 years) · Whitechapel, London, UK



Organiser of the EY Effective Altruism workplace group and EA London Quarterly Review coworking sessions

In my day job, I'm an accountant and currently work as an associate product manager for a tax tech platform.


Topic Contributions

Thank you! That's very kind! 

I feel similarly about finding EA later in my life - I heard about it when I was a few years into my career rather than in university. I'm glad I did because if I'd heard about it in uni, I could imagine it becoming my whole deal. I've got a lot of value from working a normie corporate job first and I'm glad a lot of my friends really don't care about EA at all. 

One of my other half-written drafts is about the benefits of doing graduate training at an employer that churns out dozens of graduates a year rather than at a small EA organisation (where the quality of management, mentorship, training and support is more variable). I think the 80k advice on career capital for new grads is great, and getting people to think about their long-term output (thinking 20-30 years ahead rather than just 5) is excellent, but I think their ideas for initial first jobs are limited (and so obviously written by cerebral Oxford grads who would have access to top-of-the-range opportunities). 

IMO they underrate graduates spending their first few years post-grad joining professions with existing networks and professional ethics requirements - law/accountancy/engineering/medicine/teaching etc. There are downsides (time requirement, skills you might not use later), but there are benefits to having a more diverse non-academia EA talent pipeline, and I want to spread effective giving into those spaces!! Having the pipeline mostly filled with early startup employees, policy people and management consultants is high risk - none of those roles is accountable to external ethical or professional standards. Plus, having worked in international tax, I now have opinions on potentially high-impact tax policy work that isn't obvious to people without that background - I like being able to bring a different perspective.


Good for you on bad criticisms! Keep at it 💪

Hmmm I'm not being as prescriptive as that. Maybe there is a better solution to this specific problem - maybe requiring someone with higher karma to confirm the suggestion? (original person gets the credit)

See also the Payroll Giving (UK) or GAYE page, which is the top Google result for "Effective Altruism Payroll Giving". It made sense for me to update it since I'm an accountant and have experience trying to get this done at my workplace. 

Did I need to make a post about something unrelated to do that? 

Should we be making it so difficult for users with an EA forum account to make updates to the forum wikis? 

I imagine the platform vision for the EA Forum is to be the "Wikipedia for do-gooders" - a useful resource for people working out the best ways to do good. For example, when you google "Effective Altruism AI Safety" in incognito mode, the first result is the forum topic on AI safety.

I was chatting about this to @Rusheb, who has spent the last year upskilling to transition into AI Safety from software development. He had some great ideas for links (e.g. new 80k guides, sites with links for newbies or for people making the transition from software engineering).

Ideally, someone with this experience and opinions on what would be useful on an AI Safety landing page should be able to suggest it on the wiki page (as you can on Wikipedia, with the caveat that you can be overruled). However, he doesn't have the forum karma to do that, and the tooltip explaining this was unclear about how to get the karma.

I have the forum karma to do it, but I don't think I should get the credit - I didn't have the AI safety knowledge; he did. In this scenario, the forum has lost out on some free improvements to its wiki, plus an engaged user who would feel "bought in". Is there a way to "lend him" my karma? 

I got my karma from posting about EA Taskmaster, which shouldn't make me an authority on AI Safety.

Agree and thanks for writing this up Nick!

I like 80k's recent shift back to pushing its career guide (what initially brought me into EA - the writing is so good!) and focusing on skill sets.

Really appreciate those diagrams - thanks for making them! I agree and think there are serious risks from EA being taken over as a field by AI safety.  

The core ideas behind EA are too young and too unknown by most of the world for them to be strangled by AI safety - even if it is the most pressing problem. 

Pulling out a quote from MacAskill's comment (since a lot of people won't click)

I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).

In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”. 

One thing I’ll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from “the vibe”. 


What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it’s more homegrown. 

If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question “how can we do the most good?” and a portfolio of possible answers to that question, rather than one increasingly common answer — “AI”.

At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it’s no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree. 

The OECD are currently hiring for a few potentially high-impact roles in the tax policy space:

The Centre for Tax Policy and Administration (CTPA)

  • Executive Assistant to the Director and Office Manager (closes 6th October)
  • Senior Programme Officer (closes 28th September)
  • Head of Division - Tax Administration and VAT (closes 5th October)
  • Head of Division - Tax Policy and Statistics (closes 5th October)
  • Head of Division - Cross-Border and International Tax (closes 5th October)
  • Team Leader - Tax Inspectors Without Borders (closes 28th September) 

I know less about the impact of these other areas but these look good:

Trade and Agriculture Directorate (TAD)

  • Head of Section, Codes and Schemes - Trade and Agriculture Directorate (closes 25th September)
  • Programme Co-ordinator (closes 25th September)

International Energy Agency (IEA)

  • Clean Energy Technology Analysts (closes 24th September)
  • Modeller and Analyst – Clean Shipping & Aviation (closes 24th September)
  • Analyst & Modeller – Clean Energy Technology Trade (closes 24th September)
  • Data Analyst - Temporary (closes 28th September)

Financial Action Task Force 

I agree, and this is why I'm in favour of a Big Tent approach to EA. This risk comes from a lack of understanding about the diversity of thought within EA and the fact that it isn't claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.

Broadly, EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on which cause areas need more resources at the margin, but "EA" can't - it isn't a person, it's a network. 

I really liked the post How CEA's communications team is thinking about EA communications at the moment from @Shakeel Hashim, and I hope that whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.

Could you say more about the strong evidence, beyond the statements by Bill Gates?

See my other comment
