Emily Grundy

Researcher @ MIT FutureTech | Ready Research
371 karma · Joined Feb 2019

Comments (19)

Thanks so much for this comment, Robert - I appreciate the engagement.

It’s interesting to hear what mistakes you see, and what you’ve experienced as working better.

It sounds like you’re really considering who your audience is – something that I think is crucial. For example, you don’t assume that people (especially those not involved in EA) will be won over by more philosophical arguments. These arguments can work for some, but definitely not everyone. I also agree that having a positive reputation (e.g., being seen as credible and honest) can attract people. Plus, it sounds like you’re cultivating some supportive and cooperative relationships with others, which is fantastic.

I think I have a slightly different take on the role of behaviour in your theory of change – I still see it as being quite central. To me, the impact we have always comes back to behaviour. You may not be using the more philosophical arguments to encourage donations, but it sounds like you’re still trying to get people to support the movement (which can involve some level of behaviour) by setting a positive example - a different technique. I also think that getting the charities themselves to be more impactful (Element #2 of your framework) involves some important behaviour change elements. E.g., the charities need to be aware that they could increase their impact, be motivated to do it, have the resources to do it, and so on. Definitely open to hearing pushback on any of that!

It sounds like you've thought through your approach a lot, Robert - thanks again for sharing.

Thanks for the feedback, Constance - that's great to know!

Thanks for sharing this! I really appreciated hearing your personal experience and perspective on this. 

I agree that it's important to consider the realistic counterfactual (maybe that term is already implying 'realistic', but just wanted to emphasise it). There's definitely a world in which I could have spent six months doing something that was even better for my career on the whole. But, whether I actually knew what that alternative was or would have actually done it is a different story.

Your message that almost everything is suboptimal is also really insightful. I agree, and think that trying to pursue the 'optimal' path can lead to some anxiety (e.g., "What if I'm not doing the best thing I could be doing?") and sometimes away from action (e.g., "I'm going to say no to this opportunity, because I can imagine something being better / more impactful"). I obviously still think it's worth considering impact and weighing different options against each other, but while always keeping in mind what's realistic (and that what you choose might not be optimal in the ideal world).

Thanks again for the reflections, William.

Thanks so much for your comment. I hadn't thought about that perspective in the context of this post, and will spend some more time thinking on this. It definitely wasn't my intention to imply ill intent - I actually think the opposite, that almost all this advice is provided with good, positive intentions.

I appreciate the prompt to reflect on this!

I agree, Nathan - there's definitely a lot of content there for future interviews. I'm sure the interviewers will get tired of me saying, 'Well, when I hiked...'

Yeah, I recommend checking out that curriculum. I also found it really useful to discuss the content with others (which could be through signing up to the actual AGISF course, finding a reading buddy, etc.).

I agree, Jenny - I think educational materials, especially those that collate and then walk you through a range of concepts (like AGISF), are really useful.

Thanks for sharing, Yonatan - it's always interesting to hear which barriers are the most salient for different people! I imagine those ones are pretty pervasive, especially with regard to AI safety (I can definitely empathise).

Did you end up writing that intro to EA and would you be able to share it? I'm currently looking for a similar list of examples to use in a talk I'm giving on EA, and it would be useful to read what you ended up with.
