

I am a Canadian-Chilean designer/illustrator, lecturer, and computer technician who has lived and worked in three different countries on three different continents (in two point five languages!).

I'm dedicated to supporting and promoting the betterment of humanity. Traditionally I've done so through volunteerism and political action, lending my creative and communication problem-solving skills to advocacy, and now as a member of the Effective Altruism community!

(I also know a concerning number of weird facts about pop culture, music, and bizarre history, making me an ideal pub quiz teammate!)

How others can help me

Looking to pivot toward learning more about (and contributing more directly to) the field of AI alignment and programming, with a focus on visual design and technology.

How I can help others

10+ years of industry-leading empathy and customer-experience training, an MDes in Visual Communication Design, and project management experience (i.e., ask me about how to effectively communicate with people, break down knowledge, and repair relationships, as well as how to make things work better in print/digital, systems, and operational areas!)


Thank you so much Lucretia for sharing your experience; as a woman who has worked in Tech for a decade or so, this has put into words a lot of what I peripherally experienced but didn’t quite have a way of expressing to folks outside of the space. (It’s definitely made it hard to explain sometimes to friends who work in other industries, or even well-meaning male friends and colleagues, how these intangible dynamics directly affect how much you feel you need to “put up” with just to be able to do what you love).

I particularly admire the way you’ve analyzed your experience and created this as a resource for helping others navigate the space, recognize patterns of problematic behaviour, and try to bring the community together to generate ideas on how to address it, rather than discouraging women from entering AI/EA as a whole. I’ve bookmarked this post, and am deeply curious to see where your proposed ideas go!

Thank you for sharing! I currently work for Apple and organize charity-based art events, and have been trying to find the best way to maximize my matching and impact (both for volunteered time and financial donations). As such, it’s both interesting and helpful to see how they stack up/get suggestions for how to make the most of this!

I wonder how many of these organizations use Benevity as their matching platform, and whether it's possible to get intermediary groups like GiveWell (which aren't charities themselves but work to fund them) listed, so that donors don't have to granularly pick efforts to fund/match themselves?

Hello! New to the forum, though I’ve been lurking the blogs/podcasts/articles around the EA community for a while!

Wondering if there are any other folks out here coming from the design/technology industry space, like I am? Would love to hear from anyone about their journey and finding their niche within the community! :)

This is incredibly interesting and enlightening; thank you!

Particularly love to see the way that these different organizations are looking at each others’ work and ideas and fit-testing them for their own approach and priorities. I’m especially interested in the question of how to measure good better by taking the effectiveness of the implementation into account, since this is where I can foresee a lot of great in-theory approaches diminishing in effectiveness when hit with real-world obstacles like convoluted systems/miscommunication or shifting context, etc.

As such, I even wonder how granular this approach could get; could additional work looking at systems, obstacles, or contexts objectively reveal ways that some interventions with potentially high impact traditionally considered too high-cost might suddenly become more accessible/effective?

Thank you so much for the link! Lots of great stuff here.

Trying to help mitigate economic barriers for attending events and conferences is excellent, as are the acknowledgements of the risk of English-speaking dominance within the community’s leadership; maintaining a genuine curiosity and collaborative mentality to ask communities and underrepresented groups how best to support their participation is also great!

I wonder how EA might avoid the trap I've witnessed a lot in tech and industry, where the intentions are there and organizations state they're committed to these principles, but the actual day-to-day reality doesn't match the well-intentioned guidelines (no matter how many "We're really dedicated to DEI!" Zoom meetings are held).

Would it help to apply the same objective criteria for measuring success in these categories to organizations and bodies within the community as are applied to charities and initiatives? Or to set transparent, time-bound goals for things like translating seminal resources into other languages, diversifying key leadership positions, etc.? (For example, CEA states that their current employee make-up is 46% female and 18% self-identified minorities, though it's not clear how this breaks down within leadership positions.) Is it as simple as discouraging the overuse of technical jargon and academic language in communications so as to widen the scope of understanding and broaden the audience? (Or something completely different/none of these things?)

Genuine question (rather than critique):

What is the EA Community doing to increase the diversity of its make-up? Are there any resources out there folks can link me to that are actively working on bringing in a plurality of perspectives/backgrounds/etc.?

Considering the scope of existential challenges we’re facing as a species, wouldn’t it stand to reason that looking for ideas for tackling them from a wider array of sources (especially areas outside of STEM, underrepresented populations, or folks outside of the English-speaking world) might offer solutions we wouldn’t otherwise come across?

Thank you for asking this! Some fascinating replies!

A related question:

Considering other existential risks like engineered pandemics: is there an ethical case for continuing to accelerate AI development, despite the possibly-pressing risk of unaligned AGI, in order to address or mitigate those other risks (e.g., developing better vaccines, increasing the rate of progress in climate technology research, etc.)?