I'm a Community Liaison at CEA.

I got interested in EA via GiveWell when it started and later 80K. I am a member of EA DC and joined CEA in 2019. I subscribe to the "keep your identity small" idea and see EA as a really useful set of tools and important questions, though certainly not the only set of tools and important questions someone might consider when doing good.

Outside of EA, I'm involved in the Deaf community and the interpreting field/higher ed, and enjoy discussions on personal/professional development and evidence-based practice, doing acro-yoga and CrossFit, mentoring, and reading while lying in hammocks.

sky's Comments

EAGxVirtual Unconference (Saturday, June 20th 2020)

Definitely, I think for many people, the donations example works. And I like the firefighter example too, especially if someone has had first responder experience or has been in an emergency.

I'm curious what happens if one starts with a toy problem that arises from or feels directly applicable to a true conundrum in the listener's own daily life, to illustrate that prioritization between pressing problems is something we are always doing, because we are finite beings who often have pressing problems! I think when I started learning about EA via donation examples, I made the error of categorizing EA as only useful for special cases, such as when someone has 'extra' resources to donate. So, GiveWell sounded like a useful source of 'the right answer' to a narrow problem like finding recommended charities, which gave me a limited view of what EA was for and didn't grab me much. I came to EA via GiveWell rather than reading any of the philosophy, which probably would have helped me better understand the basis for what they were doing :).

When I was faced with real life trade-offs that I really did not want to make but knew that I must, and someone walked me through an EA analysis of it, EA suddenly seemed much more legible and useful to me.

Have you seen your students pick up on the prioritization ideas right away, or find it useful to use EA analysis on problems in their own life?

EAGxVirtual Unconference (Saturday, June 20th 2020)

I'm excited about this! I actually came here to see if someone had already covered this or if I should ☺️. I'd love to see a teacher walk through this.

Here's an idea I've been curious to try out when talking or teaching about EA, but haven't yet. I'd be curious if you've tried it or want to (very happy to see someone else take the idea off my hands). I think we often skim over a key idea too fast -- that we each have finite resources and so does humanity. That's what makes prioritization and willingness to name the trade-offs we're going to make such an important tool. I know I personally nodded along at the idea of finite resources at first, but it's easy to carry along with the S1 sense that there will be more X somewhere that could solve hard trade-offs we don't want to make. I wonder if starting the conversation there would work better for many people than e.g. starting with cost-effectiveness. Common-sense examples, like having limited hours in the day or a finite family budget and needing to choose between things that are really important to you but don't all fit, make sense to many people, and starting with this familiar building block could be a better foundation for understanding or attempting their own EA analysis.

Call notes with Johns Hopkins CHS

I also found this helpful -- appreciate it

Racial Demographics at Longtermist Organizations

Thanks for adding that resource, Anon.

Racial Demographics at Longtermist Organizations

Thanks for doing this analysis! My project plans for 2020 (at CEA) include more efforts to analyze and address the impacts of diversity efforts in EA.

I'd be interested in being in touch with the author if they're open to it, and with others who have ideas, questions, relevant analysis, plans, concerns, etc.

I'm hopeful that EAs, like the author and commenters here, can thoughtfully identify or develop effective diversity efforts. I think we can take wise actions that avoid common pitfalls, so that EA is strong and flexible enough as a field to be a good "home base" for highly altruistic, highly analytical people from many backgrounds. I'm looking forward to continued collaboration with y'all, if you'd like to be in touch.

What posts do you want someone to write?
Answer by sky · Mar 29, 2020

Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.

I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [ABC ] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/When do you want to prioritize that? How would you explore that?"

I think personal stories here would be illustrative.

sky's Shortform

Should reducing partisanship be a higher priority cause area (for me)?

I think political polarization in the US produces a whole heap of really bad societal/policy outcomes and makes otherwise good policy outcomes ~impossible. It has always seemed relatively important to me, because when things go wrong in the US, they often have global consequences. I haven't put that many of my actual resources here though because it's a draining cause to work on and didn't feel that tractable. I also suspected myself of motivated reasoning: I get deep joy from inter-group cooperation and am very distressed by inter-group conflict.

Then I read things like the thread below and feel like not paying more attention to this is foolish, like I've gone too far in the other direction and underweighted the importance of this barrier to global coordination. I imagine others have written about similar questions and I would be interested in more thoughts.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

Hi Aidan, I'm really late to this thread, but found it interesting. If you don't mind revisiting it, could you clarify this:

"I think part of what might be driving the difference of opinion here is that the type of EAs that need a 45 minute chat are not the type of EAs that 80k meets."

I imagine this is true for a lot of EA org staff. It sounded from Howie's comment like it's probably less true for coaches at 80K, though, compared to other EA org staff.

Howie's comment:

"We try to make sure that we talk to the people we think we’re best placed to help with coaching in other ways too, for example some of our advice and many of the connections we can make are particularly valuable for people who don’t already have lots of current links to other effective altruists."

I find the network-constrained hypothesis interesting and am interested in exploring it, so I think clarifying our models here is useful.

EA Survey 2019 Series: Community Demographics & Characteristics

I find myself navigating to this page a lot recently, thanks for publishing!

Quick UX request: could you update this post with links to subsequent posts in the series? I'm often hunting around trying to find various pieces of data, and would find that super helpful for navigation, rather than searching by title.

The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR)

I think it's worth noting that the acronym for the Athena Center for EA Study is ACES! :)
