sky

I write about ideas and resources I like, along with ways of making concepts clearer to me and to others who come from a humanities background but are digging into the tools EA gives us to build stuff we care about.

I got interested in EA via GiveWell when it started and got a bit more involved in EA when 80K started. I subscribe to the "keep your identity small" idea and see EA as a really useful set of tools and important questions, though not the only set of tools and important questions someone might consider when doing good. I'm a member of EA DC.

I'm also a Community Liaison at CEA (www.centreforeffectivealtruism.org/team).

Outside of EA, I'm involved in the Deaf community and the interpreting field/higher ed. I'm generally interested in how people learn what they learn, how we effectively relate to ourselves and each other, and how to apply those ideas to mentoring and resolving conflict. Fun things = acro-yoga, CrossFit, 1:1 conversations about ideas, reading while lying in hammocks, scuba.

Comments

How have you become more (or less) engaged with EA in the last year?

Joey, could you say more about what you mean by "concepts...that connect to impact"? I'm interested in the examples you're thinking of, and whether you're looking for advances on those examples or for new/different concepts.

EricHerboso's Shortform

Quick meta comment: Thanks for explaining your downvote; I think that's helpful practice in general.

sky's Shortform

Quick thoughts on turning percentages back into people

Occasionally, I experiment with different ways to grok probabilities and statistics for myself, starting from the basics. This also involves paying attention to my emotions and imagining how different explanations would land for different students (I'm often a mentor/workshop presenter for college students). If your brain is like mine, or you like seeing how other people's brains work, this may be of interest.

One trick that has worked well for me is turning %s back into people.

Example: I think my Project X can solve a problem for more people than it's currently doing. I have a survey (N=1200) which says I'm currently solving a problem for 1% of the people impacted by Issue X. I think I can definitely make that number go up. Also, I really want that number to go up; 1% seems so paltry.

I might start with: "Ok, how likely do I think it is that 1% could go up to 5%, 10%, or 20%?"

But I think this is the wrong question to start with for me. I want to inform my intuitions about what is likely or probable, but this all feels super hypothetical. I know I'm going to want to say 20%, because I have a bunch of ideas and 20% is still low! The %s here feel too fuzzy to ground me in reality.

Alternative: Turn 1% of 1200 back into 12 people

This is 12 people who say they are positively impacted by Project X.

This helps me remember that no one is a statistic. (A post which may have inspired this idea to begin with). So, yay, 12 people!

But going from 1% to 5% still sounds unambitious and unsatisfying. I like ambitious, tenacious, hopeful goals when it comes to people getting the solutions they're looking for. That's the whole point of the project, after all. Sometimes, I can physically feel the stress over this tension. I want this number to be 100%! I want the problem solved-solved, not kinda-solved.

At this point, maybe I could remind myself or a student that "shoulding at the universe" is a recipe for frustration. I love that concept, and sometimes it works. But often, that's just another way of shoulding at myself. The fact remains that I don't want to be less ambitious about solving problems that I know are real problems for real people.

I try the percents-to-people technique again:

  • Turn 5% of 1200 back into 60 people. Oh. That's 48 additional people. Also notice: it's only 60 people if we gain 48 new people while losing none of the original 12.
  • Turn 10% back into 120 people. 108 additional people, while losing 0.
  • Turn 20% back into 240 people. 228 additional people, while losing 0.
  • So, an increase to 5% or to 20% is the difference between 48 and 228 additional people reached. I know about this program because I work on it, and I know how much goes into Project X right now to reach 12 people. I'm sure there are things we could do differently, but are they different enough to reach 228+ additional people? (A small arithmetic sketch follows this list.)
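
For anyone who wants this arithmetic spelled out, here's a minimal sketch in Python (the function name and layout are mine; the numbers are just the survey figures from the example above):

```python
def percent_to_people(percent, population):
    """Turn a survey percentage back into a count of people."""
    return round(population * percent / 100)

SURVEY_N = 1200                             # survey size from the example
current = percent_to_people(1, SURVEY_N)    # 1% -> 12 people today

for target in (5, 10, 20):
    reached = percent_to_people(target, SURVEY_N)
    additional = reached - current          # assumes we keep all 12 current people
    print(f"{target}% of {SURVEY_N} = {reached} people "
          f"({additional} additional, while losing 0)")
```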

Now this feels different. It's humbling. But it piques my curiosity again instead of my frustration: how would we attempt that? Could we?

  • What else do I need to know, to figure out if 60 or 120 or 240 (...or 1000, or 10000) is anywhere within the realm of possibilities for me?
  • Do I have a clear idea about what my bottlenecks or mistakes are in the status quo, such that I think there are 48 more people to reach (while still reaching the 12)? What processes would need to change, and how much?
  • This immediately brings up the response, "That depends on how long I have." (Woot, now I've just grokked why it's useful to time-bound examples for comparison's sake). We could call it 1 year, or 3, or 10, etc. I personally think 1-3 years is usually easier to conceptualize and operationalize.
  • Also, whatever I do next, it's obviously going to take notable effort. I know I can only do so much work in a day. (I probably hate this truth the most. This is definitely where I remind myself not to should at the universe). Now I wonder, is this definitely the program where I want to focus my effort for a while? Why? What if there are problems upstream of this one that I could put my effort toward instead? ...aha, now my understanding of why people care about cause prioritization just got deeper and more personally intuitive. This is a topic for another post.

To return to percentages, here's one more example. Percentages can also feel daunting instead of unambitious:

  • Going from 12 to 60 people is a 400% increase. (Right? I haven't miscalculated something basic? Yes, that's right; thank you, online calculators). 400%! Is that madness?
  • Turn '400% increase' back into 4 additional people reached, for every 1 person reached now.

That may still be daunting. But it may be easier to make estimates or compare my intuitions about different action plans this way.
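
As a quick check of that 400% figure, here's a sketch of the same arithmetic (again, the function name is mine):

```python
def percent_increase(old, new):
    """Percent change going from old to new, e.g. 12 -> 60 people."""
    return (new - old) / old * 100

print(percent_increase(12, 60))  # 400.0, i.e. 4 additional people per person reached now
```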

If you (or your students) are like me, this is a useful approach. It gets me into the headspace of imagining creative possibilities to solve problem X, while still grounding myself within some concrete parameters rather than losing myself to shoulding.

sky's Shortform

Webinar tomorrow: exploring solutions journalism [for EA writers]

If EA journalists and writers are planning to cover EA topics, I think a solutions journalism angle will usually be the most natural fit.

The Solutions Journalism Network "train[s] and connect[s] journalists to cover what’s missing in today’s news: how people are responding to problems."

The Solutions Journalism Network is having a webinar tomorrow: https://zoom.us/webinar/register/WN_Qcbxqd-uRvyvy1OnvVaIPg

Solutions journalism:

  • "Can be character-driven, but focuses in-depth on a response to a problem and how the response works in meaningful detail
  • Focuses on effectiveness, not good intentions, presenting available evidence of results
  • Discusses the limitations of the approach
  • Seeks to provide insight that others can use"

This is still a less common media angle. The quality of coverage will clearly still vary a lot depending on one's research, editorial input, etc., but this is a better fit than many other media angles one could take to cover the EA topics you're interested in.

More info on this type of journalism: https://www.solutionsjournalism.org/

EAGxVirtual Unconference (Saturday, June 20th 2020)

Definitely, I think for many people, the donations example works. And I like the firefighter example too, especially if someone has had first responder experience or has been in an emergency.

I'm curious what happens if one starts with a toy problem that arises from, or feels directly applicable to, a true conundrum in the listener's own daily life, to illustrate that prioritization between pressing problems is something we are always doing, because we are finite beings who often have pressing problems! I think when I started learning about EA via donation examples, I made the error of categorizing EA as only useful for special cases, such as when someone has 'extra' resources to donate. So GiveWell sounded like a useful source of 'the right answer' on a narrow problem like finding recommended charities, which gave me a limited view of what EA was for and didn't grab me much. I came to EA via GiveWell rather than through any of the philosophy, which probably would have helped me better understand the basis for what they were doing :).

When I was faced with real life trade-offs that I really did not want to make but knew that I must, and someone walked me through an EA analysis of it, EA suddenly seemed much more legible and useful to me.

Have you seen your students pick up on the prioritization ideas right away, or find it useful to use EA analysis on problems in their own life?

EAGxVirtual Unconference (Saturday, June 20th 2020)

I'm excited about this! I actually came here to see if someone had already covered this or if I should ☺️. I'd love to see a teacher walk through this.

Here's an idea I've been curious to try out when talking or teaching about EA, but haven't yet. I'd be curious whether you've tried it or want to (very happy to see someone else take the idea off my hands). I think we often skim over a key idea too fast -- that we each have finite resources, and so does humanity. That's what makes prioritization, and willingness to name the trade-offs we're going to make, such an important tool. I know I personally nodded along at the idea of finite resources at first, but it's easy to carry along the S1 sense that there will be more X somewhere that could solve the hard trade-offs we don't want to make. I wonder if starting the conversation there would work better for many people than, e.g., starting with cost-effectiveness. Common-sense examples, like having limited hours in the day or a finite family budget and needing to choose between things that are really important to you but don't all fit, make sense to many people, and starting with this familiar building block could be a better foundation for understanding or attempting their own EA analysis.

Call notes with Johns Hopkins CHS

I also found this helpful -- appreciate it.

Racial Demographics at Longtermist Organizations

Thanks for adding that resource, Anon.

Racial Demographics at Longtermist Organizations

Thanks for doing this analysis! My project plans for 2020 (at CEA) include more efforts to analyze and address the impacts of diversity efforts in EA.

I'd be interested in being in touch with the author if they're open to it, and with others who have ideas, questions, relevant analysis, plans, concerns, etc.

I'm hopeful that EAs, like the author and commenters here, can thoughtfully identify or develop effective diversity efforts. I think we can take wise actions that avoid common pitfalls, so that EA is strong and flexible enough as a field to be a good "home base" for highly altruistic, highly analytical people from many backgrounds. I'm looking forward to continued collaboration with y'all, if you'd like to be in touch: sky@centreea.org.

What posts do you want someone to write?
Answer by sky · Mar 29, 2020

Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.

I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [ABC] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/When do you want to prioritize that? How would you explore that?"

I think personal stories here would be illustrative.
