Justis

Comments

How do you, personally, experience "EA motivation"?

When I was young I felt like "Gosh! When I'm older and have a job, I really should use my power as a globally rich person to help those who are much less well off, because that's obviously morally obligatory and this Peter Singer guy makes sense."

When I read Slate Star Codex's "Everything is Commensurable" I thought "Oh right, I suppose now's the time for that, I have more money than I need, and 10% seems about right."

It felt satisfying to be doing something definitive, to have an ironclad excuse for not freaking out about whatever the issue of the day is. "I'm doing my part, anyway."

Then I learned there was a community, was dazzled by how impressive they all were, was overjoyed that they wanted to welcome me, and felt a strong emotional pull to be part of it. It was more excitement about the people than the projects. They felt very much like "my people."

Now I don't feel much of anything about it (maybe a touch of pride or annoyance about losing so much money), but I still give my 10% to AMF monthly, and I don't plan to stop, so I guess the earlier surges of emotions did their job.

EA Global Lightning Talks (San Francisco 2018)

I also found this very interesting, though I craved something in a longer format. I could tell he had heftier models for situations where things cancel out less neatly, and I want to see them and check how robust they are! Looking forward to seeing what he's working on at the Global Priorities Institute.

Towards Better EA Career Advice

Test prep tutoring and nowhere-near-the-top programming are both very good for making a living without spending much energy. The Scott Alexander post you and lexande linked has a good description of the relevant considerations for test prep tutoring.

In my random non-hub city, programming jobs for the state pay only about $50k/yr to start, but they're easy to get (the trial task for one was basically just "make an HTML website with maybe a button that does something") and the expectations tend to be pretty low. I worked one of these as my main source of income until EA volunteering became EA freelancing, and the freelancing became just barely enough to quit the day job and see what happened. I think this route is underappreciated, and the movement's central orgs seem to have a lot more capacity to pay for specific work than to hire full-time, higher-prestige employees.

Main downside of a low-stress programming day job is that being in an extremely unambitious environment for 40 hours a week can be psychologically uncomfortable.

Near-Term Effective Altruism Discord

+1. I'm in a very similar position - I make donations to near-term orgs, and am hungry for discussion of that kind. But because I sometimes do work for explicitly long-term and x-risk orgs, it's hard to be certain whether I qualify under the current wording.

Which piece got you more involved in EA?

The piece that got me to take the plunge and start giving 10% was Scott Alexander's Nobody Is Perfect, Everything Is Commensurable.

It singlehandedly convinced me to Try Giving, and I went to my first EA Global and took the pledge a couple of years later. Before that, I'd pretty much not heard of EA as a movement at all.

CEA on community building, representativeness, and the EA Summit

I really like the Open Philanthropy Project's way of thinking about this problem:

https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy

The short version (in my understanding):

  1. Split assumptions about the world/target metrics into distinct "buckets".
  2. Do allocation as a two-step process: within each bucket, allocate according to that bucket's metric; across buckets, allocate separately using other sorts of heuristics.
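
To make the two-step picture concrete, here's a minimal sketch in Python. The bucket names, metrics, and weights are invented for illustration and are not Open Philanthropy's actual numbers; the only thing taken from the summary above is the structure: score options within a bucket on that bucket's own metric, then split the overall budget across buckets with a separate set of weights.

```python
# Toy illustration of two-step allocation. All numbers are made up.

# Step 1 inputs: within each "bucket" (worldview), options are scored on
# that bucket's own metric. Units are deliberately not comparable across buckets.
buckets = {
    "global_health": {"AMF": 9.0, "deworming": 6.0},          # rough cost-effectiveness scores
    "longtermism":   {"ai_safety": 7.0, "biosecurity": 5.0},  # rough risk-reduction scores
}

# Step 2 input: the share of the total budget each bucket gets, decided by
# separate heuristics (worldview diversification, track record, etc.).
bucket_weights = {"global_health": 0.6, "longtermism": 0.4}


def allocate(total_budget: float) -> dict[str, float]:
    """Allocate a budget in two steps: across buckets by weight, then
    within each bucket in proportion to that bucket's own metric."""
    allocation = {}
    for bucket, options in buckets.items():
        bucket_budget = total_budget * bucket_weights[bucket]  # inter-bucket step
        metric_total = sum(options.values())
        for option, score in options.items():                  # intra-bucket step
            allocation[option] = bucket_budget * score / metric_total
    return allocation


if __name__ == "__main__":
    for option, dollars in allocate(1_000_000).items():
        print(f"{option}: ${dollars:,.0f}")
```

The appeal of the structure, as I understand it, is that no step ever requires directly comparing one bucket's metric against another's; that comparison is pushed entirely into the bucket weights.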

(If you like watching videos rather than reading blog posts, Holden also discussed this approach in his fireside chat at EAG 2018: San Francisco.)

CEA on community building, representativeness, and the EA Summit

Disclosure: I copyedited a draft of this post, and do contract work for CEA more generally.

I don't think that longtermism is a consensus view in the movement.

The 2017 EA Survey results had more people saying poverty was the top priority than AI and non-AI far-future work combined. Similarly, AMF and GiveWell got by far the most donations in 2016, according to that same survey. While I agree that someone can be a longtermist and still think practical considerations favor near-term work for now, I don't find that a very compelling explanation for these survey results.

As a first pass heuristic, I think EA leadership would guess correctly about community-held views more often if they held the belief "the modal EA-identifying person cares most about solving suffering that is happening in the world right now."