I didn't see any mention of Loretta Mayer's work here. She is testing what seems to be a viable product in several major cities (here's some NYT coverage). Do you see this work as having a different purpose/target market?
(I only skimmed the post — sorry if I missed an obvious reference!)
I didn't read the full post, but the gist of it aligns with what I did as an organizer (started Yale EA):
Low-effort comment!
There are many stories I enjoy despite plot holes because the setting/characters/prose delight me so much that it's fun to imagine what hidden factors could justify the plot holes — I can trust an author so much that I assume they'll explain things later (or that there's a hidden explanation they created for me to discover myself).
Recent examples include Sousou no Frieren (lots of symbolism and emotion to obscure thin worldbuilding, I feel so many feelings that I barely think about the plot) and Moonfall (written like a fable from the pe...
I can speak only for myself, but I treat linkposts like any other post unless the poster provides additional context.
I've linkposted many things I thought were flawed in some respect, but still worth sharing and contemplating; if someone disagreed, I'd want them to downvote me for my poor judgment.
Thanks!
ETFs do sound like a big win. I suppose someone could look at them as "finance solving a problem that finance created" (if the "problem" is e.g. expensive mutual funds). But even the mutual funds may be better than the "state of nature" (people buying individual stocks based on personal preference?). And expensive funds being outpaced by cheaper, better products sounds like finance working the way any competitive market should.
This isn't about your giving per se, but have your views on the moral valence of financial trading changed in any notable ways since you spoke about this on the 80K podcast?
(I have no reason to think your views have changed, but was reading a socialist/anti-finance critique of EA yesterday and thought of your podcast.)
The episode page lacks a transcript, but does include this summary: "There are arguments both that quant trading is socially useful, and that it is socially harmful. Having investigated these, Alex thinks that it is highly likely to be benefi...
My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:
I used to work as a part-time advisor and ops person for a family foundation (no actual staff) that gave away ~$500k annually; they've worked with several other people since then.
Much of my advisory time was spent researching and evaluating fairly small grants (workable for someone in the $20k-100k range), since the foundation's "experimental/non-GiveWell" budget was a small fraction of the total. I think I could have done this work for a group of 10-20 clients of that size at a time if I'd been a full-time advisor.
Again, I’m aware that concrete, impactful projects and people still exist within EA. But in the public sphere accessible to me, their influence and visibility are increasingly diminishing, while indirect high-impact approaches via highly speculative expected value calculations become more prominent and dominant.
This has probably been what many people experienced over the last few years, especially as the rest of the world also started getting into AI.
But I think it's possible to counteract by curating one's own "public sphere" instead.
For example, you coul...
I'd have benefited from that kind of nudge myself! I was aware of 80K for years but never even considered coaching.
From a consequentialist perspective, I think you're better off sticking to digital — it takes a lot of time to sell things online, and you could be using that time for some combination of work and fun that would leave everyone better off (unless you place a very high value on physical manga).
Low-confidence idea: It might help to find some small ritual/mantra that you can use when you donate (or invest, etc.) the money you would have spent on physical manga — something along the lines of "I'm making the right decision" or "this is better for everyone".
Open Philanthropy has published a summary of the conflict-of-interest policy we use. (Adding it as another example despite the age of this thread, since I expect people may still reference the thread in the future to find examples of COI policies.)
I think of EA as a broad movement, similar to environmentalism — much smaller, of course, which leads to some natural centralization in terms of e.g. the number of big conferences, but still relatively spread-out and heterogeneous in terms of what people think about and work on.
Anything that spans GiveWell, MIRI, and Mercy for Animals already seems broad to me, and that's not accounting for hundreds of university/city meetups around the world (some of which have funding, some of which don't, and which I'm sure host people with a very wide range of vie...
Hi Vasco,
Thanks for asking these questions.
I work on Open Phil's communications team. Regarding how Open Phil thinks about allocating between human and animal interventions, this comment from Emily (the one you linked in your own comment) is the best summary of our current thinking.
When I started Yale's student EA group in 2014, we tried a bit of this (albeit with pharmacies, not grocery stores). IIRC, we got as far as a meeting with CVS's head of corporate social responsibility (CSR), plus a few other conversations.
The companies we spoke to were choosing large, well-known charities. This was partly because of their branding (easier to pick up positive associations from charities people have actually heard of), partly because big charities tend to have highly appealing missions (e.g. St. Jude's, which has used its "free care for chil...
I love that we're still seeing new "writing about my job" (WAMJ?) posts 2.5 years after the initial post, especially for jobs like this one that are on the obscure side (and thus unlikely to be covered by 80,000 Hours, Probably Good, or other career-focused resources).
Thanks for taking the time to share this!
I'm an OP staffer who helped to put the post together. Thanks for the nitpicks!
I suppose I'm asking what's the benefit of this format over individual recommendations?
I see the main benefit as convenience. If I'd asked OP staff to write individual Forum posts, I'd have gotten less interest than I did with "send me a few sentences and you can be part of a larger post". Writing an entire post is a bigger hurdle, and I think some people would feel weird writing a post just a few sentences long (even if the alternative was "no post").
...Why should I put any more w
The Glassdoor numbers are outdated. We share salary information in our job postings; you can see examples here ($84k/year plus a $12k 401(k) contribution for an Operations Assistant) and here (a variety of roles, almost all of which start at $100k or more per year — search "compensation:" to see details).
Depends on the hobby and how good you are. Some things are relatively easy to monetize (you can teach lessons or do live performances), but even in those cases, you'll be competing with people who do your "hobby" as their job, and you're probably better off doing more of whatever your job is (working extra hours, freelancing...).
The thing I do is play games in tournaments, which is less common than streaming/gigging/etc., so this analysis may be of limited value, but: I've made something like $75,000 playing Magic: the Gathering and Storybook Brawl over th...
Who is your audience for the course? Are you a teacher somewhere?
If you have a guaranteed audience, I think the best starting point would be to look up existing materials of this kind, like the AI courses offered by BlueDot Impact or the curiosity/scout mindset training in the CFAR Handbook. It can be tempting to create all your materials from scratch, but the results rarely live up to what you imagined. (This was my experience trying to write a new version of the EA Handbook from scratch.)
If you don't have a guaranteed audience, you'll want to consider wh...
I'm not sure how many stars you should leave, but I think there are ways to write a review that successfully conveys both of the following:
A very brief sketch of a review for a mediocre vegan restaurant:
"I was happy to find a vegan restaurant in AREA, and I thought it was cool they offered DISH. So I ordered that, as well as OTHER DISHES. Unfortunately, the food wasn't great; I thought OKAY DISH was fine, but BAD DISHES had problems: DESCRIPTION OF PROBLEMS. The service was fine, ETC., ETC."
There are...
I really liked this post and agree with much of it.
I ran the Forum for a while. This involved handling interpersonal conflict. At worst, the conflicts on my plate were things like "an argument between two people" or "someone having a mild breakdown in text form". These are relatively minor issues on the CH scale, but they were among the most stressful elements of my job; I'd get lost for hours trying to write the perfect moderator response, or arguing with someone in DMs about how I'd resolved a situation.
I'd find dealing with CH situations much more stres...
In retrospect it is crazy that I updated so much on only four rejections!
Does giving up after two rejections make me twice as crazy?
(I love the "mistake" vs. "fluke" distinction, and wish I'd thought to use it in my own essay.)
This is an excellent post!
I really like seeing profiles of jobs that are closer to being "entry-level" for classic EA-flavored career tracks, to give people a better sense of what they'll be doing early on (it's common for other things, like the 80K podcast or EAG talks, to be focused on work from more senior people).
Upvoted for pointing out that replying to people is a nice thing to do.
But I disagree with "norm" — I prefer the framing "this is an especially nice thing to do", whereas "norm" feels more like "you've done something a bit wrong by failing to do this". (How people interpret the term will vary, of course; it's possible you meant the former.)
I also try not to use "EA" as a noun. Alternatives I've used in different places:
Speaking as an advisor to the mod team who ran this past some active mods:
This isn't something we'd issue a warning for in this context (describing a third party's actions in a way that doesn't seem aggressive or dismissive). In the context of a direct attack (e.g. "why are you bitching to us about something that doesn't matter?"), it could make a comment seem more aggressive and might (weakly) push us toward more substantial action.
*****
Taking my advisor hat off, I generally prefer for the Forum to be less coarse, and I do see "bitching" as gendered...
I’ve started feeling super guilty and sad about how much I, the EA community, have wasted on supporting my participation in various community building and research endeavors - I’m not really any more capable or competent at doing the things I’ve done than a local American graduate would have been.
I obviously know much less about you than you do. But speaking to my own experiences, the second part of this rings false:
By "their actual prio", which of these do you think they meant (if any)?
I've sometimes had three different areas in mind for these three categories, and have struggled to talk about my own priorities as a result.
A combination of one and three, but it's hard to say exactly where the boundaries are. E.g. I think they thought it was the best cause area for themselves (and maybe people in their country), but not for everyone globally, or something like that.
I think they may not have really thought about two in-depth, because of the feeling that they "should" care about one and prioritize it, and appeared somewhat guilty or hesitant to share their actual views because they thought they would be judged. They mentioned having spoken to a bunch of others and feeling like that was what everyone else was saying.
It's possible they did think two though (it was a few years ago, so I'm not sure).
I reached out to some of the people working to make this day happen to say a few things: one, thank you for being part of making the world a safer place; two, thank you for following through after it lost all attention from the public; three, thank you for inspiring me to work in the same way.
This is outstanding!
For anyone reading who hasn't tried it, I highly recommend sending nice notes to strangers who do good things; it's a fun way to procrastinate, and it doesn't take long to write a compliment that will make someone happy.
Regarding point (2), you might find this post a good resource, though I'm not sure how much of the advice will be helpful in your circumstances.
A couple of other ideas that come to mind:
In my experience, the term "radical empathy" isn't used very often when people explain these ideas to the public -- I more often see it used as shorthand within the community, as a quick way of referring to concepts that people are already familiar with.
In public communication, I see this kind of thing more often just called "empathy", or referred to in simple terms like "caring for everyone equally", "helping people no matter where they live", etc.
I wrote about getting rejected from jobs at GiveWell and Open Phil in this post.
Other rejections that shaped my career:
I ran a contractor hiring round at CEA, and I tried to both share useful feedback and find work for some of the rejected candidates (at least one of whom wound up doing a bunch of other work for CEA and other orgs as a result).
Given all the work I'd already put into sourcing and interviewing people interested in working for CEA, providing this additional value felt relatively "cheap", and I'd strongly recommend it for other people running hiring rounds in EA and similar spaces (that is, spaces where one person's success is also good for everyone else...
As the person who led the development of that policy (for whatever that's worth), I think the Forum team should be willing to make an exception in this case and allow looser restrictions around political discussion, at least as a test. As Nick noted, the current era isn't so far from qualifying under the kind of exception already mentioned in that post.
(The "Destroy Human Civilization Party" may not exist, but if the world's leading aid funder and AI powerhouse is led by a group whose goals include drastically curtailing global aid and accelerating AI progress with explicit disregard for safety, that's getting into natural EA territory -- even without taking democratic backsliding into account.)