Will's list from his recent post has good candidates too:
Yeah, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren't significantly prioritizing impact in their career and donation decisions. I agree, though, that for the subset of EAs who do, "heroic responsibility"/going overboard can be fraught.
Some things that come to mind include how often EAs seem to work long hours/on weekends; how willing EAs are to do higher-impact work when salaries are lower, when it's less intellectually stimulating, more stressful, etc.; how many E...
Strong agree. There are many more tractable, effective opportunities than people realize. Unfortunately, many of these can't be discussed publicly. I'm hosting an event at EAG NYC on US democracy preservation Saturday at 4pm, and there will be a social near the venue right after at 5. I'd love for conference attendees to join! Details will be on Swapcard.
While I really like the HPMOR quote, I don't really resonate with heroic responsibility, or with the "Everything is my fault" framing. Responsibility is a helpful social coordination tool, but it doesn't feel very "real" to me. I try to take the most helpful/impactful actions, even if they don't seem like "my responsibility" (while being cooperative, not acting unilaterally, and staying within reasonable constraints).
I'm sympathetic to the idea that taking on heroic responsibility causes harm in certain cases, but I don't see strong enough evidence that it causes ...
Thank you for the kind words, Jonas!
Your comment reminded me of another passage from one of my favorite Rob talks, Selflessness and a Life of Love:
..."Another thing about the abolitionist movement is that, if you look at the history of it, it actually took sixty or seventy or eighty years to actually make an effect. And some of the people who started it didn’t live to see the fruits of it. So there’s something about this giving myself to benefit others. I will never see them, I will never meet them, I will never get anything from them, whether that’s people or
Taking uni organizing really seriously was upstream of MATS, EA Courses/Virtual Programs, and BlueDot (shoutout to Dewi) getting started, among other things. IMO this work is extremely valuable and heavily under-prioritized in the community compared to research. Group organizing can be quite helpful for training communication skills, entrepreneurship, agency, grit, improved intuitions about theories of change, management, networking/providing value to other people, general organization/ability to get things done, and many other flexible skills that from pe...
I wrote up some arguments for tractability in my forum post about the tractability of electoral politics here. I also agree with this take about neglectedness being an often unhelpful heuristic for figuring out what's most impactful to work on. People I know who have worked on electoral politics have repeatedly found surprising opportunities for impact.
How long does the happiness continue when you're not meditating? A range of times would be helpful.
Initially the afterglow would last 30 minutes to a few hours. Over time it's gotten closer to a default state unless various stressors (usually work-related) build up and I don't spend enough time processing them. I've been trading off higher mindfulness to get more work done and am not sure if I'm making the right trade-offs, but I expect it'll become clearer over time as I get more data on how my productivity varies with my mindfulness level.
Fair and understandable criticisms. Some quick responses:
1) I've attempted to share resources and pointers that I hope can get people similar benefits for free, without signing up for a retreat (like Rob Burbea's retreat videos, Nadia Asparouhova's write-up with meditation instructions, and other content). Since I found most of these after my Jhourney retreat, I can't speak from experience about their effectiveness. I'd be excited for more people to experiment and share what does and doesn't work for them, and for people with more experience to share w...
Thanks! You can fill out this form to get notified about future retreats. Their in-person retreats may well be worth doing too if you're able to, and they generate similar results according to their survey. They're more expensive and require taking more time off work, but given their track record I wouldn't be surprised if they were worth the money and time. I have a friend who has done both an in-person and an online retreat with them and preferred the in-person one.
That said, I have a hard time imagining my experience being as positive doing the retreat in p...
Conditional on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don't think it's likely.
Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions which would significantly limit the potential of new intelligent life even if it did re-emerge. ...
Thanks for the feedback, and I'm sorry for causing that unintended (but foreseeable) reaction. I edited the wording of the original take to address your feedback. My intention in writing this was to encourage others to figure things out independently, share their thinking, and listen to their guts, especially when they disagree with the aforementioned sources of deference about how to do the most good.
I think EAs have done a surprisingly good job at identifying crucial insights, and acting accordingly. EAs also seem unusually willing to explicitly acknow...
Thanks for your comment, and I understand your frustration. I'm still figuring out how to communicate the specifics of why I feel strongly that incorrectly applying the neglectedness heuristic, as a shortcut to avoid investigating whether investment in an area is warranted, has led to tons of lost potential impact. And yes, US politics is, in my opinion, a central example. But I also think there are tons of others I'm not aware of, which brings me to the broader (meta) point I wanted to emphasize in the above take.
I wanted to focus on the case for mor...
I really appreciated this post, and think there is a ton of room for more impact with more frequent and rigorous cross-cause prioritization work. Your post prompted me to finally write up a related quick take I've been meaning to share for a while (which I'll reproduce below), so thank you!
***
I've been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. That said, I think we're doing extremely well in ...
I've been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. (That said, I think we're doing extremely well in relative terms, and greatly appreciate the community's willingness to engage in such difficult prioritization.)
Reasons why I think we're not realizing our potential:
I wish this post - and others like it - had more specific details when it comes to these kinds of criticisms, and a more specific statement of what they are really taking issue with, because otherwise it sort of comes across as "I wish EA paid more attention to my object-level concerns", which approximately everyone believes.
If the post is just meant to represent your opinions, that's perfectly fine, but I don't really think it changed my mind on its own merits. I also just don't like withholding private evidence; I know there are often good reasons for...
I agree with the substance but not the valence of this post.
I think it's true that EAs, myself included, have made many mistakes, some of which I've discussed with you :)
But I think that this post is an example of "counting down" when we should also remember the frame of "counting up."
That is: EAs are doing badly in the areas you mentioned because humans are very bad at the areas you mentioned. I don't know of any group that has actually-correct incentives, reliably drives after truth, gets big, complicated, messy questions like cross-cause prioritisatio...
I've been thinking about coup risks more lately, so I'd actually be pretty keen to collaborate or give feedback on any early stuff. There isn't much work on this (for example, none at RAND as far as I can tell).
I think EAs have frequently suffered from a lack of expertise, which causes pain in areas like politics. Almost every EA and AI safety person was way off on the magnitude of change a Trump win would create - gutting USAID easily dwarfs all of EA global health by orders of magnitude. Basically no one took this seriously as a possibility, or at...
A few quick thoughts:
Many arguments about the election’s tractability don’t hinge on the impact of donations.
To add a bit of context in terms of on-the-ground community building: I've been working on EA and AI safety community building at MIT and Harvard for most of the last two years (including now), though I have been more focused on AI safety field-building. I've also helped with advising for university EA groups and with workshops/retreats for uni group organizers (both EA and AI safety), and I organized beginning-of-year residencies at a few universities to support EA outreach in 2021 and 2022, along with other miscellaneous EA CB projects (e.g. working with the CEA events team last year).
I do agree though that my experience is pretty different from that of regional/city/national group organizers.
I would guess the ratio is pretty skewed in the safety direction (since uni AIS CB is generally not counterfactually getting people interested in AI when they previously weren't; if anything, EA might have more of that effect), so maybe something in the 1:10-1:50 range (a 1:20ish point estimate for the ratio of median capabilities research to median safety research contributed by AIS CB)?
I don't really trust my numbers though. This ratio is also more favorable now than I would have estimated a few months/years ago, when contribution to AGI hype from AIS CB would have seemed much more counterfactual (but also AIS CB seems less counterfactual now that AI x-risk is getting a lot of mainstream coverage).
Thank you for all your encouragement over the past few years for students and newer community members to post on the forum, and for actually making it easier and less scary to do so. I definitely would not have felt anywhere near as comfortable getting started without your encouragement and post-editing offers. I've replaced Facebook bingeing with EA Forum bingeing, since I both enjoyed it so much and found it really valuable for my learning. You will be missed, and incredibly hard to replace. Thank you for all your hard work!
Hi Michael, thanks for writing this up! These are important topics, and I'd love to see more discussion of them. Just want to clarify two potential misconceptions. First, I don't think it's no longer hard to get a direct-work job, although I do feel reasonably confident that it isn't as hard to get funding to do direct work as it was a few years ago (either through employment or grants, though I would probably still stand by this statement if we were only considering employment). Secondly, on this part:
...Kuhan mentioned that it's not easy to get an EA job
Regarding the concern that broad distribution of books is low-impact due to low completion rates/readership/engagement: do you have a sense of how impactful reading groups are when coupled with broad distribution? They can have a high initial fixed cost and then pretty low marginal costs for repeated run-throughs (e.g. it takes a long time to make discussion sheets the first time you run the reading group, but afterwards you have them ready; you just create breakout rooms, and if you don't participate in them, this requires minimal effort/time).
80,000 Hours, as a (very thorough) resource for individuals trying to do good/maximize their impact with their careers, feels like a big accomplishment. I found EA when I googled "Highest impact careers/how to have the biggest impact with your career" and didn't find anything anywhere near as compelling as 80,000 Hours. I think their counterfactual impact is probably quite massive, given how insufficient impact-oriented career advice is outside of 80K (and the broader communities/research/thinking/work that have led to 80K being what it is).
Most of the...
Great points, thanks for commenting, Ben! Responding to each of the points:
In my experience, running local group events was like an o-ring process. If you're running a talk, you need to get the marketing right, the operations right, and the follow up right. If you miss any of these, you lose most of the value. This means that having an organiser who is really careful about each stage can dramatically increase the impact of the group. So, I'd highlight 'really caring' as one of the key traits to have.
I think I mostly agree with this (and strongly...
That's very sweet, thank you, Jonas! I have been in some conversations about EA essay/idea competitions similar to what you've mentioned, but haven't thought much about it. We're also considering ideas like hackathons as experimental outreach mechanisms to try out. How do you think something like what you're proposing would compare to the more standard intro EA programming (like intro talks and fellowships)?
Pageviews would also go up a lot if (as suggested in the post) articles from the website were included in intro fellowships/other educational programs. I'll discuss adding these articles (and others on the site) to our intro syllabi.
One potential concern with adding articles from utilitarianism.net is that, in my experience running many fellowships, many new-to-EA people have negative views towards utilitarianism (e.g. they find it off-putting, think people use it to justify selfish/horrible/misguided actions, or think it's too demanding (e.g. implications ...
To clarify/set realistic expectations: much of the growth happened in our second year (the 2020-2021 academic year), e.g. all the things mentioned in the intro + summary bullets; the first year mostly involved getting 5-10 highly dedicated core organizers and getting SERI started. I should also caveat all the things I had going in my favor (including being in the Bay, being on a CBG, and getting lucky with very dedicated and competent co-organizers).
It can be hard to sacrifice career planning/advancement for group organizing purposes, but as I mentioned in my other comment, running your group well has lots of career benefits (both within the EA community and from the skills you develop by becoming a kick-ass organizer :))!
That makes sense, thank you for expanding on the timeline! I also really appreciate your acknowledgment of other factors. My original comment (intentionally) discounted the many other factors that contribute to a group's success, simply because I am confident that my group has a better-than-average mix of factors and so should not be in its current state.
I 100% agree that it's not a binary trade-off and in fact, if someone is potentially interested in community-building as a career, this could be one of the highest-impact things to do. Even if not, I also agree that exclusively maximizing for EA career prospects is not necessarily the best community norm to set!
Thank you for your kind words, Miranda! EA group organizing can be quite difficult when others don't see it as potentially highly impactful and the group isn't doing so well. I hope this post can help change how useful EAs (and students in particular) think community building is, and help us do a better job at it so it feels more intuitively impactful and exciting!
The support system for organizers who want to put a lot of effort into their group is getting better and better. I'm always happy to have calls (or texts/emails) with organizers, to discuss how t...
Do you mean two or more people are sharing their screen at the same time? How does that work? We share our screens for group meetings, but I've never heard of screen-sharing during co-working sessions. Also, wouldn't people feel like they are being watched (or that they might show something private) if they are screen-sharing while working?
Yeah, we allow multiple participants to share their screens simultaneously on Zoom, which does run the risk of people seeing something private, but at least for me it really helps me not succumb to distractions, so the risk is worth it. You can't...
Hey Akash! Thanks for your comment, and apologies for my late response!
Let me respond to your individual thoughts:
...1- I'd love to hear more about your decision to go with a career-focused post rather than a donation-focused post. I see how someone changing their career could have an immense impact (especially if they are able to find something impactful that they're also very good at). However, I'm skeptical about the proportion of people who would seriously consider changing their career paths as a result of this. Maybe my forecast is off, though-- I
Here are some of my thoughts on EA residencies/moving people into the full-time EA recruiting pipeline that I shared with Buck:
Bottlenecks
The primary bottlenecks preventing people who are already interested in EA from doing high-impact EA work full-time, from what I've seen (based on two years running Stanford EA and a few conversations with non-student EAs and community group leaders), in no particular order:
1. Full-time EA work, and the transition it requires, feels too costly (in terms of time, money, moving, social costs, preserving optionality,...
The argument sure is a string :P