All of kuhanj's Comments + Replies

The argument sure is a string :P

Will's list from his recent post has good candidates too: 

  • AI character[5]
  • AI welfare / digital minds
  • the economic and political rights of AIs
  • AI-driven persuasion and epistemic disruption
  • AI for better reasoning, decision-making and coordination
  • the risk of (AI-enabled) human coups
  • democracy preservation
  • gradual disempowerment
  • biorisk
  • space governance
  • s-risks
  • macrostrategy
  • meta

Yea, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren't significantly prioritizing impact in their career and donation decisions. I agree, though, that for the subset of EAs who do, "heroic responsibility"/going overboard can be fraught. 

Some things that come to mind include how often EAs seem to work long hours/on weekends; how willing EAs are to do higher impact work when salaries are lower, when it's less intellectually stimulating, more stressful, etc; how many E... (read more)

Strong agree. There are many more tractable, effective opportunities than people realize. Unfortunately, many of these can't be discussed publicly. I'm hosting an event at EAG NYC on US democracy preservation Saturday at 4pm, and there will be a social near the venue right after at 5. I'd love for conference attendees to join! Details will be on Swapcard. 

While I really like the HPMOR quote, I don't really resonate with heroic responsibility or the "Everything is my fault" framing. Responsibility is a helpful social coordination tool, but it doesn't feel very "real" to me. I try to take the most helpful/impactful actions, even if they don't seem like "my responsibility" (while being cooperative and not unilateral, and with reasonable constraints). 

I'm sympathetic to taking on heroic responsibility causing harm in certain cases, but I don't see strong enough evidence that it causes ... (read more)

I'm totally on board with "if the broader world thought more like EAs that would be good", which seems like the thrust of your comment. My claim was limited to the directional advice I would give EAs.

Thank you for the kind words Jonas!

Your comment reminded me of another passage from one of my favorite Rob talks, Selflessness and a Life of Love:

"Another thing about the abolitionist movement is that, if you look at the history of it, it actually took sixty or seventy or eighty years to actually make an effect. And some of the people who started it didn’t live to see the fruits of it. So there’s something about this giving myself to benefit others. I will never see them, I will never meet them, I will never get anything from them, whether that’s people or

... (read more)

Thanks Will! Our first chat back at Stanford in 2019 about how valuable EA community building and university group organizing are played an important role in my decision to prioritize that work over the following several years, and I'm very grateful I did! Thanks for the fantastic advice. :)

Taking uni organizing really seriously was upstream of MATS, EA Courses/Virtual Programs, and BlueDot (shoutout to Dewi) getting started, among other things. IMO this work is extremely valuable and heavily under-prioritized in the community compared to research. Group organizing can be quite helpful for training communication skills, entrepreneurship, agency, grit, improved intuitions about theories of change, management, networking/providing value to other people, general organization/ability to get things done, and many other flexible skills that from pe... (read more)

I wrote up some arguments for tractability in my forum post about electoral politics here. I also agree with this take about neglectedness being an often unhelpful heuristic for figuring out what's most impactful to work on. People I know who have worked on electoral politics have repeatedly found surprising opportunities for impact. 

Not uncommon, and I'm happy to chat about efforts to change this. (This offer is open to other forum readers too, please feel free to DM me). 

Not that I know of! I can ask if they're open to something in this vein.

Vasco Grilo🔸
It would be nice if you could ask. Feel free to follow up here if they reply.

How long does the happiness continue when you're not meditating? A range of times would be helpful.

Initially the afterglow would last 30 minutes to a few hours. Over time it's gotten closer to a default state unless various stressors (usually work-related) build up and I don't spend enough time processing them. I've been trading off higher mindfulness to get more work done and am not sure if I'm making the right trade-offs, but I expect it'll become clearer over time as I get more data on how my productivity varies with my mindfulness level. 

How long d

... (read more)
Kat Woods 🔶 ⏸️
Amazing! Thank you! 

I intended to distinguish upregulated breathing/controlled hyperventilation like the linked video from (any kind of) meditation with the intention of getting into jhanas. 

Kat Woods 🔶 ⏸️
OK, thanks!

Fair and understandable criticisms. Some quick responses: 

1) I've attempted to share resources and pointers that I hope can get people similar benefits for free without signing up for a retreat (like Rob Burbea's retreat videos, Nadia Asparouhova's write-up with meditation instructions, and other content). Since I found most of these after my Jhourney retreat I can't speak from experience about their effectiveness. I'd be excited for more people to experiment and share what does and doesn't work for them, and for people with more experience to share w... (read more)

Yarrow Bouchard 🔸
Sorry for the very late reply to an ancient thread. Just want to point out one small thing that is not helping your case: This is a typical pseudoscience/fake medicine line. The doctors and pharma companies want you to be sick! That's their business model! Doesn't add up. [Edited on Nov. 22, 2025 at 10:15 PM to add: think about the incentives of the researchers who came up with cognitive behavioural therapy or who tested its efficacy. Think about the incentives of the clinical psychology professors or other instructors who teach therapists how to apply techniques like CBT. What are they incentivized to do? Are they incentivized, in any plausible way, to develop or teach techniques that don't work? Skeptical, iconoclastic, anti-establishment hunches like these about science or medicine typically start to look implausible very quickly when you start to look at them more closely.]

Thanks! You can fill out this form to get notified about future retreats. Their in-person retreats might well be worth doing too if you're able to, and they generate similar results according to their survey. They're more expensive and require taking more time off work, but given their track record I wouldn't be surprised if they were worth the money and time. I have a friend who has done an in-person and online retreat with them and preferred the in-person one. 

That said, I have a hard time imagining my experience being as positive doing the retreat in p... (read more)

Conditioned on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don't think it's likely. 

Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions which would significantly limit the potential of new intelligent life even if it did re-emerge. ... (read more)

Thanks for the feedback, and I’m sorry for causing that unintended (but foreseeable) reaction. I edited the wording of the original take to address your feedback. My intention for writing this was to encourage others to figure things out independently, share our thinking, and listen to our guts - especially when we disagree with the aforementioned sources of deference about how to do the most good. 

I think EAs have done a surprisingly good job at identifying crucial insights, and acting accordingly. EAs also seem unusually willing to explicitly acknow... (read more)

Thanks for your comment, and I understand your frustration. I'm still figuring out how to communicate the specifics of why I feel strongly that incorrectly applying the neglectedness heuristic (as a shortcut to avoid investigating whether investment in an area is warranted) has led to tons of lost potential impact. And yes, US politics are, in my opinion, a central example. But I also think there are tons of others I'm not aware of, which brings me to the broader (meta) point I wanted to emphasize in the above take.

I wanted to focus on the case for mor... (read more)

I really appreciated this post, and think there is a ton of room for more impact with more frequent and rigorous cross-cause prioritization work. Your post prompted me to finally write up a related quick take I've been meaning to share for a while (which I'll reproduce below), so thank you!

***

I've been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. That said, I think we're doing extremely well in ... (read more)

arvomm
Thank you for your comment Kuhanj. I share your belief that the EA movement would benefit from the type of suggestions you outlined on your quick take. I particularly valued seeing more discussions on heuristics, for they are often as limited as they are useful!  Regarding your 'Being slow to re-orient' suggestion, an important nuance comes to mind: movements can equally falter by pivoting too rapidly. When a community glimpses promise in a new X direction, there's a risk of hastily redirecting significant resources, infrastructure, and attention toward it prematurely. The wisdom accumulated through longer reflection and careful evidence collection often contains (at least some) genuine insight, and we should be cautious about abandoning established priorities to chase every emerging "crucial consideration" that surfaces. As ever, the challenge lies in finding that delicate balance between responsiveness and steadfastness — being neither calcified in thinking nor swept away by every new intellectual current.

I've been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. (That said, I think we're doing extremely well in relative terms, and greatly appreciate the community's willingness to engage in such difficult prioritization.)

Reasons why I think we're not realizing our potential:

  • Not realizing our lack of clarity about how to most impactfully allocate resources (time, money, attention, etc). Relatedly, an
... (read more)
Jordan Arel
I very much agree that we need less deference and more people thinking for themselves, especially on cause prioritization. I think this is especially important for people who have high talent/skill in this direction, as I think it can be quite hard to do well. It’s a huge problem that the current system is not great at valuing and incentivizing this type of work, as I think this causes a lot of the potentially highly competent cause prioritization people to go in other directions. I’ve been a huge advocate for this for a long time. I think it is somewhat hard to systematically address, but I’m really glad you are pointing this out and inviting collaboration on your work, I do think concentration of power is extremely neglected and one of the things that most determines how well the future will go (and not just in terms of extinction risk but upside/opportunity cost risks as well.) Going to send you a DM now!

I wish this post - and others like it - had more specific details when it comes to this kind of criticism, and had a more specific statement of what they are really taking issue with, because otherwise it sort of comes across as "I wish EA paid more attention to my object-level concerns", which approximately everyone believes.

If the post is just meant to represent your opinions, that's perfectly fine, but I don't really think it changed my mind on its own merits. I also just don't like withholding private evidence; I know there are often good reasons for... (read more)

I agree with the substance but not the valence of this post.

I think it's true that EAs have made many mistakes, including me, some of which I've discussed with you :)

But I think that this post is an example of "counting down" when we should also remember the frame of "counting up."

That is — EAs are doing badly in the areas you mentioned because humans are very bad at the areas you mentioned. I don't know of any group where they have actually-correct incentives, reliably drive after truth, get big, complicated, messy questions like cross-cause prioritisatio... (read more)

calebp
I agree with this take, but I think that this is primarily an agency and "permission to just do things" issue, rather than people not being good at prioritisation. It requires some amount of being willing to be wrong publicly and fail in embarrassing ways to actually go and explore under-explored things, and in general, I think that current EA institutions (including people on the EA forum) don't celebrate people actually getting a bunch of stuff done and testing a bunch of hypotheses (otoh, shutting down your project is celebrated which is good imo - though ~only if you write a forum post about it). I guess overall, I don't think that people are as bottlenecked on how much prioritisation they are doing, but are pretty bottlenecked on "just doing things" and not staring into the void enough to realise that they should move on from their current project or pivot (even when people on the forum will get annoyed with them). In part, these considerations converge because it turns out that many projects are fairly bottlenecked by the quality of execution rather than by the ideas themselves, and actually trying out a project and iterating on it seems to reliably improve projects. In general, my modal advice for EAs has shifted from "try to think hard about what is impactful" towards "consider just doing stuff now" or "suppose you have 10x more agency, what would you do? Maybe do that thing instead?".

I've been thinking about coup risks more lately so would actually be pretty keen to collaborate or give feedback on any early stuff. There isn't much work on this (for example, none at RAND as far as I can tell). 

I think EAs have frequently suffered from a lack of expertise, which causes pain in areas like politics. Almost every EA and AI safety person was way off on the magnitude of change a Trump win would create - gutting USAID easily dwarfs all of EA global health by orders of magnitude. Basically no one took this seriously as a possibility, or at... (read more)

Seems worth trying! I'd be interested in reading a write-up if you decide to run it.

A few quick thoughts: 

Many arguments about the election’s tractability don’t hinge on the impact of donations. 

  • Donating is not the only way to contribute to the election. Here is a public page showing the results of a meta-analysis on the effectiveness of different uses of time to increase turnout (though the number used to estimate the cost-effectiveness of fundraising is not sourced here). The analysis itself is restricted, but people can apply to request access. 
  • Polling and historical data suggest this election has a good chance of b
... (read more)
Gil
Ok. Sorry about the tone of the last response, that came off more rude than I would have liked. I do find it unsettling or norm-breaking to withhold information like this, but I guess you have to do what they allow you to do. I remain skeptical.
Gil
This number is crazy low. It seems bad to make a Cause Area post on the forum that entirely rests on implausibly low numbers taken from some proprietary data that can’t be shared. You should at least share where you got this data and why we should believe it.

To add a bit of context in terms of on-the-ground community building, I've been working on EA and AI safety community building at MIT and Harvard for most of the last two years (including now), though I have been more focused on AI safety field-building. I've also been helping out with advising for university EA groups, workshops/retreats for uni group organizers (both EA and AI safety), and organized residencies at a few universities to support beginning-of-year EA outreach in 2021 and 2022, along with other miscellaneous EA CB projects (e.g. working with the CEA events team last year).

I do agree though that my experience is pretty different from that of regional/city/national group organizers.

James Herbert
Thanks Kuhan!

Good catch - added that to the eligibility section for the AAAS Rapid Response Cohort in AI blurb. Thanks!

I would guess the ratio is pretty skewed in the safety direction (since uni AIS CB is generally not counterfactually getting people interested in AI when they previously weren't; if anything, EA might have more of that effect), so maybe something in the 1:10 - 1:50 range (1:20ish point estimate for the ratio of median capabilities research to median safety research contributions from AIS CB)?

I don't really trust my numbers though. This ratio is also more favorable now than I would have estimated a few months/years ago, when contribution to AGI hype from AIS CB would have seemed much more counterfactual (but also AIS CB seems less counterfactual now that AI x-risk is getting a lot of mainstream coverage). 

Quadratic Reciprocity
I would be surprised if the accurate number is as low as 1:20 or even 1:10. I wish there was more data on this, though it seems a bit difficult to collect since at least for university groups most of the impact (to both capabilities and safety) will occur a few+ years after the students start engaging with the group.  I also think it depends a lot on what the best opportunities available to them are. It would depend heavily on what opportunities to work on AI safety exist in the near future versus on AI capabilities for people with their aptitudes. 
NickLaing
I'm impressed the ratio is that favourable! One note to be careful of is that just because people start off hyped about AI safety doesn't mean they stay there - there's a decent chance they will swing to the dark side of capabilities, as we saw with OpenAI and probably others as well. Just making the point that the starting ratio might look more favourable than after a few years.
Linch
Thanks, this is helpful!

I think donations in the next 2-3 days would be very useful (probably even more useful than door-knocking and phone-banking if one had to pick) for TV ads, but after that the benefits diminish somewhat steeply over the remaining days.

Caro
Kuhan is probably right. However, after speaking to someone on Team Carrick today, I learned there is still room for funding for the campaign's ads, which are different from the PAC's ads and show more of Carrick talking directly to people. So giving now still makes sense (for the next 48 hours) even though the effects are smaller than a few days ago. 

Thank you for all your encouragement over the past few years for students and newer community members to post on the forum, and for actually making it easier and less scary to do so. I definitely would not have felt anywhere near as comfortable getting started without your encouragement and post editing offers. I've replaced Facebook binging with EA Forum binging since I both enjoyed it so much and found it really valuable for my learning. You will be missed, and incredibly hard to replace. Thank you for all your hard work!

Answer by kuhanj

Hi Michael, thanks for writing this up! These are important topics, and I'd love to see more discussion of them. Just want to clarify two potential misconceptions: I'm not claiming it's no longer hard to get a direct work job, although I do feel reasonably confident that it isn't as hard to get funding to do direct work as it was a few years ago (either through employment or grants, though I would probably still stand by this statement if we were only considering employment). Secondly, on this part:
 

Kuhan mentioned that it's not easy to get an EA job

... (read more)

Edited for clarity - it might be a US thing, but I'd encourage others to try it out and see how it goes unless there are strong reasons not to.

RyanCarey
It happens in Australian universities. Probably anywhere there's a large centralised campus. Wouldn't work as well in Oxbridge, though, because the teaching areas, and even the libraries, are spread all across the city.

Regarding the concern that broad distribution of books is low-impact due to low completion rates/readership/engagement, do you have a sense of how impactful reading groups are when coupled with broad distribution? They can have a high initial fixed cost and then pretty low marginal costs for repeated run-throughs (e.g. it takes a long time to make discussion sheets the first time you run the reading group, but afterwards you have them ready; you just create breakout rooms, and if you don't participate in them, this requires minimal effort/time).  

BrianTan
I had the assumption that reading groups are much less impactful and lower quality without having a facilitator in each breakout room. Has EA Stanford experimented with reading groups without a trained facilitator? If so, how are these done - do you just give them discussion questions to talk about with each other? Would a participant be assigned as a facilitator per breakout room?
Answer by kuhanj

80,000 Hours as a (very thorough) resource for individuals trying to do good/maximize their impact with their careers feels like a big accomplishment. I found EA when I googled "Highest impact careers/how to have the biggest impact with your career", and didn't find anything anywhere near as compelling as 80,000 Hours. I think their counterfactual impact is probably quite massive given how insufficient impact-oriented career advice is outside of 80K (and the broader communities/research/thinking/work that have led to 80K being what it is). 

Most of the... (read more)

Thanks Jake! Stanford EA and I would definitely not be where we are now without your initial mentorship/ motivation, and ongoing guidance and support! I can't thank you enough. :) 

Great points, thanks for commenting Ben!  Responding to each of the points: 

In my experience, running local group events was like an o-ring process. If you're running a talk, you need to get the marketing right, the operations right, and the follow up right. If you miss any of these, you lose most of the value. This means that having an organiser who is really careful about each stage can dramatically increase the impact of the group. So, I'd highlight 'really caring' as one of the key traits to have.

I think I mostly agree with this (and strongly... (read more)

Benjamin_Todd
That's great to hear! I should have clarified my points weren't meant as disagreements - I think we're basically on the same page. Yes, I agree. One way to reconcile the two comments is that you need to focus on the 20% of most valuable activities within each aspect (marketing, ops, follow up), but you can't drop any aspect. I also agree that it's likely that 'really focusing on what drives impact' is more important than 'really caring', though I think simply caring and trying can go a fairly long way. On living together, I'm not concerned about living with friends in general (esp for students), just the idea of living 100% with EAs, while EA is also your main thing outside of studying. The more general point is that I think it's valuable to have friendships outside of EA. So, if someone is new to EA, I might encourage them to live with EAs for a few years to make deeper friendships there, but if someone is already heavily involved, I might encourage them to live with people who aren't in EA. The intro to EA talk looks cool! I made some comments on a copy that I've shared with your Stanford email address.

That's very sweet, thank you Jonas! I have been in some conversations about EA essay/idea competitions similar to what you've mentioned, but haven't thought much about it. We're also thinking about ideas like hackathons as experimental outreach mechanisms. How do you think something like what you're proposing would compare to the more standard intro EA programming (like intro talks and fellowships)?

Jonas Hallgren 🔸
One of the bigger parts is probably that it would have a public prize attached to it. I get the feeling from people outside EA that altruism is seen as charity, not something you can actually build a career in. A person has a certain threshold of motivation before digging into EA. I believe this threshold would be easier to get through if you had a potential explicit reward at the end of it (a carrot on a stick). It might also generate some interesting ideas that could be tried out. Essentially, the idea is that it would turbocharge the fellowships, as they would have something to apply the ideas of EA to.

Pageviews would also go up a lot if (as suggested in the post) articles from the website were included in intro fellowships/other educational programs. I'll discuss adding these articles/others on the site to our intro syllabi. 

One potential concern with adding articles from utilitarianism.net is that many new-to-EA people (in my experience running many fellowships) have negative views towards utilitarianism (e.g. find it off-putting, think people use it to justify selfish/horrible/misguided actions, think it's too demanding (e.g. implications ... (read more)

To clarify/set realistic expectations: much of the growth happened in our second year (the 2020-2021 academic year), e.g. all the things mentioned in the intro + summary bullets; the first year mostly involved getting 5-10 highly dedicated core organizers and getting SERI started. I should also caveat this with all the things I had going in my favour (including being in the Bay, being on a CBG, and getting lucky with very dedicated and competent co-organizers).

It can be hard to sacrifice career planning/advancement for group organizing purposes, but as I mentioned in my other comment, running your group well has lots of career benefits (both within the EA community and from the skills you develop by becoming a kick-ass organizer :))!

That makes sense, thank you for expanding on the timeline! I also really appreciate your acknowledgment of other factors. My original comment (intentionally) discounted the many other factors that contribute to a group's success, simply because I am confident that my group has a better-than-average mix of factors and so should not be at its current state.

I 100% agree that it's not a binary trade-off and in fact, if someone is potentially interested in community-building as a career, this could be one of the highest-impact things to do. Even if not, I also agree that exclusively maximizing for EA career prospects is not necessarily the best community norm to set!

Thank you for your kind words Miranda! EA group organizing can be quite difficult when others don't see it as potentially highly impactful and the group isn't doing so well - I hope this post can help change how useful EAs (and in particular students) think community building is, and help us do a better job at it so it feels more intuitively impactful and exciting!

The support system for organizers who want to put a lot of effort into their group is getting better and better. I'm always happy to have calls (or texts/emails) with organizers, to discuss how t... (read more)

Miranda_Zhang
On board with you there! I think there's a lot of great people already trying to do that, like yourself or Catherine Low, but perhaps to inconsistent effect. This might warrant me sitting down with my group and trying to figure out how we got motivated to organize in the first place. : ) Completely not surprised by your experience re: community building being rewarding. As someone who's been very connected to non-EA communities in the past, I definitely think community-building doesn't need to compromise non-community-building priorities! After all, you're directly shaping the future of the EA community and testing the messaging of EA on-the-ground and building connections throughout. Truly, community organizers are doing so many things all at once. Again, very inspired by your + your group's example. So grateful for all the work you do to publicize your experiences and spread best practices!

Thank you so much Kathryn! I'm inspired by all the work you do for WANBAM/mentorship in EA (which I'd love to build on moving forward, it's one of my top priorities), and everything else you do! :) 

[anonymous]
Yeah, it's James and me, funded by EAIF

Strongly agree, I'll add a bullet point on this to the post :)

Answer by kuhanj

I just stumbled across this on my Facebook newsfeed eradicator today and it reminded me of the inspiring quotes thread: 

“How wonderful it is that nobody need wait a single moment before beginning to improve the world.”

~ Anne Frank

Ooh I like the changing profile picture idea, can I add that to the post? (I'll give you credit of course)

mic
Yeah sure!

Do you mean two or more people are sharing their screen at the same time? How does that work? We share our screens for group meetings, but I've never heard of screen-sharing during co-working sessions. Also, wouldn't people feel like they are being watched (or that they might show something private) if they are screen-sharing while working?

Yea we allow multiple participant screen-sharing on Zoom, which does run the risk of people seeing something private, but at least for me it really helps me not succumb to distractions, so the risk is worth it. You can't... (read more)

Hey Akash! Thanks for your comment, and apologies for my late response!

Let me respond to  your individual thoughts:

1- I'd love to hear more about your decision to go with a career-focused post rather than a donation-focused post. I see how someone changing their career could have an immense impact (especially if they are able to find something impactful that they're also very good at). However, I'm skeptical about the proportion of people who would seriously consider changing their career paths as a result of this. Maybe my forecast is off, though-- I

... (read more)

Here are some of my thoughts on EA residencies/moving people into the full-time EA recruiting pipeline that I shared with Buck: 

Bottlenecks

The primary bottlenecks preventing people (who are already interested in EA) from doing high-impact EA work full-time, from what I've seen (based on 2 years running Stanford EA and a few conversations with non-student EAs and community group leaders), in no particular order:

1. Full-time EA work, and the transition it requires, feels too costly (in terms of time, money, moving, social costs, preserving optionality,... (read more)
