Mental health advocate and autistic nerd with lived experience. Working on my own models of mental health, especially around practical paths to happiness, critique of popular self-help & therapy, and neurodivergent mental health. 50% chance of pivoting to online coaching in 2025.
IG meme page: https://www.instagram.com/neurospicytakes/
I think that "99% business as usual" for several years is still going to be a "good enough" strategy for most people, even if the threat of AI catastrophe or mass unemployment is imminent within the next two decades. The specifics of timelines do not really change my point: even if 99% of fully-remote jobs become automatable in roughly 6-8 years, there are several steps between that and most of the human workforce actually being displaced, and I suspect those steps will take another 5-20 years. Even if AGI is achieved, not everything is equally tractable to automate. I suspect that AI-to-hardware-solution timelines may well progress more slowly, e.g. achieving reliable robot automation may continue to be difficult for a number of years.
I love NVC for this. To pick one example: instead of expressing moral judgments on actions and decisions as bad or wrong (which can come across as judgmental and put people off whatever preference you wanted to communicate), you make your value preference explicit. E.g. rather than saying “violence is wrong,” we might say “I value the resolution of conflicts through safe and peaceful means.”
Another concept I love is consent culture applied to information and discussion. Would you like to hear more about X? Are you open to hearing feedback on Y? To discussing Z while I play devil's advocate? When I receive unsolicited advice and "impact interrogation" at EAGx events (pretty much always during ad-hoc or speed-meeting convos), it comes across as adversarial and makes me feel unsafe at those conferences.
I hold the same view about "non-naive" maximization being suboptimal for some people. Further clarification is in my other comment.
I have concerns about the idea that a healthy-seeming maximizer can prove that maximization is safe. In mental health, we often come across "ticking time bomb" scenarios, which I'm invoking here as a sort of Pascal's mugging (except that there is plenty of knowledge and evidence that this mugging does in fact take place, and not uncommonly). What if someone merely appears to be healthy, and that appearance is concealing, and contributing to, a serious emotional breakdown later in their life, potentially decades on? This process isn't mysterious or without warning signs, but what is obvious to mental health professionals may not be obvious to EAs.
I don't reject the possibility that healthy maximizers can exist. (Potentially there is a common ground where a rationalist may describe a plausible strategy as maximization, and I, as a mental health advocate, would say it's not, and our disagreement in terminology is actually consistent with both our frameworks.) If EA continues to endorse maximizing, how about we at least do it in a way that doesn't directly align with known risks of ticking time bombs?
This is an important question, which I left out because my full answer is extremely nuanced and it isn't central to my intention for this post (to stimulate discussion about the mental health of the community).
Here's a brief version of my response:
A good maximizer would know to take mental health into account and be good at it. However, it's very difficult to figure out what good mental health actually requires. Good mental health needs more than "the minimum amount of self-care", yet maximizers will always be asking whether they could get away with less self-care. I argue that maximization as a strategy will always be suboptimal when either of two conditions is present (and I believe they often are): when self-care is less visible and measurable than the other terms in the maximization equation, and when the requirements for good mental health include things that necessarily involve not maximizing. For example: embracing failure and imperfection, trusting one's body, and giving yourself permission to adjust your social/moral/financial obligations at any time are not compatible with any rationality-based maximization. (Wild thought: maybe they could be compatible with "irrational maximization"?) I believe I can refute pretty much any angle resembling "but the maximizer could just bootstrap off your criticism and be better/smarter about maximization", but there are too many forms of this to pre-emptively address here.
These two strategies are worlds apart, despite seeming to share a common interest: treating self-care as a task necessary for impact versus treating impact as an important expression within self-care. I advocate for the second approach, and I believe that for some people it can lead to greater impact AND greater happiness.
Exploring what's helpful is definitely an interesting angle that generates ideas. One that comes to mind is how EA communicates around the Top Charities Fund: basically, "let us do the heavy lifting and we'll do our best to figure out where your donations will have impact". This has two attributes that I like. First, it makes it maximally easy for a reader to accept a TL;DR and feel good about their choice (which is generally positive for a non-EA donor, independent of how good or bad TCF's picks are). Second, the messaging is more neutral and a bit closer to invitational consent culture. Hardcore EA is more likely to imply that you "should" think and care about whether TCF is actually a good fund and decide for yourself, but the consent-culture version might be psychologically beneficial to both EAs and non-EAs while achieving the same or better numeric outcomes.
Does anyone know of a low-hassle way to invoice for services such that the payment goes to a third-party charity rather than to me? It could well be an EA charity if that makes it easier. I'm hoping for something slightly more structured than "I'm not receiving any pay for my services, but I'm trusting you to donate X amount to this charity instead".
I used to frequently come across a certain acronym in EA, used in contexts like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might have been CFAR, referring to CFAR workshops? If so: 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?
As someone out of the loop on the contextual specifics of said people/organizations, I think there's a much simpler explanation than those statements being strategic lies. Firstly, those statements resemble expressions of boundaries within a conversation. To oversimplify, they basically translate to "I don't want to talk about EA". This is the difference between literal speech (preferred by autistic people, for example) and neurotypical speech, where something that would be bizarre or false if interpreted factually is understood as a contextual boundary; it isn't about the facts, and therefore isn't considered lying.
However, even in a more literal sense, I don't think those statements are necessarily false, if you take into account that EA is "everything" to some EAs and "just a speck" to others. If I think maths is the most important thing in the world and I belong to a community with some degree of agreement, then it's easy for me to start accusing people adjacent to or on the borders of the community of downplaying the importance of maths. Every time someone mentions something maths-adjacent without a maths-worshipping tone, they must be being disingenuous. But this would be a fallacy: my equating a lack of worship with lying says more about my worldview than anything else.
A personal example: I identified as an EA for a few years, and now I would consider myself "post-EA", if such a term existed. Both things are true: I invested a lot into EA and was inspired significantly by it, and at the same time I find relatively few of the tools I carry forward to be worth attributing to EA, conversationally or philosophically. EA isn't "one consistent thing", and it's certainly not everything. For example, ranking charities is very EA, but it also exists outside EA, so even if my exposure came through EA, it doesn't necessarily make sense for me to acknowledge EA in a conversation about charity efficiency. The EA-ness of it doesn't mean anything to non-EAs, and it barely means anything to me now that I've integrated the perspectives I want to keep and discarded the rest.