I feel pretty strongly against "weird hugboxing" but I think the main negative effect is an erosion of our own epistemic standards and a reduction in the degree to which we can epistemically defer to one another. I want the EA community to consist of people whose pronouncements I can fully trust, rather than have to wonder if they are saying something because it reflects their considered judgment on that topic or instead because they are signaling good faith, "steelmanning", etc.
I think there's an inviting form of decoupling norms, but it breaks down when conversations fracture into chains. I don't think decoupling norms work when both parties haven't opted in, and so people should switch to the dominant norm of the sphere. An illustrative example is as follows:
Some EAs would see this as a motte-and-bailey rather than getting to the crux, but cruxes can be asymmetric in that different critics bundle claims together (e.g. "woke" critiques combining with more centrist, deontological sensibilities). But I think explanations that are done well are persuasive because they reframe truth-seeking ideas in accessible language, dissolving cruxes to reach agreement and cooperation.
Another macro-level illustration of the comparison:
To be clear, there are harms in trying to be persuasive (e.g. sophistry, lying, motivated reasoning, etc.). But sometimes being persuasive is about speaking the other side's argumentative language.
This is a great comment, and I think it made me get much more of what you're driving at than the (much terser) top-level comment did.
Examples of resources that come to mind:
Again, this is predicated on good-faith critics.
I think something like 30% of hugboxing is good. In the cases where you see it, maybe it could happen less, but a lot of the time I think we are too brutal to non-rationalist critics.
It's really tiring to criticise and I think it's nice to have someone listen and engage at least a bit. If I move straight to "here is how I disagree" I think I lose out on useful criticism in the long run.
Can you give examples of hugboxing you don't like?
Because my internal response is "people think we are too aggressive/dismissive" rather than "people think we listen to them but in a weird/patronising way". And if you mean you don't like it internally, then I am confused as to why you read it.
On the forum I agree hugboxing is worse.
The fact that EAs have been so caught off guard by the "AI x-risk is a distraction" argument, and by its stickiness in the public consciousness, should be worrying for how well calibrated we are about AI governance interventions working the way we collectively think they will. This feels like another Carrick Flynn situation. I might write up an ITT of the AI Ethics side -- I think there's a good analogy to an SSC post that EAs generally like.
I am unsure that "AI x-risk as a distraction" is a big deal. Like, what are their policy proposals? What major actors use this frame?
Great question that prompted a lot of thinking. I think my internal model looks like this:
I'd be very interested to read a post about your thoughts on this (though I'm not sure what 'ITT' means in this context?), and I'm curious which SSC post you're referring to.
I also want to say I'm not sure how universal the 'EAs have been caught so off guard' claim is. Some have been, sure, but plenty were hoping that the AI risk discussion would stay out of the public sphere for exactly this kind of reason.
I always thought the average model for "don't let AI Safety enter the mainstream" was something like (1) you'll lose credibility and be called a loon, and (2) it'll drive race dynamics and salience. Instead, I think the argument AI Ethics makes is "these people aren't so much loons as they are just doing hype marketing for AI products in the status quo and draining counterfactual political capital from real near-term harms".
I think a bunch of people were hesitant about AI safety entering the mainstream because they feared it would severely harm the discussion climate around AI safety (and/or cause it to become a polarized left/right issue).
https://www.alexirpan.com/2024/08/06/switching-to-ai-safety.html
This reaffirms my belief that it's more important to look at the cruxes of existing ML researchers than at the cruxes internal to EAs on AI Safety.
An underrated thing about the (post-)rationalists and adjacent folks is how open they are with their emotions. I really appreciate @richard_ngo's Replacing Fear series, and just a lot of the older LessWrong posts about starting a family with looming AI risk. I really appreciate the personal posting, and when debugging comes from a place of openness and emotional generosity.
I think the model of steelmanning that EAs have could borrow from competitive debating, because it seems really confused as a practice and people mean different things by steelmanning:
I might write about this as a bundle but this imprecision has been bothering me.
I would be interested in a good explainer here! I just wrote a post that probably could have done with me reflecting on what I recommend doing.
Here I'll use your viewpoint to illustrate the ambiguity problem I have. Just going to spitball a bunch of questions I end up asking myself.
Overall, I think a large part of the problem is the phrase "what is your steelman" being Goodharted in the same way "I notice I'm confused" has been, where the original meaning is lost in a miasma of eternal September.
One thing that bothers me about epistemics discourse is this terrible effect of critics picking weak, low-status EAs as opponents for claims about AI risk and then playing credentialism games. I wish there were parity matching of claims in these discussions so they wouldn't collapse.
Impact being heavy-tailed has a very psychologically harsh effect, given the lack of feedback loops in longtermist fields, and I wonder what interpersonal norms one could cultivate amongst friends and the community writ large (loosely held/purely musing, etc.):
I think EAs in the FTX era were leaning hard on hard capital (e.g. mentioning the No Lean Season shutdown), ignoring the social and psychological parts of taking risk and how we can be a community that recognises heavy-tailed distributions without making it worse for those who are not in the heavy tail.
I notice a lot of internal confusion whenever people talk about macro-level bottlenecks in EA:
To be clear, I understand the counterarguments about marginality, and these are exaggerated examples, but I do fear that at its core the way EAs defer means we get the worst of the social planner problem and none of the benefits of the theory of the firm.