In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)
I found tagging buggy. I tried to tag something yesterday, and I believe it didn't get through, although it worked today. The "S-risks" tag doesn't show up in my list to tag posts at all, although it's an article. But that might also be something about the difference between tags and articles that I don't understand? I use Firefox and didn't check other browsers.
Is there a consensus on how to use organisation tags? Specifically, is it desirable to have every output that's ever come out of an organisation tagged to it, or only e.g. organisational updates? I've seen the first partly, but scarcely, done and am not sure about my opinion. (I mean things like "This report is published on the EA forum and the person who worked on this report was at org X at the time and wrote it as part of their job.")
edit: 3) Just adding this on here... Is there a way to tag everything that has one tag with another tag? (I'm speaking of the 'economics' tag + lots of more specific tags; 'moral philosophy' and 'metaethics', etc.)
I'm not a very experienced researcher, but I think in my short research career, I've had my fair share of dealing with self-consciousness. Here are some things I find:
Note that I mostly refer to the "I'm not worth other people's time", "This/I am dumb", "This is bad, others will hate me for it" type of self-consciousness. There might be other types of self-consciousness, e.g. "I'm nervous I'm not doing the optimal thing and feel bad because then more morally horrible things will happen", that are genuinely not related to self-confidence, self-esteem, etc., and to which my experience will not apply. This is apart from the obvious fact that different things work for different people.
Some general thoughts:
Things I do to improve:
In the moment of feeling self-conscious: I would second Jason that talking to others about the object level is magic.
I also have a rule of talking to others about my research/sharing writeups whenever it feels most uncomfortable to do so. Those are often the moments when I'm most hardstuck: an anxious mind doesn't research well, and exchanging with others really helps, but my anxiety traps me in that bad state!
I do something easy to commit myself to something that seems scary but important for research progress, so in the moment of self-consciousness I can't just back out again. Examples:
Social accountability is great for this. When I'm in a really self-conscious, can't-work state, I sometimes commit to sending someone, within 30 minutes, something I haven't even started yet, no matter what state it's in.
I also often find it way easier to say "yes, I'll do this talk/discussion round on date X", or to message another person "Hey, I have this idea I wanted to discuss, can I send you a doc?" (even though I don't have a good doc yet, because I think the idea is crap), than to do the thing itself. So whenever I feel able to do the first, I do it, and future Chi has to deal with it, no matter how self-conscious she is.
Often, just starting is the hardest thing. At least for me, that's where feeling super self-conscious often happens and stops me from doing anything. I sometimes set a timer for 5 minutes to do work. That's short enough that it feels ridiculous not to be able to do it, and afterwards I often feel way less self-conscious and can just continue.
For self-consciousness reasons, I struggle with saying "Yes, I think this is good and promising" about something I work on, which makes me useless at analyzing whether e.g. a cause area is promising, which is incidentally exactly my task right now. So I looked for things that felt similar and uncomfortable in the same way and settled on trying to post at least one opinion/idea per weekday in pre-specified channels. (I had to give up after a week, but I think it was really good and I want to continue once I have more breathing room.)
For the same reason as above, I deliberately go through my messages and delete all anxious qualifiers. I can't do that in all contexts because it makes me too self-conscious, and I allow myself that.
I appreciate that the above self-exposure-therapy examples might be too difficult for some, and might seem intimidating. (I've definitely been at "I'd never, ever, ever write a comment on the forum!" I'm still self-conscious about what I up- and downvote, and no one can even see that.) But you can also make progress on a lower level: just try whatever seems manageable and be gentle with yourself. (And back off if you notice you bit off too much.)
However, it can still be pretty daunting, and it might not always be possible to do the above completely independently. (E.g. I think I only got started when I spent several weeks at a research organisation I respect a lot, felt terrible for much of it, but couldn't back out and just had to do or die, and had a really good environment. I'm not sure "sticking through" would have been possible for me without that organisational context.)
I personally benefited a lot from listening to other people's stories and general content on failing, self-esteem, etc. I'm not sure how applicable that is to others trying to improve on research self-consciousness, because I never looked at it from a pure research lens, but it's motivating to have a positive ideal as well, and not just "self-consciousness is bad." I usually consume non-EA podcasts and books for this.
On positive motivation:
Related to the last point of positive ideals: Recently, I found it really inspiring to look at some people who just seem to have no fear of expressing their ideas, enthusiasm, think things through themselves without anyone giving them permission etc. And I think about how valuable just these traits are apart from the whole "oh, they are so smart" thing. I find that a lot more motivating than the smartness-ideal in EA, and then I get really motivated to also become cool like that!
I guess for me there's also a gender thing here, where the idea of becoming a kickass woman is doubly motivating. I think I also have the feeling that I want to make progress on this on behalf of other self-conscious people that struggle more than me. I'm not really sure why I think that benefits them, but I just somehow do. (Maybe I could investigate that intuition at some point.) And that also gives me some positive motivation.
Ironically, I felt somewhat upset reading OP, I think for the reason you point out. (No criticism towards OP, I was actually amused at myself when I noticed)
I think some reason-specific heterogeneity also plays a role, in how easily something is expressible and in the norms of your society:
I guess the common thread here is feeling threatened and like one needs to defend one's opinion because it's likely to be undermined. I guess the remedy would be... Really making sure the other person feels taken seriously (including by themselves) and safe and says everything they want? (Maybe someone else can come up with something more helpful and concrete) That's obviously just the side of the non-offended person, but I feel like the ways the upset person could try to improve in such situations is even more generic and vague.
Obviously, this is just one type of being emotional during conversations. E.g. if what I say explains any meaningful variance at all, it probably does so less for 4) than for 3). (Maybe not coincidentally, since I'm not male.)
Thanks for the reply!
Honestly, I'm confused by the relation to gender. I'm bracketing out genders that are both not-purely-female and not-purely-male because I don't know enough about the patterns of qualifiers there.
Interestingly, maybe not instructively, I was kind of hesitant to bring gender into my original post. Partly for good reasons, but partly also because I worried about backlash or at least that some people would take it less seriously as a result. I honestly don't know if that says much about EA/society, or solely about me. (I felt the need to include "honestly" to make it distinguishable from a random qualifier and mark it as a genuine expression of cluelessness!)
"displaying uncertainty or lack of knowledge sometimes helps me be more relaxed"
I think there's a good version of that experience and I think that's what you're referring to, and I agree that's a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)
I think displaying uncertainty or lack of knowledge always helps you be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply for what exactly I mean by that and what I contrast it with.) That's why people do it. If you usually anxiously qualify and force yourself not to, that feels scary. I still think practicing not doing it will help with self-confidence, as in taking yourself more seriously, in the long run. (Apart from more efficient communication.)*
Of course, sometimes you just need to qualify things (in the anxious social signalling sense) to get yourself in the right state of mind (e.g. to feel safe to openly change your mind later, freely speculate, or say anything at all in the first place), or allowing yourself the habit of anxious social signalling makes things so much more efficient that you should absolutely go for it and not beat yourself up over it. Actually, an almost-ideal healthy confidence probably also includes some degree of what I call anxious social signalling, and it's unrealistic to get rid of all of it.
I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more and that I hate. One drawback with them is that they are already quite difficult, so I'm worried that it's too ambitious of an ask for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are definitely quite high on my personal "how scary are things" ranking, and I would expect that that's the case for most people.)
OTOH, on the community level, the approach to encourage more quantification etc. might well be more tractable. Community wide communication norms are very fuzzy and seem hard to influence on the whole. (I noticed that I didn't draw the distinction quite where you drew it. E.g. "Acknowledgements that arguments changed your mind" are also about communication norms.)
I am a little bit worried that it might have backfire effects. More quantification and betting could mostly encourage already confident people to do so (while underconfident people are still stuck at "wouldn't even dare to write a forum comment because that's scary"), make the online community seem more confident, and make entry for underconfident people harder, i.e. scarier. Overall, I think the reasons to encourage a culture of betting, quantification, etc. are stronger than the concerns about backfiring. But I'm not sure that's the case for other norms that could have that effect. (See also my reply to Emery.)
Got it now, thanks! I agree there's a difference between confident and uncertain, and it's an important point.
I'll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.
The more I think about it, the more I think there's quite a bit for someone to unpack here conceptually. I haven't done so, but here's a start:
I think you're mostly referring to 1 and 2. I think 1 and 2 are good things to encourage, and 4 and 5 are bad things to encourage, although 4/5 also have their functions and shouldn't be fully discouraged (more in my [third reply](https://forum.effectivealtruism.org/posts/rWSLCMyvSbN5K5kqy/chi-s-shortform?commentId=un24bc2ZcH4mrGS8f)). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the "this is just a habit" kind of 3.
I think 1 and 2 look quite different from 4 and 5. The main problem is that it's hard to see whether something is 3 or 4 or both, and often you can only tell if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5: e.g. today I said "I could be wrong", which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert deference memes, that might encourage 4.)
I would love to see more expressions that are obviously 1, and less of what could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1,3,4,5 is their intention behind a qualifier.
edit: No idea, I really love 3
I just wondered whether there is a systematic bias in how much advice there is in EA for people who tend to be underconfident versus people who tend to be appropriately or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, it's mostly because they seem harmful for underconfident people to hear.
Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.
Way in which this could be true but unclear in sign: people tend to post advice that would be helpful to themselves, and there are more appropriately or overconfident people in the community than underconfident ones.
Way in which this could be true but appropriate: advice that would be harmful when overconfident people internalize it tends to be more harmful than advice that's harmful to underconfident people. Hence, people post proportionally less of the first.
(I don't think the vast space of possible advice just has more advice that's harmful for underconfident people to hear than advice that's harmful for overconfident people to hear.)
Maybe memes/norms that could be harmful for underconfident people to hear, or the properties that make them harmful, are also just more salient to me.
Thanks for the reply and for linking the post, I enjoyed reading the conversation. I agree that there's an important difference. The point I was trying to make is that one can look like the other, and that I'm worried a culture of epistemic uncertainty can accidentally foster a culture of anxious social signalling, esp. when people who are inclined to be underconfident can smuggle in anxious social signalling disguised (to the speaker/writer themselves) as epistemic uncertainty. And because anxious social signalling can superficially look similar to epistemic uncertainty, they see other people in their community show similar-ish behavior and see that behavior be rewarded.
Not sure how to address this without harming epistemic uncertainty, though. (Although I'm inclined to think the right trade-off point accepts more risk of losing some of the good communication of epistemic uncertainty.)
Or was your point that you disagree that they look superficially similar? And hence, one wouldn't encourage the other? And if that's indeed your point, would you independently agree or disagree that there's a lot of anxious social signaling of uncertainty in effective altruism?