And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
I agree that that's how I want the eventual decision to be made. I'm not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian's or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.
This has some flavor of 'X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I'll defer to them', which I think EAs generally say/think/do too often. It's very easy to miss things even when you've worked on something for a while (esp. if it's more in the some-months than many-years range), and outsiders can often actually contribute something important. I think this is already surprisingly often the case with research, and much more so with something like an intro resource, where people's reactions are explicitly part of what you're optimizing for. (Obviously what we care about are new people's reactions, but I still think that people-within-EA reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)
As with everything, there's some risk of the opposite ('not expecting enough of professionals?'), but I think EA is currently too far on the deferential end (at least within EA; I could imagine that it's the opposite with experts outside of EA).
Meta: Rereading your comment, I think it's more likely that it was meant either as a message to 80,000 Hours about how you want them to make their decision eventually, or as something completely different. But I think it's good to leave thoughts on possible interpretations of what people write.
In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)
I found tagging buggy. I tried to tag something yesterday and I believe it didn't go through, although it worked today. The 'S-risks' tag doesn't show up at all in my list of tags for posts, although it's an article. But that might also be something about the difference between tags and articles that I don't understand? I use Firefox and didn't check other browsers.
Is there a consensus on how to use organisation tags? Specifically, is it desirable to have every output that's ever come out of an organisation tagged with it, or only e.g. organisational updates? I've seen the first partly, but scarcely, done and am not sure about my opinion. (I mean things like "This report is published on the EA Forum, and the person who worked on it was at org X at the time and wrote it as part of their job.")
edit: 3) Just adding this on here... Is there a way to tag everything that has one tag with another tag? (I'm thinking of the 'economics' tag + lots of more specific tags; 'moral philosophy' and 'metaethics', etc.)
I'm not a very experienced researcher, but I think in my short research career, I've had my fair share of dealing with self-consciousness. Here are some things I've found:
Note that I mostly refer to the "I'm not worth other people's time", "This/I am dumb", "This is bad, others will hate me for it" type of self-consciousness. There might be other types, e.g. "I'm nervous I'm not doing the optimal thing and feel bad because then more morally horrible things will happen", in a way that's genuinely not related to self-confidence, self-esteem etc., to which my experience will not apply. This is apart from the obvious fact that different things work for different people.
Some general thoughts:
Things I do to improve:
In the moment of feeling self-conscious: I would second Jason that talking to others about the object level is magic.
I also have a rule of talking to others about my research/sharing write-ups whenever it feels most uncomfortable to do so. Those are often the moments when I'm most hardstuck: an anxious mind doesn't research well, and exchange with others really helps, but my anxiety traps me in that bad state!
I do something easy to commit myself to something that seems scary but important for research progress, so in the moment of self-consciousness I can't just back out again. Examples:
Social accountability is great for this. When I'm in a really self-conscious-can't-work state, I sometimes commit myself to sending someone something I haven't started yet in 30 minutes, no matter what state it's in.
I also often find it way easier to say "yes, I'll do this talk/discussion round at date X", or to message another person "Hey, I have this idea I wanted to discuss, can I send you a doc?" (even though I don't have a good doc yet, because I think the idea is crap), than to do the thing itself. So whenever I feel able to do the first, I do it, and future Chi has to deal with it, no matter how self-conscious she is.
Often, just starting is the hardest thing. At least for me, that's where feeling super self-conscious often happens and stops me from doing anything. I sometimes set a timer for 5 minutes to do work. That's short enough that it feels ridiculous not to be able to do it, and afterwards I often feel way less self-conscious and can just continue.
For self-consciousness reasons, I struggle with saying "Yes, I think this is good and promising" about something I work on, which makes me useless at analyzing whether e.g. a cause area is promising, which is incidentally exactly my task right now. So I looked for things that felt similar and uncomfortable in the same way, and settled on trying to post at least one opinion/idea per weekday in pre-specified channels. (I had to give up after a week, but I think it was really good and I want to continue once I have more breathing room.)
For the same reason as above, I deliberately go through my messages and delete all anxious qualifiers. I can't always do that in all contexts because they make me too self-conscious, and I allow myself that.
I appreciate that the above self-exposure-therapy examples might be too difficult for some and might seem intimidating. (I've definitely been at "I'd never, ever, ever write a comment on the forum!" I'm still self-conscious about what I up- and downvote, and no one can even see that.) But you can also make progress at a lower level: just try whatever seems manageable and be gentle with yourself. (And back off if you notice you bit off too much.) However, it can still be pretty daunting, and it might not always be possible to do the above completely independently. (E.g. I think I only got started when I spent several weeks at a research organisation I respect a lot, felt terrible for much of it, but couldn't back out and just had to do or die, and had a really good environment. I'm not sure "sticking through" would have been possible for me without that organisational context.)
I personally benefited a lot from listening to other people's stories and general content on failing, self-esteem etc. I'm not sure how applicable that is to others who are trying to improve research self-consciousness, because I never looked at it through a pure research lens, but it's motivating to have a positive ideal as well, and not just "self-consciousness is bad." I usually consume non-EA podcasts and books for this.
On positive motivation:
Related to the last point of positive ideals: Recently, I found it really inspiring to look at some people who just seem to have no fear of expressing their ideas, enthusiasm, think things through themselves without anyone giving them permission etc. And I think about how valuable just these traits are apart from the whole "oh, they are so smart" thing. I find that a lot more motivating than the smartness-ideal in EA, and then I get really motivated to also become cool like that!
I guess for me there's also a gender thing in where the idea of becoming a kickass woman is double motivating. I think I also have the feeling that I want to make progress on this on behalf of other self-conscious people that struggle more than me. I'm not really sure why I think that benefits them, but I just somehow do. (Maybe I could investigate that intuition at some point.) And that also gives me some positive motivation.
Ironically, I felt somewhat upset reading OP, I think for the reason you point out. (No criticism towards OP, I was actually amused at myself when I noticed)
I think some reason-specific heterogeneity in how easily something is expressible, and the norms in your society, also play a role:
I guess the common thread here is feeling threatened and like one needs to defend one's opinion because it's likely to be undermined. I guess the remedy would be... Really making sure the other person feels taken seriously (including by themselves) and safe and says everything they want? (Maybe someone else can come up with something more helpful and concrete) That's obviously just the side of the non-offended person, but I feel like the ways the upset person could try to improve in such situations is even more generic and vague.
Obviously, this is just one type of being emotional during conversations. E.g. if what I say explains any meaningful variance at all, it probably does so less for 4) than for 3). (Maybe not coincidentally, since I'm not male.)
Thanks for the reply!
Honestly, I'm confused by the relation to gender. I'm bracketing out genders that are both not-purely-female and not-purely-male because I don't know enough about the patterns of qualifiers there.
Interestingly, maybe not instructively, I was kind of hesitant to bring gender into my original post. Partly for good reasons, but partly also because I worried about backlash or at least that some people would take it less seriously as a result. I honestly don't know if that says much about EA/society, or solely about me. (I felt the need to include "honestly" to make it distinguishable from a random qualifier and mark it as a genuine expression of cluelessness!)
Reply 3/3
"displaying uncertainty or lack of knowledge sometimes helps me be more relaxed"
I think there's a good version of that experience and I think that's what you're referring to, and I agree that's a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)
I think displaying uncertainty or lack of knowledge always helps you be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply for what exactly I mean by that and what I contrast it with.) That's why people do it. If you usually anxiously qualify and force yourself not to, that feels scary. I still think practicing not to do it will help with self-confidence, as in taking yourself more seriously, in the long run. (Apart from more efficient communication.)*
Of course, sometimes you just need to qualify things (in the anxious-social-signalling sense) to get yourself in the right state of mind (e.g. to feel safe to openly change your mind later, to freely speculate, or to say anything at all in the first place), or allowing yourself the habit of anxious social signalling makes things so much more efficient that you should absolutely go for it and not beat yourself up over it. Actually, an almost-ideal healthy confidence probably also includes some degree of what I call anxious social signalling, and it's unrealistic to get rid of all of it.
Reply 2/3
I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more, and that I hate. One drawback is that they are already quite difficult, so I'm worried the ask is too ambitious for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are definitely quite high on my personal "how scary are things" ranking, and I would expect that's the case for most people.)

OTOH, at the community level, the approach of encouraging more quantification etc. might well be more tractable: community-wide communication norms are very fuzzy and seem hard to influence on the whole. (I noticed that I didn't draw the distinction quite where you drew it. E.g. "acknowledgements that arguments changed your mind" are also about communication norms.)

I am a little bit worried that it might backfire. More quantification and betting could mostly encourage already-confident people to do so (while underconfident people are still stuck at "wouldn't even dare to write a forum comment because that's scary"), make the online community seem more confident, and make entry for underconfident people harder, i.e. scarier. Overall, I think the reasons to encourage a culture of betting, quantification etc. are stronger than the concerns about backfiring. But I'm not sure that's the case for other norms that could have that effect. (See also my reply to Emery.)
Reply 1/3 Got it now, thanks! I agree there's a distinction between confident and uncertain, and it's an important point. I'll spend this reply on that distinction, another response on the interventions you propose, and another on your statement that qualifiers often help you be more relaxed.
The more I think about it, the more I think there's quite a bit for someone to unpack here conceptually. I haven't done so, but here's a start:
I think you're mostly referring to 1 and 2. I think 1 and 2 are good things to encourage, and 4 and 5 are bad things to encourage. Although I think 4/5 also have their functions and shouldn't be fully discouraged (more in my [third reply](https://forum.effectivealtruism.org/posts/rWSLCMyvSbN5K5kqy/chi-s-shortform?commentId=un24bc2ZcH4mrGS8f)). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the "this is just a habit" kind of 3. I think 1 and 2 look quite different from 4 and 5. The main problem is that it's hard to see whether something is 3 or 4 or both, and that often you can only know if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5; e.g. today I said "I could be wrong", which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert-deference memes, that might encourage 4.)
I would love to see more expressions that are obviously 1, and less of what could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1, 3, 4, or 5 is the intention behind a qualifier.
edit: No idea, I really love 3
FWIW, depending on the definition of 'very concerning', I wouldn't find this surprising. I think people often read things, vaguely update, know that there's another side of the story that they don't know, have the thing they read become a lot less salient, happen to not see the follow-up because they don't check the forum much, and end up having an updated opinion (e.g. about ACE in this case) much later without really remembering why.
(e.g. I find myself very often saying things like "oh, there was this EA post that vaguely said X, and maybe you should be concerned about Y because of this, although I don't know how exactly this ended" when others talk about some X-or-Y-related topic, esp. when the post is a bit older. My model of others is that they then don't go check, but some of them go on to say "Oh, I think there's a post that vaguely says X, and maybe you should be concerned about Y because of this, but I didn't read it, so don't take me too seriously" etc., and this post sounds like something this could happen with.)
Maybe I'm just particularly epistemically unvirtuous and underestimate others. Maybe for the people who don't end up looking it up but just have this knowingly-shifty-somewhat-update, the information isn't very decision-relevant and it doesn't matter much. But I generally think information that I got with lots of epistemic disclaimers, and that has lots of disclaimers attached in my head, does influence me quite a bit, and writing this makes me think I should just stop saying dubious things.