So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything people say that pattern matches to something that group would say triggers a reflexive negative reaction. This manifests in various ways: you're inclined to attribute far more to the person's statements than what they're actually saying, or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.
I also suspect that if we take a detached attitude towards this sort...
An example of a particular practice that I think might look kind of innocuous but can be quite harmful to women and minorities in EA is what I'm going to call "buzz talk". Buzz talk involves making highly subjective assessments of people's abilities, putting a lot of weight on those assessments, and communicating them to others in the community. Buzz talk can be very powerful, but the beneficiaries of buzz seem to disproportionately be those who conform to a stereotype of brilliance: a white, upper class male might be "the next big thing"...
I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.
My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.
My advice would be:
When assessing someone's talent, focus on the content of what they're saying/writing, not the general feeling they give you.
There are two different claims here: one is "type x research is not very useful" and the other is "we should be doing more type y research at the margin". In the comment above, you seem to be defending the latter, but your earlier comments support the former. I don't think we necessarily disagree on the latter claim (perhaps on how to divide x from y, and the optimal proportion of x and y, but not on the core claim). But note that the second claim is somewhat tangential to the original post. If type x research is valuable, then even tho...
I suspect that the distinctions here are actually less bright than "philosophical analysis" and "concrete research". I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work. Another examp...
On individual advice: I'd add something about remembering that you are always in charge and should set your own boundaries. You choose what you want to do with your life, how much of EA you accept, and how much you want it to influence your choices. If you're a professional acrobat and want to give 10% of your income to effective charities, that's a great way to be an EA. If someone points out that you also have a degree in computer science and could go work on AI safety, it's fine to reply "I know but I don't want to do that". You don't need to ...
I don't buy this. Perhaps I don't understand what you mean.
To press, imagine we're at the fabled Shallow Pond. You see a child drowning. You can easily save them at minimal cost. However, you don't. I point out you could easily have done so. You say "I know but I don't want to do that". I wouldn't consider that a satisfactory response.
If you then said "I don't need to justify my choices on effective altruist grounds" I might just look at you blankly for a moment and say "wait, what has 'effective altruism' got to do with it? What about, um, just basic eth...