Just a point on inclusiveness: throughout this post, you implicitly assume that the average effective altruist is a heterosexual man-- the sort of person who would find a girlfriend at EA Global, has Will MacAskill as his competition, and who might tell cute girls about the drowning child thought experiment. That kind of thing tends to be really alienating to women and LGBT+ people reading! It's the same way you would feel kind of alienated if you read a post assuming that you are a woman and you'd be getting a boyfriend at EA Global. One easy way you can make posts like this more inclusive is by gender-swapping things: for example, you might keep your drowning child example, but say "social skills to find a boyfriend at EA Global." (Will MacAskill should probably be kept as it is, because for better or for worse calling prominent men dreamy is much less socially laden than calling prominent women dreamy.)
Thank you! You're right. That's absolutely a flaw. In the future, when I write things like this, I'll try to be more careful about highlighting that both I and my conservative friends are American and I can't speak to other countries.
Hiring someone to watch my kid instead of trying to work during naps and in the evenings.
Pregnancy may cause insomnia, both while you're pregnant and postpartum (even if someone else is taking care of the baby or you've sleep-trained the baby).
At all times, I have a set of topics to think about during downtime, such as showers and walks. (I try to include several different topics, including at least one piece of fiction I'm writing.) If I can't sleep, I lie still in bed and think about one of my topics. I find I get a lot of creative insight, I avoid anxious ruminating, and I often drift off back to sleep.
Don't drink caffeine late in the afternoon, and if you use stims or other insomnia-causing medication, try to take them as early in the day as possible.
I do not intend Near-Term EAs to be participants' only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.
Near-Term EAs has been in open beta for a week or two while I ironed out the kinks. So far, I have not found any issues with people being unusually closed-minded or intolerant of far-future EAs. In fact, we have several participants who identify as cause-agnostic and at least one who works for a far-future organization.
The EA community climate survey linked in the EA survey has some methodological problems. When academics study sexual harassment and assault, it's generally agreed upon that one should describe specific acts (e.g. "has anyone ever made you have vaginal, oral, or anal sex against your will using force or a threat of force?") rather than vague terms like harassment or assault. People typically disagree on what harassment and assault mean, and many people choose not to conceptualize their experiences as harassment or assault. (This is particularly true for men, since many people believe that men by definition can't be victims of sexual harassment or assault.) Similarly, few people will admit to perpetrating harassment or assault, but more people will admit to (for example) touching someone on the breasts, buttocks, or genitals against their will.
I'd also suggest using a content warning before asking people about potentially traumatic experiences.
If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.
I'd be interested in an elaboration on why you reject expected value calculations.
My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance for me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if-- as is true much of the time in career choice-- I can look at data on the average person's chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal's muggings.
Can you explain why (and whether) you disagree?
IIRC, Open Phil often prefers not to be a charity's only funder, which means they leave the charity with a funding gap that could perhaps be filled by the EA Fund.
Well, yes, anyone can come up with all sorts of policy ideas. Policy expertise in a particular field lets a person sort good policies from bad ones, because they are more aware of possible negative side effects and unintended consequences than an uninformed person is. I don't think the fact that a person endorses a particular policy means they haven't thought about other policies.
Is your claim that Chloe Cockburn has failed to consider right-wing policy ideas, and thus has not done the due diligence to know that what she recommends is actually the best course? If so, what is your evidence for this claim?