ozymandias's Comments

How Effective Altruists Can Be Welcoming To Conservatives

Thank you! You're right. That's absolutely a flaw. In the future, when I write things like this, I'll try to be more careful about highlighting that both I and my conservative friends are American and I can't speak to other countries.

Burnout: What is it and how to Treat it.

Hiring someone to watch my kid instead of trying to work during naps and in the evenings.

Burnout: What is it and how to Treat it.

Getting pregnant may cause insomnia both while you're pregnant and postpartum (even if someone else is taking care of the baby or you've sleep-trained the baby).

At all times, I have a set of topics to think about during downtime, such as showers and walks. (I try to include several different topics, including at least one piece of fiction I'm writing.) If I can't sleep, I lie still in bed and think about one of my topics. I find I get a lot of creative insight, I avoid anxious ruminating, and I often drift off back to sleep.

Don't drink caffeine late in the afternoon, and if you use stims or other insomnia-causing medication, try to take them as early in the day as possible.

Near-Term Effective Altruism Discord

I do not intend Near-Term EAs to be participants' only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.

Near-Term EAs has been in open beta for a week or two while I ironed out the kinks. So far, I have not found any issues with people being unusually closed-minded or intolerant of far-future EAs. In fact, we have several participants who identify as cause-agnostic and at least one who works for a far-future organization.

Please Take the 2018 Effective Altruism Survey!

The EA community climate survey linked in the EA survey has some methodological problems. When academics study sexual harassment and assault, it's generally agreed that one should describe specific acts (e.g. "has anyone ever made you have vaginal, oral, or anal sex against your will using force or a threat of force?") rather than vague terms like harassment or assault. People typically disagree on what harassment and assault mean, and many people choose not to conceptualize their experiences as harassment or assault. (This is particularly true for men, since many people believe that men by definition can't be victims of sexual harassment or assault.) Similarly, few people will admit to perpetrating harassment or assault, but more people will admit to (for example) touching someone on the breasts, buttocks, or genitals against their will.

I'd also suggest using a content warning before asking people about potentially traumatic experiences.

Fact checking comparison between trachoma surgeries and guide dogs

If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.

Why I left EA

I'd be interested in an elaboration on why you reject expected value calculations.

My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance to me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if, as is true much of the time in career choice, I can look at data on the average person's chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal's muggings.
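The calibration point above can be sketched numerically. The payoff figure below is hypothetical, chosen only to make the arithmetic concrete; the point is that probabilities which feel introspectively identical can differ in expected value by orders of magnitude.

```python
def expected_value(probability: float, payoff: float) -> float:
    """Expected value of a single binary gamble."""
    return probability * payoff

# Hypothetical payoff, in arbitrary units of good done on success.
payoff = 1_000_000

# Two probabilities that may "feel identical" introspectively...
ev_ten_million = expected_value(1e-7, payoff)   # one in ten million
ev_ten_billion = expected_value(1e-10, payoff)  # one in ten billion
# ...yet their expected values differ by a factor of a thousand.

# Probabilities in the career-choice range, where base-rate data often
# exists, differ only tenfold and are easier to tell apart.
ev_one_in_ten = expected_value(0.1, payoff)
ev_one_in_hundred = expected_value(0.01, payoff)

print(ev_ten_million / ev_ten_billion)    # ~1000x gap, hard to calibrate
print(ev_one_in_ten / ev_one_in_hundred)  # ~10x gap, easier to calibrate
```

The mugging-scale gamble's importance hinges entirely on a probability estimate the estimator cannot distinguish from one a thousand times smaller, whereas the career-choice comparison stays within a range where base rates keep the estimate honest.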

Can you explain why (and whether) you disagree?

Introducing the EA Funds

IIRC, Open Phil often prefers not to be a charity's only funder, which means they leave the charity with a funding gap that could perhaps be filled by the EA Fund.

80,000 Hours: EA and Highly Political Causes

Well, yes, anyone can come up with all sorts of policy ideas. If a person has policy expertise in a particular field, it allows them to sort out good policies from bad ones, because they are more aware of possible negative side effects and unintended consequences than an uninformed person is. I don't think the fact that a person endorses a particular policy means that they haven't thought about other policies.

Is your claim that Chloe Cockburn has failed to consider policy ideas associated with the right-wing, and thus has not done her due diligence to know that what she recommends is actually the best course? If so, what is your evidence for this claim?

80,000 Hours: EA and Highly Political Causes

I don't think it would be wise to try to specify and defend that abstract claim in the same post as talking about a specific situation. I take it as given, at least here. Perhaps I will do a followup, but I think it would be hard to do the topic justice in, say, 5-10 hours, which is what I realistically have.

I am confused. If you took it as given, why bother talking about whether Alliance for Safety and Justice and Cosecha are good charities? It surely doesn't matter if someone is good at doing something that you think they shouldn't be doing in the first place. Perhaps you intended to say that you mean to discuss the object-level issue of whether these charities are good and leave aside the meta-level issue of whether EA should be involved in politics, in which case I am puzzled about why you brought up the meta-level issue in your post.

Animal welfare activism is controversial, but it hasn't been subsumed into the culture war in the way immigration, race, and social justice have. Some parts of animal welfare activism, such as veganism, are left-associated, but other parts, like wild animal suffering and synthetic meat, most certainly are not. So in my mind, animal welfare activism is suitable for EA involvement.

I disagree that animal welfare activism hasn't been subsumed into the culture war. For instance, veganism is a much more central trait of the prototypical hippie than immigration opinions are. PETA is significantly more controversial than any equally prominent immigration charity.

I think that wild-animal suffering and synthetic meat are mostly not part of the culture war because they are obscure. I expect that they would become culture-war issues as soon as they become more prominent. Do you disagree? Or do you think that the appropriate role of EA is to elevate issues into culture-war prominence and then step aside? Or something else?

The claim that AI risk is off-putting is becoming less true over time, but EA should not be aiming to appeal to everyone. Rather, I think that EA should aim not to take sides in tribal wars.

Do you mean that EA shouldn't take sides in e.g. deworming, because that's a tribal war between economists and epidemiologists? Or do you mean that they shouldn't take sides in issues associated with the American left and right, even if they sincerely believe that one of those issues is the best way to improve the world? Or something else?