"To see the world as it actually is, rather than as I wish it to be."
People may or may not also be interested in my comments on Metaculus and Twitter:
Information source: Not sure if this is the right reference class, but it's interesting to note that the most famous historical utilitarians seem to have married late.
1. Jeremy Bentham never married, and AFAIK never had a romantic relationship.
2. John Stuart Mill married Harriet Mill (also a utilitarian) when he was 45. She was 44. If his autobiography is to be believed, she was his first and only serious romantic interest.
3. Henry Sidgwick married at 38 (though some biographers think he was privately gay).
4. Bertrand Russell seemed to be a bit of an outlier, marrying at 22, 49, 64 and 80.
5. Derek Parfit married at 67.
6. Peter Singer (also an outlier) married at 22.
The earlier examples are especially interesting, because I'd expect the average age of marriage to be much lower, historically. Of course, they might also generalize less well.
This is interesting. The numbers here are not surprising based on my independent observations, but the phenomenon is in some sense fairly surprising. Several other considerations:
1. Anecdotally, conditional upon marriage, the rate of divorce among my EA friends seems much higher than among my non-EA friends of similar ages. So it is not the case that EAs are careful/slow to marry because they are less willing to make long-term commitments, or because they are more okay with pre-marital cohabitation.
Obviously in any given case this should not be a cause for blame (in all the situations I have sufficient detail about, it appears that divorce was the best option). However, collectively the pattern calls for some explanation.
2. Along with some of the other commenters, I share the anecdote that my EA friends are much less likely to be married than my non-EA friends, or other groups. To add to the list of anecdotes: among Googlers who a) I know from non-EA contexts, eg former coworkers, b) are older than me, and c) I know well enough to be >80% confident of their relationship statuses, I think >50% are married. I think the numbers are closer to 25-30% for Googlers of a similar age range I know through EA (with some nuances, like I know one person who probably would have been married if not for polyamory), and similar (if not slightly lower) numbers for non-Googler EAs I know well.
3. My inside view is that if you don't update on the observed data and just consider which characteristics would make EAs more or less likely to be married, there are a bunch of factors that push EAs towards "more" as opposed to "less". Possibly controversial, but consider:
A. EAs are, on average, disproportionately high in traits that are seen as positive for long-term relationships/marriages in the broader population. This includes obvious traits like elite college attendance (speaking as someone who has not attended one), high earning potential, and intellectual engagement, but also subtler traits like having good relationships with their parents (which should be an indicator of being, on average, better at long-term relationships), general willingness to make sacrifices, communication ability, and willingness to try different approaches to conflict resolution.
B. You might expect this to be a signaling problem (maybe EAs have positive traits that are hard for others to discover), but I think the meta-level evidence is against this? For example, elite college backgrounds and intellectual ability are relatively transparent. You might also expect EAs to on average be healthier and more conventionally attractive than baseline (for example, obesity prevalence among Americans aged 20-39 is ~40% for both men and women, and I think the numbers are much lower in EA).
C. EAs are much more likely to be in international relationships than baseline, and the relative legal benefit of marriage is usually higher for international marriages than domestic marriages.
I moderately think this is the wrong approach on the meta-level.
1. We observe a phenomenon where X demographic is less likely to exhibit Y characteristic.
2. You're coming up with a list of explanations (E1, E2, E3) to explain why X is less likely to have Y, and then stopping when the variance is sufficiently explained.
3. However, this ignores that there might be reasons why your prior should be that X is more likely to have Y.
And on the object level, I agree with the other commentators that EAs often draw from groups that are less, rather than more, likely to be single.
I think it's possible that last year was just unusually slow for people (possibly pandemic-related?)
I looked at 3B1B (the only YouTube explainer series I'm familiar with), and since 2015 Grant has produced ~100 high-quality videos, which is closer to ~20 videos/year than ~10/year.
I'm not familiar with the others.
and could plausibly be ~20% more productive in a year in terms of the main, highly-produced videos
I feel like this is low-balling potential year-to-year variation in productivity. My inside view is that 50-100% increases in productivity are plausible.
To be clear, I think your overall comment added to the discussion more than it detracted, and I really appreciate you making it. I definitely did not interpret your claims as an attack, nor did I think it's a particularly egregious example of a bravery framing. One reason I chose to comment here is because I interpreted you (correctly, it appears!) as someone who'd be receptive to such feedback, whereas if somebody started a bravery debate with a clearer "me against the immoral idiots in EA" framing, I'd probably be much more inclined to just ignore it and move on.
It's possible my bar for criticism is too low. In particular, I don't think I've fully modeled meta-level considerations like:
1) That by only choosing to criticize mild rather than egregious cases, I'm creating bad incentives.
2) You appear to be a new commenter, and by criticizing newcomers to the EA Forum I risk making the EA Forum less appealing.
3) That my comment may spawn a long discussion.
Nonetheless I think I mostly stand by my original comment.
Yeah that makes a lot of sense. I think the rest of your comment is fine without that initial disclaimer, especially with your caveat in the last sentence! :)
Meta: Small nitpick, but I would prefer if we reduce framings like
This is going to sound controversial here (people are probably going to dislike this but I'm genuinely raising this as a concern)
See Scott Alexander on Against Bravery Debates.
I also notice myself being confused about the output here. I suspect being good at YouTube outreach while fully understanding technical AI safety concepts is a higher bar than you're claiming, but I also intuitively would be surprised if it takes an average of 2+ months to produce a video (though perhaps he spends a lot of time on other activities?
for example, he’s already helping existing organizations produce videos about their ideas
alludes to this.
I think the set of values commonly ascribed to EA is both more totalizing and a stronger attractor state than most counterfactuals.
Right, I think the argument as written may not hold for the UK (and other locations with very low prevalence but R ~=1). My intuitions, especially in recent months, have mostly been formed from a US context (specifically California), where R has never been that far away from 1 (and current infectious prevalence closer to 0.5%).
That said, here are a bunch of reasons to argue against "Alice, an EA reading this forum post, being infected in London means Alice is responsible for 30 expected covid-19 infections (and corresponding deaths at 2020/08 levels)."
(For simplicity, this comment assumes an Rt ~= 1, a serial interval of ~one week, and a timeframe of consideration of 6 months)
1. Notably, an average Rt ~= 1 means that the median/modal number of onward infections per case is very likely 0. So there's a high chance that any given chain will terminate either before Alice infects anybody else, or soon afterwards. Of course, as EAs with aggregative ethics, we probably care more about the expectation than the median, so the case has to be made that we're less likely on average to infect others. Which brings us to...
2. Most EAs who take some precautions are going to be less likely to be infected than average, so their expected Rt is likely <1. See Owen's comment and responses. Concretely, if you have a 1% annualized covid budget (10,000 microcovids), which I think is a bit on the high side for London, then you're exposing yourself to roughly 200 microcovids a week. Against a baseline weekly risk of 500 microcovids, this means you are ~40% as likely as the average person to get covid-19 in a week, which (assuming precautions reduce outward transmission by the same factor, hence a squared term) means P(Alice infects others | Alice is infected) is also ~40% of baseline.
Notably, a lot of your risk comes from model uncertainty, as I mentioned in my comment to Owen, so the real expected Rt(Alice) is > 0.4.
As I write this out, under those circumstances I think a weekly budget of 200 microcovids a week is possibly too high for Alice.
However, given that I live in Berkeley, I strongly suspect that E(number of additional people infected, other than Linch | Linch being infected) is < 1 (especially if you ignore housemates).
3. If your contacts are also cautious-ish people, many of whom are EAs and/or have read this post, they are likely to also take more precautions than average, so P(Alice's child nodes infecting others | Alice's child nodes being infected) is also lower than baseline.
4. There's also a classist aspect here, where most EAs work desk jobs and aren't obligated to expose themselves to lots of risk the way essential workers are.
5. Morally, this will involve a bunch of double-counting. Eg, if you imagine a graph where Alice infects one person, her child node infects another person, etc., for the next 6 months, you have to argue that Alice is responsible for 30 infections, her child node is responsible for 29, etc. Both fully counterfactual credit assignment and proposed alternatives have some problems in general, but in this covid-specific case I don't think an aggregate responsibility of 465 infections, when only 30 people will be infected, makes a lot of sense. (Sam made a similar point here, which I critiqued because I think there should be some time dependence, but I don't think time dependence should be total).
6. Empirical IFR rates have gone down, and are likely to continue doing so as a) medical treatment improves, b) people make mostly reasonable decisions with their lives (self-select on risk levels), plus c) there's a reasonable probability of viral doses going down due to mask usage and the like.
7. As a related point to #3 and #6, I'd expect Alice's child nodes to be not just more cautious but also healthier than baseline (they are not randomly drawn from the broader population!).
8. There's suggestive evidence of substantial behavioral modulation (which is a large factor keeping Rt ~= 1). If true, this means any marginal infection (or lack thereof) has a smaller-than-expected effect, as other people adjust their behavior to take more or less risk.
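To make points 2 and 5 above concrete, here is a back-of-envelope sketch of both calculations. All numbers are the illustrative assumptions from the comment (a 1% annual budget, a 500-microcovid baseline, a 30-link chain), not measurements:

```python
# Point 2: weekly risk budget vs. baseline risk.
annual_budget = 10_000        # microcovids: a "1% annualized covid budget"
weekly_budget = 200           # microcovids/week (10_000 / 52 ≈ 192, rounded up)
baseline_weekly_risk = 500    # microcovids: assumed average weekly infection risk
relative_risk = weekly_budget / baseline_weekly_risk   # 0.4

# If precautions reduce outward transmission by the same factor
# (the "squared term"), Alice's expected Rt is roughly:
population_rt = 1.0
expected_rt_alice = relative_risk * population_rt      # ~0.4

# Point 5: naive responsibility accounting down a 30-link chain:
# Alice gets credit for 30 infections, her first child node for 29, etc.
aggregate_responsibility = sum(range(1, 31))           # 30 + 29 + ... + 1 = 465

print(expected_rt_alice, aggregate_responsibility)     # 0.4 465
```

The 465 figure is what makes the double-counting vivid: the chain only ever contains 30 infected people, but summed "responsibility" is 15.5x that.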
Counterarguments, to argue that E(# of people infected | Alice is infected) >> 30:
1. Maybe there's a nontrivial number of worlds where London infections spike again. In those worlds, assuming a stable Rt ~= 1 is undercounting. (And at 0.05% prevalence, a lot of E(# infected) is dominated by the tails.)
2. Maybe 6 months is too short of an expected bound for getting the pandemic under control in London (again tail heavy).
3. Reinfections might mess up these numbers.
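Argument 1 and counterargument 1 hinge on the same distributional point: with Rt ~= 1, most chains die out quickly while the expectation is carried by rare large chains. A quick simulation illustrates this, under the simplifying assumption of Poisson offspring with mean 1 and a weekly serial interval over 6 months (real transmission is likely more overdispersed, which would make the tails matter even more):

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's method for sampling a Poisson variate; fine for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def chain_total(generations=26, r=1.0):
    """Total downstream infections from one index case over `generations`
    weekly serial intervals, with Poisson(r) offspring per case."""
    current, total = 1, 0
    for _ in range(generations):
        current = sum(poisson(r) for _ in range(current))
        total += current
        if current == 0:  # chain has gone extinct
            break
    return total

totals = [chain_total() for _ in range(20_000)]
frac_zero = sum(t == 0 for t in totals) / len(totals)  # ~e^-1 ≈ 0.37
mean_total = sum(totals) / len(totals)                 # ~26, driven by rare large chains
print(f"{frac_zero:.2f} {mean_total:.1f}")
```

Under these assumptions, roughly 37% of index cases infect nobody at all, and the typical chain is tiny, yet the mean stays near 26: E(# infected) really is dominated by the tails, which is why the median-vs-expectation distinction in point 1 matters.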
In London, 5-10% have been infected
Where are you getting this range? All the estimates I've seen for London are >10%, eg this home study and this convenience sample of blood donors.