Thanks; this was helpful. Did the same thing and sent the post to my family, too.
Agree. It seems potentially pretty damaging to people’s reputations to make this information public (and attached to their names); that strikes me as a much bigger penalty than the bans. There should, at a minimum, be a consistent standard, and I’m inclined to think that standard should be having a high bar for releasing identifying information.
Thanks for writing this up and for doing the fellowship! Would you mind saying a bit more about how participants' career plans changed as a result of doing the fellowship (if you know) and/or how you plan to monitor their plans going forward?
I have emailed her and will update this comment when she gets back to me! I think there was an ~8-page questionnaire that evolved over time (since there were probably about 12 nannies/au pairs, and lessons were learned along the way) and a Skype interview, though.
Thanks so much for these recommendations! They’re really helpful, and I’m likely to donate to one of the recommended organizations this giving season.
I do have a question: which of the recommended organizations have close ties to EA? I realize that “close ties” is a vibe-y concept, but things like “incubated by CE,” “director has been involved in EA since 2015,” or “received most of their funding from EA funders prior to being recommended by ACE” would count. (I’d be eager to hear others’ input on how to cash out “close ties.”)
I’m not asking because close ties to EA are a bad thing; clearly, if someone is an EA and starts an impactful charity based on ITN reasoning, etc., that’s not an argument against funding them. That said, I do think EA is rife with conflicts of interest, and that (1) this presumably affects who receives grants/support/endorsements, so I’d likely subject these organizations to closer scrutiny before donating, and (2) in general, we should strive to be as transparent as possible about this stuff.
Interesting! I'd like to see an analysis of things correlated with most/all of the children in one family turning out well, because I'd be more inclined to emulate the parenting style of parents where (1) all of their kids became happy, reasonably successful, well-adjusted adults than ones where (2) one kid became a superstar.
A combination of full-time nannies (who probably worked 40 hrs/week and didn’t live with us) before we were school-aged, and live-in au pairs (who probably worked 6:30–9am and 4–8pm on weekdays, and maybe one full day per weekend) once we were.
Thanks for writing this—super helpful. Just one anecdote on the childcare front: my siblings and I had full-time nannies/au pairs from when we were babies until we could drive, because our parents worked full-time and often traveled. (My mom had an intensive screening process for said nannies/au pairs, and chose excellent ones.) I view this as having been a really good thing for my development—I became less shy, valued my time with my parents more, learned about other parts of the world/cultures/ways of life, was mentored by women in their 20s as a pre-teen/teenager, and developed close relationships with some amazing people. I think parents sometimes view hiring external childcare as a necessary evil, but for me (and, I think, my siblings) it was a really positive aspect of our childhoods.
I do think the portrayal of EAs could be worse, but it's pretty bad? EAs are accused of being hypocritical (e.g., way more concerned with money than they would care to admit), culty, overly trusting, overconfident, and generally uncool.
Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a "very serious issue," and lacking "any perspective at all."
This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketch: these issues are hard to understand, and so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc) will likely lead to fewer bad takes, but, more importantly, risks leading to an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore, but echo chambers are hard to escape from.