I haven't seen other resources that talk about the cost of college this way, but I also don't spend much time looking at financial planning advice?
The approach in this post is only relevant to a pretty small fraction of people:
I think this is likely enough that a 529 plan or similar does not make sense for our family, but I'm planning to revisit when my kids are getting close to high school (and I have a better sense of their academic standing) before considering a career change.
The Plough link is broken; it should be https://www.plough.com/en/topics/life/parenting/the-case-for-one-more-child
I don't think this is actually a reasonable request to make here?
What do you think the "for life" adds to the pledge if not "for the rest of your lives"?
See the discussion here: https://www.facebook.com/jefftk/posts/10100184609772372?comment_id=10100184674817022
It doesn't account for very much of the data, unfortunately.
"for life" sounds just as permanent to me as "till death do us part", if less morbid
Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the hypothesis approached it directly, and were reasonable to do so.
I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think this makes their work less likely to be useful.
This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal-directed, especially not toward the kind of goal you can chain back to world improvement; it's more understanding-directed.
It reminds me of Sarah Constantin's post about the trade-off between output and external direction: https://srconstantin.wordpress.com/2019/07/20/the-costs-of-reliability/
For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. On the other hand, it's also possible that the best path involves highly productive, interest-following, understanding-building research where most individual projects don't seem promising from an end-to-end view. Maybe even where most aren't useful from an end-to-end view!
Again, I'm not sure here at all, but I don't think it's obvious you're right.
"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality." https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
I wonder how much we can trust people's given reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they care about animals. I could imagine that if you asked them while they were still veg they would say "mostly because I care about animals", but if you asked them afterwards you'd get more "I was doing it for health reasons", because admitting you used to do it for the animals makes you sound selfish?