Jeff_Kaufman

Comments

College and Earning to Give

I haven't seen other resources that talk about the cost of college this way, but I also don't spend much time looking at financial planning advice?

The approach in this post is only relevant to a pretty small fraction of people:

  • Your children need to be likely enough to be admitted to the kind of institution that commits to meeting 100% of demonstrated financial need, or that otherwise has a similar "100% effective tax rate", such that it's worth considering.
  • You need to not be very interested in saving money for your own future use. The CSS Profile suggesting 5%/y for parental assets means that with three kids at 4y each you might be asked for 60% of assets. (Note that the CSS Profile does ask about parental retirement accounts, and some schools do consider those assets.)
  • Your earnings need to be low enough just before and during college, either because your career has never been highly lucrative or because you are willing to change your line of work for that time period.
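To make the arithmetic in the second bullet concrete, here's a quick sketch. The 60% figure is the simple sum of 5%/y over 12 years; if instead each year's 5% is assessed on whatever assets remain after earlier years' payments (an assumption, not something the CSS Profile formula specifies here), the cumulative draw works out lower:

```python
# Rough sketch of the parental-asset math: 5%/y over 12 years of
# college (three kids at 4y each, assuming no overlapping years).
rate = 0.05
years = 12

# Simple sum, as in the comment above: 60% of assets.
simple_total = rate * years

# If each year's 5% applies only to remaining assets, the
# cumulative draw is lower, roughly 46%.
compounding_total = 1 - (1 - rate) ** years

print(f"simple: {simple_total:.0%}, compounding: {compounding_total:.0%}")
```

The 12-year figure also assumes the kids' college years don't overlap; with overlap there are fewer total years, though some schools adjust the expected contribution when multiple children are enrolled at once.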

I think this is likely enough that a 529 plan or similar does not make sense for our family, but I'm planning to revisit when my kids are getting close to high school (and I have a better sense of their academic standing) before considering a career change.

Why do so few EAs and Rationalists have children?

The Plough link is broken; it should be https://www.plough.com/en/topics/life/parenting/the-case-for-one-more-child

Some preliminaries and a claim

I don't think this is actually a reasonable request to make here?

EA Relationship Status

What do you think the "for life" adds to the pledge if not "for the rest of your lives"?

EA Relationship Status

See the discussion here: https://www.facebook.com/jefftk/posts/10100184609772372?comment_id=10100184674817022

It doesn't account for very much of the data, unfortunately.

EA Relationship Status

"For life" sounds just as permanent to me as "till death do us part", if less morbid.

Some thoughts on deference and inside-view models

Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the hypothesis approached it directly, and were reasonable to do so.

Some thoughts on deference and inside-view models

I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal-directed, especially not toward the kind of goal you can chain back to world improvement; it's more understanding-directed.

It reminds me of Sarah Constantin's post about the trade-off between output and external direction: https://srconstantin.wordpress.com/2019/07/20/the-costs-of-reliability/

For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. But on the other hand, it's also possible that the best path involves highly productive, interest-following, understanding-building research where most individual projects don't seem promising from an end-to-end view. And maybe even where most aren't useful from an end-to-end view!

Again, I'm not sure here at all, but I don't think it's obvious you're right.

How should longtermists think about eating meat?

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality." https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

How should longtermists think about eating meat?

I wonder how much we can trust people's given reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they care about animals. I could imagine that if you asked them while they were still veg they would say "mostly because I care about animals", but if you asked them afterwards you'd get more "I was doing it for health reasons", because talking about how you used to do it for the animals makes you sound selfish.
