Denise_Melchin

Comments

Tentative Reasons You Might Be Underrating Having Kids

Ah, when you said 'significant amount' I assumed you meant a lot more. 10% of the total does not seem like much to me.

Tentative Reasons You Might Be Underrating Having Kids

Sorry, I didn't want to imply Caplan was making a more nuanced argument than you suggested! I do think he makes a much more nuanced argument than the OP suggests, however.

EAs seem generally receptive to resources like Emily Oster’s books, Bryan Caplan’s book, or Scott Alexander’s Biodeterminist Guide (and its sequel), which all suggest to varying degrees that a significant amount of the toil of parenting can be forgone with near-zero cost.

I think this is not only false, but also that none of the authors actually claim it.

Tentative Reasons You Might Be Underrating Having Kids

I am not excited. In my experience it is common for parents of young children to have a lot of ideas on this that they are keen to implement, but to dial back as their kids get older. Implementing such ideas is a lot of work! You are not able to pursue a full-time career while fully homeschooling your kids, and you would forfeit all the benefits of them growing bigger and needing you less. Also, in my experience most parents realise that outdoing the traditional school system (or its alternatives) with homeschooling is a much higher bar than they thought. This was definitely true for me. (My oldest is ~12.)

Tentative Reasons You Might Be Underrating Having Kids

Paraphrasing Caplan without double-checking his sources: the shared environmental effects on politics and religion are on political and religious labels, not necessarily on actions. So your kid might also call themselves a Christian, but not actually go to church much.

I agree we shouldn't discourage EAs from having kids too much for some of the reasons you mention, but I am not sure who you are arguing against. I think anti-kid sentiment used to be stronger in the early days of EA, but I have not seen it around in years.

Justifying having children by the low chance that they will have a large impact later seems like a bad idea to me. It might hurt your relationship with them or, worse, cause mental health issues. Have children if you want them, don't have any if you don't.

As Abby has said, I don't think a significant part of parenting toil can actually be forgone. To be fair, I don't think Scott or Bryan Caplan actually claims that it can be! Caplan argues against ferrying your kids to lots of different after-school activities. But frankly, I don't know any parent who does this in the first place.

I am not able to comment on how having children has impacted my aspirations or productivity, as I had my first child before I encountered EA (or finished school, for that matter).

Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)

Thank you for sharing!

My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally.

I share the experience that my concern is stronger when I am in a worse mood, but I am not sure I share your conclusion.

My concern comes from an intuitive judgement when I am in a bad mood. When I am in a good mood it requires cognitive effort to remember how badly off many other people and animals are.

I don't want to deprioritise the worst off in favour of creating many happy lives in the future just because I have a very privileged life and "forget" how badly off others are.

Why I am probably not a longtermist

This is a collection of links to relevant content published since my post, for ease of reference.

Focusing on the empirical arguments for prioritising x-risks instead of the philosophical ones (an approach I could not be more supportive of):

  1. Carl Shulman’s 80,000 Hours podcast episode on the common-sense case for existential risk

  2. Scott Alexander writing about the terms 'long-termism' and 'existential risk'

On the definition of existential risk (as I find Bostrom’s definition dubious):

  1. Linch asking how existential risk should be defined

  2. A comment thread in a different question by Linch

  3. Zoe’s paper, which also contains other material I have not yet read in full

How GCBRs could remain a solved problem, thereby getting us closer to existential security:

  1. A blogpost by Carl, which was cross-posted to the EA Forum later than it was published on the blog

Free-spending EA might be a big problem for optics and epistemics

You should keep in mind that high-earning positions enable a large amount of donations! Money is a lot more flexible in terms of which cause it can be deployed to. In light of current salaries, one could even work on x-risks as a global poverty earning-to-give (EtG) strategy.

Can we agree on a better name than 'near-termist'? "Not-longtermist"? "Not-full-longtermist"?

I think 'neartermist' is completely fine. I have no negative associations with the term, and suspect the only reason it sounds negative is that longtermism is predominant in the EA community.

Democratising Risk - or how EA deals with critics

Is there a non-PDF version of the paper available (e.g. HTML)?

From skimming, a couple of the arguments seem to be the same ones I brought up here, so I'd like to read the paper in full, but knowing myself I won't have the patience to get through a 35-page PDF.
