nonn

250 · Joined Feb 2018 · Posts: 1 · Comments: 25

No, though maybe you're using the word "intrinsically" differently? For the (majority) consequentialist part of my moral portfolio: the main intrinsic bad is suffering, and wellbeing (a somewhat broader category) is intrinsically good.

I think any argument about creating people etc. is instrumental - will they or won't they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future, including effects before they're born (e.g. on women's lives). TBH, given your above arguments, I'm confused by the focus on abortion - it seems like you should be just as opposed to people choosing not to have children, and should focus on encouraging/supporting people having kids.

For now, I think the ~main thing that matters from a total-view longtermist perspective is making it through "the technological precipice", where permanent loss of sentient life or of our values is somewhat likely - so other total-view longtermist arguments arguably flow through effects on this, plus influencing the trajectory for good. Abortion access seems good for civilization's trajectory (women can have children when they want, their lives & health aren't derailed, more women involved in developing powerful technology probably makes those fields more cautious/less rash, fewer 'unwanted children' [who probably have worse life outcomes], etc.), so abortion access seems good on this view too.

Maybe related: in general when maximizing, I think it's probably best to find the most important 1-3 things, then focus on those. (e.g. for the temperature of my house: focus on the thermostat setting + outside temperature + insulation quality, and ignore body heat & similar small factors)

I don't think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. More likely a negative effect than a positive one imo, especially if the mechanism for increasing near-term population is restricting abortion (this is not a random sample of lives!)

I also think it seems bad for the general civilization trajectory (partially norm-damaging, but mostly just the direct effects on women & children), and probably bad for our ability to make investments in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion-restriction is bad from a total-longtermist perspective.

I guess I did mean aggregate in the 'total' well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or the short run, tbh).

For total-view longtermism, I think the most important things are that civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower-conflict, investments are made to improve resilience to large catastrophes, etc. Restricting abortion seems kinda bad for several of those things, and positive for none. So it seems like total-view longtermism, even ignoring all other reasons to think this, says abortion-restriction is bad.

I guess part of this is a belief that in the long run, the number of morally-valuable lives & total wellbeing (e.g. in 10 million years) is largely uncorrelated or anti-correlated with near-term world population. (Though I also think restricting abortion is one of the worst ways to go about increasing near-term population, even for those who do think near-term & very-long-term populations are pretty positively correlated.)

"Abortion is morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people, on the assumption that the average existing person's life will have positive value."

I'm a bit confused by this statement. Is a world where people don't have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me.

To be clear, I don't think it's worth discussing abortion at length, especially considering bruce's comment. But I really don't think the number of people currently existing says much about well-being in the very long run (arguably it's negatively correlated). And even if you wanted to increase near-term population, reducing access to abortion is a very bad way to do that, with lots of negative knock-on effects.

Agree that was a weird example.

Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren't obviously promising at the time, but are doing good work now.

I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and it creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think this is conscious - more a vague 'this feels wrong' when people say [thing I'm not the best at / dislike] is the most important. This is not to say set X doesn't have major problems.

They might more often have useful community critiques imo, e.g. more likely to notice social blind spots that community leaders are oblivious to.

Also, I am concerned about motivated reasoning within the community, but don't really know how to correct for this. I expect the most-upvoted critiques will be the easy-to-understand plausible-sounding ones that assuage the problem above or social feelings, but not the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism

I'd add a much more boring cause of disillusionment: social stuff

It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then watch the friend group gradually get filtered by who gets accepted to prestigious jobs or does 'more impactful' things in the community's estimation (often genuinely more impactful!).

Then sometimes they just start hanging out with cooler people they meet at their jobs, or just get genuinely busy with work, while their old EA friends are left on the periphery (+ the gender imbalance piles relationship stuff on top). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.

Your second question - "Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?" - seems to ignore that a major group EAs would be running against is Democrats in primaries.

So it's not only that you're creating large incentives for Republicans to attack EA - you're also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn's opponent & somewhat attacking Flynn for crypto-billionaire-sellout stuff.

That seems potentially pretty harmful too. It'd be much harder to be an active group at top universities if progressive groups strongly disliked EA.

Which I think they would, if EAs ran against progressives often enough that Warren or Bernie or AOC criticized EA more strongly. That would be in line with the incentives we're creating & the general vibe [pretty skeptical of a bunch of white men, crypto billionaires, etc.].

Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? i.e. in half of universes you die, & in half you more than double. From the perspective of all the things I might care about in the multiverse, this is just "make more stuff that I care about exist in the multiverse, with certainty".

Or more intuitively, "with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous".

Or if you repeat it, you have "move all civilizations into a few giant universes, and make them dramatically more prosperous."

Which is clearly good under most views, right?
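(A minimal sketch of the arithmetic here, in Python - my own illustration, not anything from the original discussion. The payoff factor c = 2.5 is an arbitrary stand-in for "more than double": a single-history reading sees survival probability collapse toward zero, while the branch-weighted total across Everett worlds keeps growing.)

```python
# Illustrative sketch (assumed numbers): a repeated double-or-nothing bet
# where each round wipes out half the branches and multiplies value by
# c > 2 in the surviving half.

def repeated_bet(n_rounds, c=2.5, initial_value=1.0):
    """Return (survival prob, value in surviving branch, branch-weighted total)."""
    p_survive = 0.5 ** n_rounds                   # single history: almost sure ruin
    surviving_value = initial_value * c ** n_rounds
    weighted_total = p_survive * surviving_value  # = initial_value * (c/2)**n
    return p_survive, surviving_value, weighted_total

for n in [1, 5, 10, 20]:
    p, v, total = repeated_bet(n)
    print(f"n={n:2d}  P(survive)={p:.2e}  "
          f"surviving branch={v:.2e}  weighted total={total:.2e}")
```

With c > 2, the weighted total grows like (c/2)^n even as P(survive) vanishes - numerically, that's the "few giant universes" picture: nearly all histories end, but the measure-weighted amount of stuff-you-care-about increases every round.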
