All of CBiddulph's Comments + Replies

Sounds like there are four distinct kinds of actions we're talking about here:

  1. Bringing about positive lives
  2. Bringing about negative lives
  3. Preventing positive lives
  4. Preventing negative lives

I think I was previously only considering the "positive/negative" aspect, and ignoring the "bringing about/preventing" aspect.

So now I believe you'd consider 3 and 4 to be neutral, and 2 to be negative, which seems fair enough to me.

Why would my thinking that actions like using birth control are morally neutral imply that I should also think that having children is

... (read more)
1
Noah Scales
2y
You are correct about my assessments of 2-4. I would add 5 and 6:

  1. Bringing about conception of positive lives (morally neutral)
  2. Bringing about conception of negative lives (morally negative)
  3. Preventing conception of positive lives (morally neutral)
  4. Preventing conception of negative lives (morally neutral)
  5. Making existing lives more negative (morally negative)
  6. Making existing lives more positive (morally positive)

I see having children as either morally neutral or negative toward the child, not morally positive toward the child. I see having children as morally negative toward other people, in our current circumstances. Overall, any decision to have children is hard to justify as morally neutral.

For the sake of the thought experiment, I guess I would feel more inclined to add:

  7. Bringing about conception of positive lives that are also positive for other people (morally positive)

Is there some perspective or implication that I'm still missing here? I would like to know.

I think Richard was trying to make the point that

  • You believe that actions that bring about or prevent the existence of future people have no moral valence
  • Therefore, you believe that an action that brings about suffering lives is also morally neutral
  • Therefore, you would take any small positive moral trade (like getting a lollipop) in exchange for bringing about arbitrarily many suffering lives

If I'm not misinterpreting what you've said, it sounds like you'd be willing to bite this bullet?

Maybe it's true that you won't actually be able to make these choices, but we're talking about thought experiments, where implausible things happen all the time.

1
Noah Scales
2y
I think that actions that avoid the conception of future people (for example, possible parents deciding to use birth control) have no moral significance, as far as the future moral status of the avoided being goes, since that being never exists.

Why would my thinking that actions like using birth control are morally neutral imply that I should also think that having children is morally neutral? Perhaps I will understand this better if you explain this to me carefully like I'm not that smart.

Cool!

A philosopher shares his perspective on what we should do now to ensure that civilization would rebound if it collapsed.

The summary seems pretty reductive - I think most of the book is about other things, like making sure civilization doesn't collapse at all or preventing negative moral lock-in. I wonder how they chose it.

Yes, it's quite bad. NYT bestseller one-sentence summaries are weirdly bad. The summary of "Gödel, Escher, Bach" was "A scientist argues that reality is a system of interconnected braids"; whoever wrote that sentence clearly hadn't read the book.

It's probably largely for historical reasons: the first real piece of "rational fiction" was Harry Potter and the Methods of Rationality by Eliezer Yudkowsky, and many other authors followed in that general vein.

Also, it can be fun to take an existing work with a world that wasn't very thoroughly examined and "rationalize" it by explaining plot holes and letting characters exploit the rules.

Hi Ada, I'm glad you wrote this post! Although what you've written here is pretty different from my own experience with AI safety in many ways, I think I got some sense of your concerns from reading this.

I also read Superintelligence as my first introduction to AI safety, and I remember pretty much buying into the arguments right away.[1] Although I think I understand that modern-day ML systems do dumb things all the time, this intuitively weighs less on my mind than the idea that AI can in principle be much smarter than humans, and that sooner or lat... (read more)

4
Ada-Maaria Hyvärinen
2y
Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many things you mentioned, such as wondering how much I think intelligence matters because of having wanted to be smart as a kid.

You understood correctly that intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes making estimates on whether problems can be solved with current technology given certain constraints, and it is better to err on the side of caution. Previously, one of my tasks was also to explain to people why AI is not a silver bullet, and that modern ML solutions require things like training data and interfaces in order to be created and integrated into systems. Obviously, if the task is to find out all the things that future AI systems might be able to do at some point, you should take a quite different attitude than when trying to estimate what you yourself can implement right now. This is why I try to take a less conservative approach than would come naturally to me, but I think it still comes across as pretty conservative compared to many AI safety folks.

I also find GPT-3 fascinating, but I think the feeling I get from it is not "wow, this thing seems actually intelligent" but rather "wow, statistics can really encompass so many different properties of language". I love language so it makes me happy. But to me, it seems that GPT-3 is ultimately a cool showcase of the current data-centered ML approaches ("take a model that is based on a relatively non-complex idea[1], pour a huge amount of data into it, use model"). I don't see it as a direct stepping stone to science-automating AI, because it is my intuition that "doing science well" is not that well encompassed in the available training data. (I should probably reflect more on what the concrete difference is.) Importantly, this does not mean I believe there can be no risks (or benefits!) from large language models, and models t

Thanks for the post, I think this does a good job of exploring the main reasons for/against community service.

I've heard this idea thrown around in community building spaces, and it definitely comes up quite often when recruiting. That is, people often ask, "you do all these discussion groups and dinners, but how do you actually help people directly? Aren't there community service opportunities?" This seems like a reasonable question, especially if you're not familiar with the typical EA mindset already.

I've been kind of averse to making community service ... (read more)

2
blainehansen
2y
Yeah I completely agree the disclaimer needs to be carefully worded ha. It feels like the disclaimer should err towards being a prompt about how we could be more impactful in our local community rather than simply stating that this activity isn't very impactful. I'm gravitating toward something like this:
7
Erin Braid
2y
I like this idea and would even go further -- spend as much time on it as people are interested in spending, the decision-making process might prove educational! I can't honestly say I'm excited about the idea of EA groups worldwide marching out to pick up litter. But it seems like a worthwhile experiment for some groups, to get buy-in on the idea of volunteering together, brainstorm volunteering possibilities, decide between them based on impact, and actually go and do it. 

Thanks for taking this over from my way-too-ambitious attempt!

Meh, never mind. I get the feeling that unlike some Internet communities, most people in EA actually have important things to do. I spent a while placing pixels and got burnt out pretty quickly myself :)

2
Gruffydd Gozali
2y
We're making progress! Resources to join are in this forum post: https://forum.effectivealtruism.org/posts/bieiFE5GXxEAo3ptf/ea-logo-and-title-on-reddit-s-r-place
9
Gruffydd Gozali
2y
Yeah I would love to do this too but I feel like we’d need to do it in collaboration with another subreddit/community, I was thinking r/neoliberal as they have quite a few EAs and do charity drives for malaria nets. Want to help me out and spread the word there?

Written hastily; please comment if you'd like further elaboration

I disagree somewhat; if we directly fund critiques, it might be easier to make sure a large portion of the community actually sees them. If we post a critique to the EA Forum under the heading "winners of the EA criticism contest," it'll gain more traction with EAs than if the author just posted it on their personal blog. EA-funded critiques would also be targeted more towards persuading people who already believe in the idea, which may make them better.

While critiques will probably be published anyway, increasing the number of critiques seems good; there ... (read more)

You cited a Gallup poll in which 1 in 25 adults said that high school was the "worst period in their life." You presented this as positive evidence, but it seems to me like a strong point against your thesis.

To illustrate this with a simple model, we can imagine that the average survey respondent is 40 years old and that they split their life into ten 4-year "periods." If the quality of people's lives is about evenly distributed across time, we'd expect high school to be the worst period for 10% of respondents, which is way more than 4%.
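Here's a minimal sketch of that comparison, using only the assumed numbers above (a 40-year-old respondent, 4-year periods, and the 1-in-25 poll figure):

```python
# Back-of-the-envelope model: a 40-year-old's life split into ten 4-year periods.
# If each period were equally likely to be the worst, high school would be the
# worst period for 1 in 10 respondents; the poll reports only 1 in 25.
num_periods = 40 // 4                  # ten 4-year periods
uniform_worst_rate = 1 / num_periods   # 10% under the "evenly distributed" model
observed_worst_rate = 1 / 25           # 4%, the Gallup figure cited above

print(f"expected under uniform model: {uniform_worst_rate:.0%}")
print(f"observed in the poll:         {observed_worst_rate:.0%}")
```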

More imp... (read more)

3
kirchner.jan
2y
That's true, I didn't realize that... But I don't actually think it's incompatible with the rest of the argument.

  1. Even if it's not the worst for everyone, it could still be one of the worst experiences for a lot of people (the 35% describing it as a so-so time in the Gallup poll). (related)
  2. Even if only 1 in 25 had a very bad time in high school, that would mean that their life satisfaction has to be a lot lower than baseline for the average to come out at -0.2. In particular, if 0 × 0.96 + x × 0.04 = -0.2, then x = -5, and we are in extremely severe territory in terms of EQ-5D (see the quick check after this list).
  3. Your argument about nostalgia also appears very strong to me.
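A minimal sketch of the check in point 2, assuming (as stated there) that 96% of respondents sit at a baseline of 0 and 4% at some unknown value x, with an overall average of -0.2:

```python
# Quick check of the weighted-average arithmetic from point 2 above.
baseline_share, bad_share = 0.96, 0.04
baseline_value, target_mean = 0.0, -0.2

# Solve baseline_value * 0.96 + x * 0.04 = -0.2 for x
x = (target_mean - baseline_value * baseline_share) / bad_share
print(x)  # -5.0, far outside the usual EQ-5D index range
```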