
CBiddulph

141 karma · Joined Jul 2021 · Mountain View, CA, USA

Participation
6

  • Completed the AGI Safety Fundamentals Virtual Program
  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours
  • Attended an EAGx conference
  • Completed the Introductory EA Virtual Program

Comments
11

Sounds like there are four distinct kinds of actions we're talking about here:

  1. Bringing about positive lives
  2. Bringing about negative lives
  3. Preventing positive lives
  4. Preventing negative lives

I think I was previously only considering the "positive/negative" aspect, and ignoring the "bringing about/preventing" aspect.

So now I believe you'd consider 3 and 4 to be neutral, and 2 to be negative, which seems fair enough to me.

> Why would my thinking that actions like using birth control are morally neutral imply that I should also think that having children is morally neutral?

Aren't you implying here that you think having children is not morally neutral, and so you would consider 1 to be positive? Wouldn't 1 best represent existential risk reduction - increasing the chances that happy people get to exist? It sounds like your argument would support x-risk reduction if anything.

I think Richard was trying to make the point that

  • You believe that actions that bring about or prevent the existence of future people have no moral valence
  • Therefore, you believe that an action that brings about suffering lives is also morally neutral
  • Therefore, you would take any small positive moral trade (like getting a lollipop) in exchange for bringing about an arbitrarily large number of suffering lives

If I'm not misinterpreting what you've said, it sounds like you'd be willing to bite this bullet?

Maybe it's true that you won't actually be able to make these choices, but we're talking about thought experiments, where implausible things happen all the time.

Cool!

> A philosopher shares his perspective on what we should do now to ensure that civilization would rebound if it collapsed.

The summary seems pretty reductive - I think most of the book is about other things, like making sure civilization doesn't collapse at all or preventing negative moral lock-in. I wonder how they chose it.

It's probably largely for historical reasons: the first real piece of "rational fiction" was Harry Potter and the Methods of Rationality by Eliezer Yudkowsky, and many other authors followed in that general vein.

Also, it can be fun to take an existing work with a world that wasn't very thoroughly examined and "rationalize" it by explaining plot holes and letting characters exploit the rules.

Hi Ada, I'm glad you wrote this post! Although what you've written here is pretty different from my own experience with AI safety in many ways, I think I got some sense of your concerns from reading this.

I also read Superintelligence as my first introduction to AI safety, and I remember pretty much buying into the arguments right away.[1] Although I think I understand that modern-day ML systems do dumb things all the time, this intuitively weighs less on my mind than the idea that AI can in principle be much smarter than humans, and that sooner or later it will be. When I look at the cutting edge of modern AI tech like GPT-3, I feel like it supports my view pretty strongly, but I don't think I could give you a knockdown explanation for why typical modern AI doing dumb things seems less important; this is just my intuition. Usually, intuitions can be tested by seeing how well their predictions hold up, but the really inconvenient thing about claims about TAI is that they can't be validated ahead of time.

As I've talked to people at EAGxBoston and EAG London, I've started to realize that my intuitions seem to be doing a lot of heavy lifting that I don't feel fully able to explain. Ironically, the more I learn about AI safety, the less I feel that I have principled inside views on questions like "what research avenues are the most important" and "what year will transformative AI happen." I've realized that I pretty much just defer to the weighted average opinion of various EA people who I respect. This heuristic is intuitive to me, but it also seems kind of bad.

I feel like if I really knew what I was talking about, I would be able to come up with novel and clever arguments for my beliefs and talk about them with utmost confidence, like Eliezer Yudkowsky with his outspoken conviction that we're all doomed; or I'd have a unique and characteristic view on what we can do to decrease AI risk, like Chris Olah with interpretability. Instead, I just have a bunch of intuitions, which, to the extent they can be put into words, boil down to silly-sounding things like, "GPT-3 seems really impressive, and AlexNet happened just 10 years ago and was less impressive. 'An AI that can do competent AI research' is really, really impressive, so maybe that will happen in... eh, I want to be conservative, so 20 years?"

Based on your post, I'm guessing maybe you have a similar perspective, but are coming at it from the opposite direction: you have intuitions that AI is not so big of a deal, but aren't really sure of the reasons for your views. Does that seem accurate?

Maybe my best-guess takeaway for now is that a lot of the disagreement between people about speculative things like this comes down to differing priors, which might not be based on specific, articulable, concrete arguments. For instance, maybe I'm optimistic about the value of space colonization because I read The Long Way to a Small Angry Planet, which presents a vision of a utopian interspecies galactic civilization that appeals to me, but doesn't make logical arguments for how it would work. Maybe I think that a sufficient amount of intelligence will be able to do really crazy things because I spent a lot of time as a kid trying to prove to people that I was smart and it's important to my identity. Or maybe I just believe these things because they're correct. I'm not sure I can tell.

I believe that as a community, we should really try to encourage a wide range of intuitions (as long as those intuitions haven't clearly been invalidated by evidence). The value of diverse perspectives in EA isn't a new idea, but if it's true that priors do a lot of the work in determining whether people believe speculative arguments, it could be all the more important. Otherwise, there could be a strong self-selection effect toward people who find EA's current speculations intuitive, since people who don't have articulable reasons for disagreement won't have much with which to defend their beliefs, even if their priors are in fact well-founded.

  1. ^

    The claim that simulating all of physics would be “more easily implementable” than a standard friendly AI does seem pretty ridiculous to me now, though I'm not sure it accurately reflects his original point? I think the argument had more to do with considering counterfactuals rather than actually carrying out a simulation. I would still agree that this is pretty weird and abstract, though I don't think this point is that relevant anyway.

Thanks for the post, I think this does a good job of exploring the main reasons for/against community service.

I've heard this idea thrown around in community building spaces, and it definitely comes up quite often when recruiting. That is, people often ask, "you do all these discussion groups and dinners, but how do you actually help people directly? Aren't there community service opportunities?" This seems like a reasonable question, especially if you're not familiar with the typical EA mindset already.

I've been kind of averse to making community service a part of my EA group, mostly for fear of muddling our messaging. However, I think this is at least worth considering. Prefacing each community service session with a sort of "disclaimer" as you're describing sounds like a step in the right direction, though it also may set a weird tone if you're not careful. "You might feel warm and fuzzy feelings while doing this work, but please keep in mind that the work itself has a practically negligible expected impact on the world compared to a high-impact career. We're only doing this to bond as a community and reinforce our values. Now, let's get to work!"

I'd be very interested to see a post presenting past research on how community service and other "warm-fuzzy activities" can improve people's empathy and motivation to do good, particularly applying it to the context of EA. Although it seems somewhat intuitive, I'm very uncertain about how potent this effect actually is.

Maybe the process of choosing a community service project could be a good exercise in EA principles (as long as you don't spend too long on it)? "Given the constraint that they must be community service in our area, what are the most effective ways to do good and why?"

Service once every two weeks intuitively seems like a lot on top of all the typical EA activities a group does. I can imagine myself doing this once a month or less. If you have many active members in your group and expect each member to only go to every other service event on average, this could make more sense.

Thanks for taking this over from my way-too-ambitious attempt!

Meh, never mind. I get the feeling that unlike some Internet communities, most people in EA actually have important things to do. I spent a while placing pixels and got burnt out pretty quickly myself :)

Written hastily; please comment if you'd like further elaboration

I disagree somewhat; if we directly fund critiques, it might be easier to make sure a large portion of the community actually sees them. If we post a critique to the EA Forum under the heading "winners of the EA criticism contest," it'll gain more traction with EAs than if the author just posted it on their personal blog. EA-funded critiques would also be targeted more towards persuading people who already believe in the ideas being critiqued, which may make the critiques more effective.

While critiques will probably be published anyway, increasing the number of critiques seems good; there may be many people who have insights into problems in EA but wouldn't have published them due to lack of motivation or an unargumentative nature.

Holding such a contest may also send a useful signal to people inside and outside the EA community and hopefully promote a genuine culture of open-mindedness.
