Hi Ada, I'm glad you wrote this post! Although what you've written here is pretty different from my own experience with AI safety in many ways, I think I got some sense of your concerns from reading this.
I also read Superintelligence as my first introduction to AI safety, and I remember pretty much buying into the arguments right away. Although I think I understand that modern-day ML systems do dumb things all the time, this intuitively weighs less on my mind than the idea that AI can in principle be much smarter than humans, and that sooner or later this will happen. When I look specifically at the cutting-edge of modern AI tech like GPT-3, I feel like this supports my view pretty strongly, but I don't think I could give you a knockdown explanation for why typical modern AI doing dumb things seems less important; this is just my intuition. Usually, intuitions can be tested by seeing how well they make predictions, but the really inconvenient thing about statements about TAI is that they can never be validated.
As I've talked to people at EAGxBoston and EAG London, I've started to realize that my intuitions seem to be doing a lot of heavy lifting that I don't feel fully able to explain. Ironically, the more I learn about AI safety, the less I feel that I have principled inside views on questions like "what research avenues are the most important" and "what year will transformative AI happen." I've realized that I pretty much just defer to the weighted average opinion of various EA people who I respect. This heuristic is intuitive to me, but it also seems kind of bad.
I feel like if I really knew what I was talking about, I would be able to come up with novel and clever arguments for my beliefs and talk about them with utmost confidence, like Eliezer Yudkowsky with his outspoken conviction that we're all doomed; or I'd have a unique and characteristic view on what we can do to decrease AI risk, like Chris Olah with interpretability. Instead, I just have a bunch of intuitions, which to the extent they can be put into words, just boil down to silly-sounding things like, "GPT-3 seems really impressive, and AlexNet happened just 10 years ago and was less impressive. 'An AI that can do competent AI research' is really, really impressive, so maybe that will happen in... eh, I want to be conservative, so 20 years?"
Based on your post, I'm guessing maybe you have a similar perspective, but are coming at it from the opposite direction: you have intuitions that AI is not so big of a deal, but aren't really sure of the reasons for your views. Does that seem accurate?
Maybe my best-guess takeaway for now is that a lot of the difference between people who disagree about speculative things like this comes down to differing priors, which might not be based in specific, articulable, and concrete arguments. For instance, maybe I'm optimistic about the value of space colonization because I read The Long Way to a Small Angry Planet, which presents a vision of a utopian interspecies galactic civilization that appeals to me, but doesn't make logical arguments for how it would work. Maybe I think that a sufficient amount of intelligence will be able to do really crazy things because I spent a lot of time as a kid trying to prove to people that I was smart, and it's important to my identity. Or maybe I just believe these things because they're correct. I'm not sure I can tell.
I believe that as a community, we should really try to encourage a wide range of intuitions (as long as those intuitions haven't clearly been invalidated by evidence). The value of diverse perspectives in EA isn't a new idea, but if it's true that priors do a lot of the work in whether people believe speculative arguments, it could be all the more important. Otherwise, there could be a strong self-selection effect for people who find EA's current speculations intuitive, since people who don't have articulable reasons for disagreement won't have much with which to defend their beliefs, even if their priors are in fact well-founded.
The claim that simulating all of physics would be “more easily implementable” than a standard friendly AI does seem pretty ridiculous to me now, though I'm not sure it accurately reflects his original point? I think the argument had more to do with considering counterfactuals rather than actually carrying out a simulation. I would still agree that this is pretty weird and abstract, though I don't think this point is that relevant anyway.
Thanks for the post, I think this does a good job of exploring the main reasons for/against community service.
I've heard this idea thrown around in community building spaces, and it definitely comes up quite often when recruiting. That is, people often ask, "you do all these discussion groups and dinners, but how do you actually help people directly? Aren't there community service opportunities?" This seems like a reasonable question, especially if you're not familiar with the typical EA mindset already.
I've been kind of averse to making community service a part of my EA group, mostly for fear of muddling our messaging. However, I think this is at least worth considering. Prefacing each community service session with a sort of "disclaimer" as you're describing sounds like a step in the right direction, though it also may set a weird tone if you're not careful. "You might feel warm and fuzzy feelings while doing this work, but please keep in mind that the work itself has a practically negligible expected impact on the world compared to a high-impact career. We're only doing this to bond as a community and reinforce our values. Now, let's get to work!"
I'd be very interested to see a post presenting past research on how community service and other "warm-fuzzy activities" can improve people's empathy and motivation to do good, particularly applying it to the context of EA. Although it seems somewhat intuitive, I'm very uncertain about how potent this effect actually is.
Maybe the process of choosing a community service project could be a good exercise in EA principles (as long as you don't spend too long on it)? "Given the constraint that the project must be community service in our area, what are the most effective ways to do good, and why?"
Service once every two weeks intuitively seems like a lot on top of all the typical EA activities a group does. I can imagine myself doing this once a month or less. If you have many active members in your group and expect each member to only go to every other service event on average, this could make more sense.
Thanks for taking this over from my way-too-ambitious attempt!
Meh, never mind. I get the feeling that unlike some Internet communities, most people in EA actually have important things to do. I spent a while placing pixels and got burnt out pretty quickly myself :)
Written hastily; please comment if you'd like further elaboration
I disagree somewhat; if we directly fund critiques, it might be easier to make sure a large portion of the community actually sees them. If we post a critique to the EA Forum under the heading "winners of the EA criticism contest," it'll gain more traction with EAs than if the author just posted it on their personal blog. EA-funded critiques would also be targeted more towards persuading people who already believe in the ideas being critiqued, which may make them better.
While critiques will probably be published anyway, increasing the number of critiques seems good; there may be many people who have insights into problems in EA but wouldn't have published them due to lack of motivation or an unargumentative nature.
Holding such a contest may also send a useful signal to people in and outside the EA community, and hopefully promote a genuine culture of open-mindedness.
You cited a Gallup poll that said that 1 in 25 adults said that high school was the "worst period in their life." You presented this as positive evidence, but this seems to me like a strong point against your thesis.
To illustrate this with a simple model, we can imagine that the average survey respondent is 40 years old and splits their life into ten 4-year "periods." If the worst period of a person's life were equally likely to fall in any of these, we'd expect high school to be the worst period for 10% of respondents, which is way more than 4%.
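For concreteness, here's a minimal sketch of that back-of-envelope model (the 40-year-old respondent and 4-year periods are the illustrative assumptions from above, not figures from the poll itself):

```python
# Back-of-envelope model: a hypothetical 40-year-old respondent splits
# their life into 4-year "periods". If the worst period were equally
# likely to fall in any of them, the chance it lands on the high-school
# period is 1 / num_periods.
age = 40
period_length = 4
num_periods = age // period_length      # 10 periods
uniform_worst_rate = 1 / num_periods    # 0.10, i.e. 10% under the uniform model
gallup_worst_rate = 1 / 25              # 0.04, the poll's "1 in 25" figure

print(f"Uniform-model baseline: {uniform_worst_rate:.0%}")
print(f"Gallup poll figure:     {gallup_worst_rate:.0%}")
```

So the poll's 4% sits well below the 10% baseline, which is why the statistic reads as evidence that high school is, if anything, unusually unlikely to be the worst period.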
More importantly, 54% of respondents to that same survey said that high school was a great period in their life, and 7% said it was the best period. This makes me skeptical of the rest of your argument.
To steelman your thesis against this contradictory evidence: it seems reasonably likely that people look back on their past with nostalgia, biasing them towards believing it was better than it actually was.