Object level point:"I don't have a good inside view on timelines, but when EY says our probability of survival is ~0% this seems like an extraordinary claim that doesn't seem to be very well supported or argued for, and something I intuitively want to reject outright, but don't have the object level expertise to meaningfully do so. I don't know the extent to which EY's views are representative or highly influential in current AI safety efforts, and I can imagine a world where there's too much deferring going on. It seems like some within the community have similar thoughts."
EY's view that doom is basically certain is fairly marginal. It is definitely part of the conversation, and he certainly is not the only person who holds it. But most people who are actively working on AI safety see the odds of survival as much higher than roughly 0% -- and I think most people put P(doom) much lower than 80%.
The key motivating argument for AI safety being important, even if you think that EY's model of the world might be false (though it also might be true), is that while it is easy to come up with plausible reasons to think that P(doom) is much less than 1, it is very hard to dismiss enough of the arguments for it to get P(doom) close to zero.
I think I'm already pretty familiar with thinking around this. What I don't know is if there is any way to get people who have different intuitions around these questions to converge or to switch intuitions.
So I'm pro-natalist in part because I see potential people who do not exist, but who might someday exist, as being the sort of people whom I can either help (by increasing their odds of someday existing and having a good life, or decreasing their odds of existing and having a bad life) or harm (by doing the opposite).
At a deep level this describes my feelings when I imagine the nearly infinite number of potential humans, when I imagine what my state was before I was conceived, and when I think about how happy I am to be alive, and how grateful I am that I got the chance to exist, when it easily could have been someone else, or when humanity easily could have failed to evolve at all.
So I very, very much intuitively feel like if I bring someone into existence who will have a good life, I just did something very nice for them. If I make it so that they don't come into existence, I did something extremely unkind to them.
And this intuition connects to all sorts of other identities and feelings I have, decisions I make, things I wish I had or could do, etc. As closely as I can tell it is deeply embedded in me.
It possibly has to do with the fact that I was homeschooled, so I never got bullied in school, and that I am thirty-eight, and a couple of weeks ago I had some nasty mouth ulcers, and I realized that this was the most physically unpleasant thing I've ever gone through. What I'm saying is, I haven't ever actually suffered, and this feeds into my intuitions about the goodness of life.
But ultimately: I am pronatalist because I care about people who do not exist, and who therefore cannot either suffer or feel happiness. I am pronatalist because I think that it is possible to do something beneficial to individuals who do not currently exist, and who might never exist. It is not because I don't understand that they don't exist.
I could be wrong, but I'm pretty sure that most people who adopt a sort of pure longtermist utilitarianism already understand your argument here, but have different intuitions about it.
He should recognize that his autism makes him an idiot about PR things (after I recognized the sort of errors my own mind makes in reading his apology email, I, a non-expert with an Asperger's diagnosis, outside-diagnosed him), and before making any future public announcements he should get several people who are 'woke', or whatever the right word to describe them is, to read them first.
He also should introspect about the thing in his brain that made him feel like it was really, really important to be precise about what he thought about racism and eugenics in this apology, and he should recognize that sometimes it is not the time to say anything.
I mean, he made errors of judgement, both 25 years ago and last week. The one last week was actually a bigger error of judgement in my view, since he should have taken into account that he is currently in a position of public responsibility.
However the 'introspection' I want Bostrom to engage in is fundamentally different in kind from the 'introspection' that I think David wanted him to engage in.
But I also want to say here aloud: Bostrom is fine. He has no need at any point in this to engage in sincere repentance, introspection, or remorse. He is not a bad person, and I would be happy to associate with him. He has shown no signs of factual views that are empirically untenable, and he has shown no sign of moral views that involve not valuing the well-being of everyone in an appropriate and equal manner, no matter who they are or where they came from.
He made a mistake in terms of communication and said something offensive twenty-five years ago, which he understands was a mistake to say. But that mistake was one of judgement, not of fundamental moral character.
You do not repent for making a mistake of judgement; you apologize for being dumb and move on.
There is nothing in this that indicates poor moral character or views that I find reprehensible in Bostrom. I do not view him as a sinner in need of repentance.
Further, expecting those who have sinned to sincerely introspect and to sincerely repent is the sort of thing that religious fanatics and other sorts of bad people ask people to do.
That is my honest view. It is my honest view that David Mears is suggesting we create a community culture that is fundamentally designed to enforce conformity and prevent truth-seeking. And I think that just as those who think discussion about race, genetics, and intelligence should be allowed to happen somewhere (though that place definitely should not be the EA forum) need to ask themselves 'is what I am thinking similar in some important way to what Nazis thought' and 'might allowing these conversations lead somewhere bad and unfairly exclude people', those who want to demand this sort of conformist policy should ask themselves whether it is similar to the sort of thought control that has been exerted by ideologically motivated villains throughout history, and whether this sort of policy might also lead to very bad places.
Then go for it.
Come up with a detailed proposal, describe exactly how it would work, convince people to give you funding to run the experiment, and then report back and tell us how it went.
The default assumption always is that doing everything differently won't work very well. It doesn't matter what the precise change is. So skepticism is the correct attitude until it is proven that the change can work.
It is a good idea though for the people who are enthused about this idea to follow their passion, and build and test concrete proposals. Go forth and try to make the world better.
Yeah, this is why earning to give needs to come back as a central career recommendation.
I think it is that the people who actually donate money (and especially the people who have seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum.
On which topic, I really, really should go back to mostly being a lurker.
Yeah, I agree, there is a good reason they exist.
I don't think they are unreasonable either as individuals or in essays and conversations.
Further they are trying to do things to change the world in ways that we both agree would make it a better place. Possibly the movement is strongly net positive for the world.
But they also make people who are emotionally obsessed with the truth content of the things they say and believe feel excluded and unwelcome.