Thanks! I read it, and it's an interesting post, but it's not "about reasons for his AI skepticism". Browsing the blog, I assume I should read this?
Which of David's posts would you recommend as a particularly good example and starting point?
"Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?"
It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent doing something, holding a certain opinion, benefitting from something, etc. For example, you write "They may be interested in community input for their funding, via regranting for example, or invest in the Community". If you can't define the community, you cannot clearly say that someone invested in it. You later speak of "managing the relationship between the community and its most generous funder", but it seems hard to say how this relationship is currently managed if the community is so hard to define.
Which global, technological, political, and other developments do you currently find most relevant with regard to parenting choices?
If you don't want to justify your claims, that's perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don't act as if it's my "homework" to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like "quasi religious", "I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs", and "just prone to conspiracy theories like QAnon", while at the same time being unwilling or unable to name any examples of "what experts in the field think about what AI can actually do".
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking are arguments or evidence.
I'd still be grateful if you could post a link to the best argument (according to your own impression) by some well-respected scholar against AGI risk. If there are "loads of arguments", this shouldn't be hard. Somebody asked for something like that here, and there weren't many convincing answers, and none that would settle the cause area comprehensively and authoritatively.
I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives?
I think so - see footnote 2 of the LessWrong post linked above.
Why not just go look for differing perspectives yourself?
Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.
This is a known human characteristic, if someone really wants to believe in something they can believe it even to their own detriment and will not seek out information that may contradict with their beliefs
That this "known human characteristic" strongly applies to people working on AI safety is, up to now, nothing more than a claim.
(I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).
I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the 'pandemic ideology' of the incredulity of their beliefs.
It therefore only seems helpful to debate in this style when you have already formed a strong opinion as to which side is right; otherwise you can always just claim that the other side's reasoning is motivated by religion/ideology/etc., which amounts to Bulverism.
I witnessed this lack of curiosity in my own cohort that completed AGISF. ... They are all very nice amicable people and despite all the conversations I've had with them they don't seem open to the idea of changing their beliefs even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs other than in religious or other superstitious contexts? Well the other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person.
I don't work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don't have to worry about things. So I can guarantee that I'd be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.
Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs.
I'd be interested in whether you actually tried that, and whether it's possible to read your arguments somewhere, or whether you just saw superficial similarity between religious beliefs and the AI risk community and therefore decided that you don't want to discuss your counterarguments with anybody.
Talking is a great idea in general, but some opinions in this survey seem to suggest that there are barriers to talking openly?
I think most democratic systems don't work that way - it's not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.
While I am also worried by Will MacAskill's view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that "this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)".