Thanks for the article.
Did aspects of the child's wellbeing, expected life satisfaction, life expectancy, etc. enter your considerations?
"Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?"
It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent doing something, having a certain opinion, benefitting from ...
Which global, technological, political etc developments do you currently find most relevant with regards to parenting choices?
If you don't want to justify your claims, that's perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don't act as if it's my "homework" to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like "quasi religious", "I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs", and "just prone to conspiracy theories like QAnon", while at the same time you are unwilling or unable to name any examples of "what experts in the field think about what AI can actually do".
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking are arguments or evidence.
I'd still be grateful if you could post a link to the best argument (according to your own impression) by some well-respected scholar against AGI risk. If there are "loads of arguments", this shouldn't be hard. Somebody asked for something like that here, and there aren't so many convincing answers, and no answers that would basically ...
Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs.
I'd be interested in whether you actually tried that, and whether it's possible to read your arguments somewhere, or whether you just saw superficial similarity between religious beliefs and the AI risk community and therefore decided that you don't want to discuss your counterarguments with anybody.
Talking is a great idea in general, but it seems there are some opinions in this survey suggesting that there are barriers to talking openly?
I think most democratic systems don't work that way: it's not that people vote on every single decision. Democratic systems are usually representative democracies, where people can try to convince others that they would be responsible policymakers, and where these policymakers are then subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just means that a democracy also needs democrats, and that you may first need fundamental decisions about structures.
While I am also worried by Will MacAskill's view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that "this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)".
In my impression, the most influential argument of the camp against the initiative was that factory farming simply doesn't exist in Switzerland. Even if it was only one of the influential arguments rather than the most influential one, I think this speaks volumes about both the (current) debate culture and the limits of how hopeful we should be that relevantly similar EA-inspired policies will soon see widespread implementation.
Is there any empirical research on the motivation of voters (and non-voters) in this referendum? The swissinfo article you mention does not directly u...
"If organizations have bad aims, should we seek to worsen their decision-making?"
That depends on the concrete case you have in mind. Consider the case of supplying your enemy with wrong but seemingly right information during a war. This is a case where you actively try to worsen their decision-making. But even in a war there may be some information you want the enemy to have (like: where is a hospital that should not be targeted). In general, you do not just want to "worsen" an opponent's decision-making, but influence it in a direction that is favorable ...
Yes, I think so! It seems like saying: "all the theoretical arguments for longtermism are extremely important because they imply things not implied by other theories", but when asked for the concrete implications, the answer is: donating to something non-longtermists would also support because it helps people today, while the future effects are probably vague.
The following quotes from the recent 80,000 Hours podcast episode with Will MacAskill seem a weird combination to me:
Time-boxing and to-do lists
Tim Harford is not convinced that it is a good idea to plan activities in advance and allocate them to blocks of calendar time, so-called "time-boxing". Instead, you should prioritize everything and, so as not to let work expand beyond all limits, set deadlines. He refers to a study in which students were supposed to plan their time daily instead of setting rough monthly goals. The daily "plans backfired disastrously: day after day, the daily planners would fall short of their intentions and soon became demotivated, spending less ti...
While 5% is alarming, you should notice that abukeki did not update much because of the crisis (if I understand it correctly), so if your prior was lower to begin with, it should presumably stay lower.
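A toy Bayesian calculation (with made-up numbers, not anyone's actual estimates) of why a modest update preserves the ordering of priors: in odds form, both observers multiply their prior odds by the same likelihood ratio, so the lower prior yields the lower posterior.

```python
# Odds-form Bayes update: posterior_odds = prior_odds * likelihood_ratio.
# Numbers below are purely illustrative.
def update(prior: float, likelihood_ratio: float) -> float:
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 1.2                      # hypothetical weak evidence from the crisis
p_high = update(0.05, lr)     # someone starting at a 5% prior
p_low = update(0.01, lr)      # someone starting at a 1% prior

print(round(p_high, 4))
print(round(p_low, 4))
# The same weak evidence moves both estimates only slightly,
# and the lower prior remains lower after the update.
```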
As this is (probably) central to coordination: is there something like a clear decision-making structure for deciding what "the community" actually wants (i.e., what "pursuing EA goals" means, concretely, in a given situation with trade-offs)? Is there an overview/explanation of this structure?
Your Richland-Poorland example is indeed illustrative, thanks. However, it seems the problem caused by immigration does not only occur when incomes in Richland were equalized before the immigration; it also occurs when people care about the degree of income inequality in their own country. So if Richlanders are free-market fans but dislike domestic inequality, they will want to keep the Poorlanders out.
However, socialism and open borders don't mix well, because once you turn a society into a giant workers' co-op, adding new members always comes at the expense of the current members.
Why should that be the case? The wealth and income of this giant workers' co-op are not fixed, so why shouldn't they scale with the number of members?
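For what it's worth, a toy calculation (with made-up numbers) of the dilution argument the quoted claim seems to rest on: if the co-op shares output equally, a new member can raise total output and still lower each existing member's share, namely whenever their marginal product is below the current average.

```python
# Illustrative numbers only: equal-sharing co-op before and after
# admitting one new member.
members = 100
total_output = 100_000                 # average share: 1000 per member
share_before = total_output / members

new_member_product = 400               # below the current average of 1000
share_after = (total_output + new_member_product) / (members + 1)

print(share_before)   # 1000.0
print(share_after)    # ~994.06: total output rose, per-member share fell
```

Of course, whether scaling helps or hurts incumbents then just depends on whether newcomers' marginal product exceeds the incumbents' average, which is an empirical question rather than something the co-op structure settles by itself.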
However, if journalists just do opinion-writing on their substack, and that kind of journalism becomes dominant, these boundaries may dissolve. That is not necessarily a good thing, though.
"While from an emotional perspective, I care a ton about our kids' wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings' lives through my career."
I find this distinction a bit confusing. After all, every hour spent with your kid is probably "relatively minor" compared to the counterfactual impact of that hour on "many beings' lives". So it seems to me that evaluating personal costs and expected experiences and so on at all only makes sense if the kid's wellbeing is very importa...