All of Achim's Comments + Replies

"While from an emotional perspective, I care a ton about our kids' wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings' lives through my career."

I find this distinction a bit confusing. After all, every hour spent with your kid is probably "relatively minor" compared to the counterfactual impact of that hour on "many beings' lives". So it seems to me that your evaluating personal costs and expected experiences and so on at all only makes sense if the kid's wellbeing is very importa...

Thanks for the article.

Did aspects of the child's wellbeing, expected life satisfaction, life expectancy, etc. enter your considerations?

1 · KidsOrNoKids · 5mo
Thanks for the question! While from an emotional perspective, I care a ton about our kids' wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings' lives through my career. Thus, we looked at the child's wellbeing on a very high level - guessing that our children have good chances at a net positive life because they will likely grow up with lots of resources and a good social environment. The one aspect we weren't confident enough to just eyeball was whether a lot of nanny care would be bad for them, hence we did some research on that (under "Child health").

Thanks! I read it, it's an interesting post, but it's not "about reasons for his AI skepticism". Browsing the blog, I assume I should read this?

Which of David's posts would you recommend as a particularly good example and starting point?

3 · mhendric · 8mo
Depends entirely on your interests! They are sorted thematically: https://ineffectivealtruismblog.com/post-series/. Specific recommendations if your interests overlap with Aaron_mai's: 1(a), on a tension between thinking X-risks are likely and thinking reducing X-risks has astronomical value; 1(b), on the expected value calculation in X-risk; 6(a), as a critical review of the Carlsmith report on AI risk.
8 · JWS · 8mo
Imo it would be his Existential Risk Pessimism and the Time of Perils series (it's based on a GPI paper of his that he also links to). Clearly written, well-argued, and up there amongst both his best work and, I think, one of the better criticisms of xRisk/longtermist EA that I've seen. I think he's pointed out a fundamental tension in utilitarian calculus here, and pointed out the additional assumption that xRisk-focused EAs have to make for this to work - "the time of perils" - but I think he plausibly argues that this assumption is more difficult to argue for than the initial two (Existential Risk Pessimism and the Astronomical Value Thesis).[1] I think it's a rich vein of criticism that I'd like to see more xRisk-inclined EAs respond to further (myself included!)

1. ^ I don't want to spell the whole thing out here, go read those posts :)

"Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?"

It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent doing something, having a certain opinion, benefitting from ...

Which global, technological, political etc. developments do you currently find most relevant with regard to parenting choices?

If you don't want to justify your claims, that's perfectly fine, no one is forcing you to discuss in this forum. But if you do, please don't act as if it's my "homework" to back up your claims with sources and examples. I also find it inappropriate that you throw around many accusations like "quasi religious", "I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs", "just prone to conspiracy theories like QAnon", while at the same time you are unwilling or unable to name any examples of "what experts in the field think about what AI can actually do".

There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking are arguments or evidence.


I'd still be grateful if you could post a link to the best argument (according to your own impression) by some well-respected scholar against AGI risk. If there are "loads of arguments", this shouldn't be hard. Somebody asked for something like that here, and there aren't many convincing answers, and no answers that would basically ...

2 · [anonymous] · 1y
Here are a couple of links: "What does it mean to align AI with human values?" and "The implausibility of intelligence explosion".
-5 · supesanon · 1y

Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs. 


I'd be interested in whether you actually tried that, and whether it's possible to read your arguments somewhere, or whether you just saw a superficial similarity between religious beliefs and the beliefs of the AI risk community and therefore decided that you don't want to discuss your counterarguments with anybody.

4 · supesanon · 1y
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking are arguments or evidence. I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives? Why not just go look for differing perspectives yourself? This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs (I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).

I witnessed this lack of curiosity in my own cohort that completed AGISF. We had more questions than answers at the end of the course and never really settled anything during our meetings other than minor definitions here and there, but despite that, some of the folks in my cohort went on to work or try to work on AI safety and solicit funding without either learning more about AI itself (some of them didn't have much of a technical background) or trying to clarify their confusion and understanding of the arguments. I also know another fellow from the same run of AGISF who got funding as an AI safety researcher when they knew so little about how AI actually works.

They are all very nice amicable people, and despite all the conversations I've had with them they don't seem open to the idea of changing their beliefs even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs other than in religious or other superstitious contexts? Well the other case I can think of is when having a certain belief is tied to having an income, reputation or som...

Talking is a great idea in general, but some opinions in this survey seem to suggest that there are barriers to talking openly?

1 · mako yass · 1y
I believe the forum allows commenting anonymously, though I wouldn't know how to access that feature. Pseudonyms would be a bit better, but it'll do.

I think most democratic systems don't work that way - it's not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.

While I am also worried by Will MacAskill's view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that "this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)".

In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn't exist in Switzerland. Even if it was only one of the influential arguments rather than the most influential one, I think this speaks volumes about both the (current) debate culture and the limits of how hopeful we should be that relevantly similar EA-inspired policies will soon see widespread implementation.


Is there any empirical research on the motivation of voters (and non-voters) in this referendum? The swissinfo article you mention does not directly u...

"If organizations have bad aims, should we seek to worsen their decision-making?"

That depends on the concrete case you have in mind. Consider the case of supplying your enemy with wrong but seemingly right information during a war. This is a case where you actively try to worsen their decision-making. But even in a war there may be some information you want the enemy to have (like: where is a hospital that should not be targeted). In general, you do not just want to "worsen" an opponent's decision-making, but influence it in a direction that is favorable ...

Yes, I think so! It seems like saying: "all the theoretical arguments for long-termism are extremely important because they imply things not implied by other theories", but when asked for the concrete implications, the answer is: donating for something non-longtermists would like because it helps people today, while the future effects are probably vague.

The following quotes from the current Will MacAskill episode of the 80,000 Hours podcast seem a weird combination to me:

  • "I really don’t know the point at which the arguments for longtermism just stop working because we’ve just used up all of the best targeted opportunities for making the long term go well, such that there’s just no difference between a longtermist argument and just an argument that’s about building a flourishing society in general. Maybe you hit that at 50%, maybe it’s 10%, maybe it’s even 1%. I don’t really know. But given what the world curre...
1 · [anonymous] · 2y
Is it because the second quote is saying EA is about doing the very best thing, and what's best for the long term is probably not what's best for the short term, while the third quote is saying funding a very broad, non-lethal health intervention is justifiable from a longtermist basis? 

Time-boxing and to-do lists

Tim Harford is not convinced that it is a good idea to plan activities in advance and allocate them to blocks of calendar time, so-called "timeboxing". Instead, you should prioritize everything and, so as not to let work expand beyond all limits, set deadlines. He refers to a study where students were supposed to plan their time daily instead of setting rough monthly goals. The daily "plans backfired disastrously: day after day, the daily planners would fall short of their intentions and soon became demotivated, spending less ti...

While 5% is alarming, you should notice that abukeki did not update much because of the crisis (if I understand it correctly), and so if your prior is lower, then it should possibly stay lower.

As this is (probably) central to coordination: is there something like a clear decision-making structure to decide what "the community" actually wants (i.e., what "pursuing EA goals" means, concretely, in a given situation if there are trade-offs)? Is there an overview/explanation of this structure?

Your Richland-Poorland example is indeed illustrative, thanks. However, it seems the problem caused by immigration does not only occur when incomes in Richland were equalized before the immigration; rather, it also occurs when people care about the degree of income inequality in their own country. So if Richlanders are free-market fans but do not like domestic inequality, they will want to keep the Poorlanders out.

However, socialism and open borders don't mix well, because once you turn a society into a giant workers' co-op, adding new members always comes at the expense of the current members. 

Why should that be the case? Wealth and income of this giant workers' co-op are not fixed, and why shouldn't they scale with the number of members?

5 · Jason Brennan · 3y
Let's say you have a 10 person workers' co-op which shares income equally. Each person now gets paid 1/10th of the firm's profit. Thanks to diminishing marginal returns, if you add an 11th worker who is otherwise identical, they will contribute gross revenue/have a marginal product of labor that is less than the previous added worker's. When you divide the income by 11, everyone will make less. This is a well-known problem in the econ lit. Of course, in real life, workers are not homogeneous, but the point remains that in general you get diminishing returns by adding workers.

As a toy illustration, suppose that there are two countries, Richland and Poorland. Everyone in Richland makes $100,000/year. Everyone in Poorland makes $2,000/year. Suppose, however, that if half of the Poorlanders move to Richland, their income will go up by a factor of 15, while domestic Richlanders' income will increase by 10%. Thus, imagine that after mass immigration, Richland has 50,000 Poorland immigrants now making $30,000/year, plus its 100,000 native workers now each make $110,000 a year. From a humanitarian and egalitarian standpoint, this is wonderful. Further, this isn't merely a toy example; these are the kind of income effects we actually see with immigration in capitalist economies.

But this same miraculous growth looks far less sexy when it occurs in a democratic socialist society with equalized incomes. Imagine that democratic socialist Richland is considering whether to allow 100,000 Poorlanders to immigrate. Imagine they recognize that Poorlander immigrants will each directly contribute about $30,000 a year to the Richland economy, and further, thanks to complementarity effects, will induce the domestic Richlanders to contribute $110,000 rather than $100,000. But here the Richlanders might yet want to keep the Poorlanders out. After all, when they equalize income ((100,000 × $30,000 + 100,000 × $110,000) / 200,000), average incomes fall to $70,000. Once we require equality, the Richl...
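For concreteness, here is the equalization arithmetic from the example above as a minimal Python sketch; all figures are the comment's illustrative assumptions, not empirical data:

```python
# Minimal sketch of the income-equalization arithmetic in the example above.
# All figures are the comment's illustrative assumptions, not real data.
natives = 100_000      # Richland workers, each contributing $110,000/yr after immigration
immigrants = 100_000   # Poorland immigrants, each contributing $30,000/yr

total_income = natives * 110_000 + immigrants * 30_000
equalized_income = total_income / (natives + immigrants)
print(equalized_income)  # 70000.0 -- below the natives' pre-immigration $100,000
```

Under unequal incomes everyone gains from the migration; under mandatory equalization the natives' income falls from $100,000 to $70,000, which is why the equal-sharing co-op has an incentive to exclude newcomers.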

However, if journalists just do opinion-writing on their Substack, and that kind of journalism becomes dominant, these boundaries may dissolve. That is not necessarily a good thing, though.

This is really interesting. Thanks for the report.

Is the topic of arms trade that he mentions considered in the EA community?

How do we define a grand coalition?

At what point does a school actually count as closed?