aka G Gordon Worley III
I can't speak for everyone associated with integral altruism, but at least for myself I don't see it as about avoiding tradeoffs so much as about making tradeoffs against a wider set of considerations that EAs often leave out. For example, I'm generally more willing to evaluate interventions in non-consequentialist terms, since looking only at the consequentialist framing, as many EAs do, can lead to classic right-magnitude-wrong-direction errors that would be easily caught by a deontological or virtue ethics frame.
But in practice I expect int/a to make its own errors that will need correcting. As someone I know once put it: EA is fundamentally Protestant, int/a is fundamentally Buddhist. Both want to do good in the world, but each has a different view of what that world is.
That post on deliberative alignment seems to be just about one method by which we might build aligned AIs, not about the idea of moral alignment in general.
I'm probably less skeptical than you are because I take as evidence the fact that we align humans to moral value systems all the time. And although we don't do it perfectly, there are some very virtuous folks out there who take their morals seriously. So I think alignment to some system of morality is certainly possible.
Whether or not we can figure out which moral judgements are "right" is another matter, although perhaps we can at least build AI that is aligned with universally recognized norms like "don't murder" and "save lives".
I have to admit, this is one of those ideas that's like "wow, how in all my years of thinking about AI safety have I not thought about this?" beyond "humans care about other beings, so AI will care if humans care". It's so obvious and important in hindsight that I'm a bit ashamed it was a blindspot. Many thanks for pointing it out!
Yes. It's hard to find people who are poorer because of automation once we smooth over short-term losses.
What's easier to find is people who felt poorer because they lost status, even though they actually had more purchasing power and could afford more and better goods. They weren't economically poorer; they only felt poorer because other people got richer faster than they did.
I guess it depends on what kind of regulation you're thinking of.
While it's true that the US and EU value individual liberty highly, these countries are also quite motivated to regulate arms to maintain their technological lead over other countries, for example by regulating the export of cyber, nuclear, and conventional weapons and putting restrictions on who can be part of their supply chains. Smaller countries have been more willing to treat other countries as equals when it comes to arms, and not worry about the possibility of attack, since they feel little threat from each other if they don't share borders, whereas the US and EU have global concerns.
Based on this, I expect the US and EU to be more likely to engage in the type of regulation that is relevant for controlling and limiting the development of TAI that poses a potential threat to humans, though you're right to point out that countries like China are more likely to impose regulations to control the near-term social harms of AI, whereas the US and EU are more likely to take a hands-off approach there.
So tl;dr: the US and EU will impose regulations where it matters to slow down the acceleration of progress so they can maintain control, but other countries might care more about social regulation that's comparatively less relevant for time to TAI.
Mostly seems like a good thing to me. The more chips needed to build AI are dependent on supply chains that run through countries that are amenable to regulating AI, the safer we are. To that end, routing as much of the needed chip supply chain through the US and EU seems most likely to create conditions where, if we impose regulations on AI, it will take years to build up supply chains that could circumvent those regulations.
Personally I downvoted this post for a few reasons:
To me this reads like content written only with an audience of folks working at CEA or similar orgs in mind, but posted publicly. So I downvoted because it doesn't seem worth a lot of people reading: it's unclear what value it offers them. This isn't to say the intended message isn't worthwhile, only that the presentation in this particular post is insufficient.
I'd very much like to read a post providing evidence that there were many instances of sexual assault within the community if that's the case, especially if it's above the baseline of the surrounding context (whether that be people of similar backgrounds, living in similar places, etc.). And if CEA has engaged in misconduct I'd like to know about that, too. But I can't make any updates based on this post because it doesn't provide enough evidence to do so.
This is a short note to advise you against something called CouponBirds.
I don't know much about them other than they're creating lots of spam in a bid to soak up folks who are bummed about the loss of Amazon Smile. They've sent me emails and posted spammy comments on posts both here and on Less Wrong (I report them each time; they keep creating new accounts to post).
If you were thinking of using them, I encourage you not to, because we should not support those who spam if we want to live in a world with less spam.
I think it more often goes the other way, in that there are interventions that look good to EAs but look less good to int/a. For example, I'm relatively negative on unconditional cash transfers, and I think most of the evidence showing they work is too narrowly scoped: it fails to consider what happens to a society that is nicer only because of handouts while failing to build the self-sustaining economic engine needed for the niceness to persist. I know some such programs are aware of this problem and try to address it, but it still leaves me feeling like there might be better solutions.
I guess on the other side I'd say EA is by default too negative on arts charities. I'm not saying that your typical arts charity is effective, but I am saying I think it'd be a mistake if we reallocated all arts funding to top GiveWell charities, as access to museums is worth something even if it's hard to quantify against human lives (perhaps more generally, I think not all goods are actually as fungible as the typical EA assumes).