Okay, hear me out: we already have human-level, generally intelligent, human-aligned agents by the billions. And making new ones artificially seems to be really hard, according to lots of alignment researchers. So why haven't I heard or read about projects trying to improve human intelligence? I'm not saying to do it in order to solve the alignment of purely artificial intelligences (although that could be a possibility too), but to the point of getting a hybrid (natural + artificial) superintelligence, or at least something more useful than AIs.
I know this raises the question of what such a superintelligence would do, or how having it aligned would help us avoid making misaligned ones, and although there are possible good answers to that, those questions already exist outside of "normal" alignment.
The closest thing to this that I've heard of is Elon Musk talking about Neuralink, but there is a hugeee difference between the things Elon mentions as aspirations (in particular, merging with an AI) and the product the company is making right now. I don't see how Brain-Computer Interfaces would improve our decision-making that much. A lot of people bring them up when they talk about augmenting intelligence, and of course they could be useful, but again: I'm not talking about using them to solve AI Alignment, but to get around it. I'm wondering whether we can find a way of scaling human intelligence the same way we scale artificial intelligence.
I found a post that briefly mentions ideas similar to mine, but under the term BCI, and I don't know whether that's meant as a broader term than "a device that allows you to use other devices with your mind", because as I said, I don't know of any device that would improve our decision-making that much just because we could use it with our minds.
The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones and then be integrated into whole human neural networks. Could that be possible? I know it might sound crazy, but I guess I'm talking to people who think aligning an AI is really difficult and that having superintelligences on humanity's side sooner or later is the only path forward.
(Update: I think I disagree with what I'm saying here, but I think it's worth saying.)
There are several things to say here, but I think the most important one is: superintelligent humans are not aligned.
If you take a random human, or even my neighbour, and you magically give them the power to do any optimisation task faster than any collection of humans and machines presently available, I would be very scared, for the obvious reason that most humans kinda suck. This is sufficient as a counterargument imo.
But the more fundamental problem is that, depending on how this magical intelligence boost took place, I wouldn't even trust a superintelligent version of myself. Having that high of an intelligence changes my umwelt and the set of abstractions I can assign utility over.
Presently, I care that others are happy and that they have their wishes fulfilled. For myself, I care about what kind of "story" my life ends up being. I want to be a good book, as judged by my own quaint sensibilities. Perhaps most philosophically annoying is the idea that I want to be able to determine my own story as myself, via the exertion of "my own power".
But what happens when I discover that my notion of "a wish" is so confused relative to underlying physical reality that, given a much more precise grasp of reality, I have to make some arbitrary decisions about what my original notion supposedly refers to? How would I rescue my values from one umwelt to another?
"Making someone superintelligent" isn't as straightforward as locating a variable in their program, multiplying it by 1000, and leaving everything unchanged. There are degrees of freedom in how you'd implement the transformation. And for most persons, I'm not sure there even are ways of doing it without what basically amounts to killing the underlying person.
Intelligence augmentation probably wouldn't result in particular humans becoming overwhelmingly powerful. (But even if it did, I'm substantially more optimistic about what smart humans would do with the universe than you are; it would be weird if a much more capable version of someone did worse according to their own values.)