Okay, hear me out: we already have billions of human-level, generally intelligent, human-aligned agents. And making new ones artificially seems to be really hard, according to lots of alignment researchers. So why haven't I heard or read about projects trying to improve human intelligence? I'm not saying to do it in order to solve the alignment of purely artificial intelligences (although that could be a possibility too), but to the point of getting a hybrid (natural + artificial) superintelligence, or at least something more useful than AIs.
I know that this raises the question of what a superintelligence would do, or how having an aligned one would help us avoid making misaligned ones, and although there are possible good answers to that, those are questions that already exist outside "normal" alignment.
The only similar thing I've heard of is Elon Musk talking about Neuralink, but there is a huge difference between the things Elon mentions as aspirations (in particular, merging with an AI) and the product Neuralink is actually making right now. I don't see how Brain-Computer Interfaces would improve our decision-making that much. A lot of people bring them up when they talk about augmenting intelligence, and of course they could be useful, but again: I'm not talking about using them to solve AI alignment, but to get around it. I'm wondering if we can find a way to scale human intelligence the same way we scale artificial intelligence.
I found a post that briefly mentions ideas similar to mine, but under the term BCI. I don't understand whether that's meant as a broader term than "a device that allows you to use other devices with your mind", because as I said, I don't know of any device that would improve our decision-making that much just because we could use it with our minds.
The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones, and then integrate them with whole human neural networks. Could that be possible? I know it might sound crazy, but I guess I'm talking to people who think aligning an AI is really difficult, and that having superintelligences on humanity's side sooner or later seems like the only path forward.
I didn't know that in Superintelligence Bostrom talked about other paths to superintelligence; I need to read it ASAP.
Yeah, you're probably right, and I guess what I was trying to say is that the thing that pops into my mind when I think about possible paths to making us superintelligent is a hybrid between BCIs and brain emulations.
And I was imagining that maybe neuron emulations wouldn't be that difficult, or that signals from AI "neurons" (something similar to present-day neural networks) could be enough to be recognized as neurons by the brain.
Maybe that doesn't sound promising, but without much knowledge of AI alignment, outer alignment already sounds to me like aligning human neural networks with an optimizer. And then, for inner alignment, you have to align the optimizer with an artificial neural network. Aligning one type of neural network with another sounds simpler to me.
But maybe it's wrong to think about the problem like that, and the actual problem is easier.
I think how important human cognitive enhancement is depends on how quickly people think AI is coming and how transformative they expect it to be. If we need aligned AI very quickly because we may all be wiped out, then that would take precedence. But if we have time, accelerating advances in human cognitive enhancement may be an extremely worthwhile endeavor. Morally and cognitively enhanced humans might be highly motivated to do research in areas that EAs are interested in, and to create technology to mitigate disasters.