Okay, hear me out: we already have human-level, generally intelligent, human-aligned agents by the billions. And making new ones artificially seems to be really hard, according to many alignment researchers. So why haven't I heard or read about projects trying to improve human intelligence? I'm not saying we should do it to solve the alignment of purely artificial intelligences (although that could be a possibility too), but to the point of getting a hybrid (natural + artificial) superintelligence, or at least something more useful than AIs.
I know this raises questions about what a superintelligence would do, or how having an aligned one would help us avoid making misaligned ones, and although there are possible good answers, those are questions that already exist outside "normal" alignment.
The closest thing to this that I've heard of is Elon Musk talking about Neuralink, but there is a huge difference between the things Elon mentions as aspirations (in particular, merging with an AI) and the product the company is actually making right now. I don't see how Brain-Computer Interfaces would improve our decision-making that much. A lot of people who talk about augmenting intelligence seem to bring them up, and of course they could be useful, but again: I'm not talking about using them to solve AI Alignment, but to get around it. I'm wondering whether we can find a way of scaling human intelligence the same way we scale artificial intelligence.
I found a post that briefly mentions ideas similar to mine, but under the term BCI, and I don't understand whether that's meant as a broader term than "a device that lets you use other devices with your mind", because as I said, I don't know of any device that would improve our decision-making that much just because we could use it with our minds.
The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones, and then integrate them into whole human neural networks. Could that be possible? I know it might sound crazy, but I guess I'm talking to people who think aligning an AI is really difficult, and that having superintelligences on humanity's side sooner or later seems like the only path forward.
There are discussions of improving intelligence through genetic enhancement technology. Superhuman intelligence inside humans would have a better shot at being aligned with human values. I'm not sure about BCIs, though. Here are some examples of discussions of genetic enhancement of cognitive ability if you want to research further:
[1] As the other commenter Zach Stein-Perlman noted, there is a section in Nick Bostrom's Superintelligence book where he describes enhancing human cognition.
[2] Nick Bostrom and Carl Shulman have an article entitled "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?"
[3] Steve Hsu, physicist and co-founder of Genomic Prediction, discusses cognitive enhancement. You can see his article "Super-Intelligent Humans Are Coming."
[4] Polymath Gwern Branwen has a very comprehensive article about genetic enhancement entitled "Embryo Selection for Intelligence." He evaluates the costs and benefits of different kinds of enhancement technology.
[5] A group of 13 researchers published "Screening Human Embryos for Polygenic Traits Has Limited Utility" in 2019. It discusses some of the current limitations, namely the limited ability of polygenic scores to predict IQ. More research is needed to improve selection for intelligence; prediction of height is further along.
[6] The other limitation is the number of embryos available to select from. That number will increase greatly if in vitro gametogenesis becomes possible in humans; it has already been achieved in mice. You can see Metaculus estimates for this.
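The two limitations in [5] and [6] interact through a simple piece of selection arithmetic (the approach Gwern's article in [4] uses): the expected gain from picking the top-scoring embryo out of n grows with both the predictor's accuracy and n. Here is a rough Monte Carlo sketch of that relationship; the function name and the 5% variance-explained figure for current IQ predictors are my own illustrative assumptions, not numbers from the cited papers:

```python
import random

def expected_selection_gain(n_embryos, var_explained, trait_sd=15.0,
                            trials=20000, seed=0):
    """Monte Carlo estimate of the expected gain, in trait units
    (e.g. IQ points), from implanting the top-scoring embryo out of
    n_embryos, when the polygenic score explains var_explained of the
    trait's variance."""
    rng = random.Random(seed)
    # Correlation between score and trait is sqrt(variance explained);
    # conditional on picking the embryo with the maximum score, the
    # expected trait value is r * (max of n standard normal scores).
    r = var_explained ** 0.5
    total = 0.0
    for _ in range(trials):
        scores = [rng.gauss(0.0, 1.0) for _ in range(n_embryos)]
        total += max(scores)
    return r * trait_sd * total / trials

# A weak predictor (~5% of IQ variance, an assumed figure) gives only a
# few points even with 10 embryos; a height-like predictor (~40%) gives
# far more, and more embryos always help.
print(expected_selection_gain(10, 0.05))   # modest gain
print(expected_selection_gain(10, 0.40))   # much larger gain
```

This is why both better predictors ([5]) and more embryos via in vitro gametogenesis ([6]) matter: the gain scales with the product of the two factors, so either bottleneck caps the benefit.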
How excited you are about this will likely depend on what you believe will happen with artificial intelligence, and how quickly. If we are all gone in 10 years, it might not matter much. If we have hundreds of millions of aligned superhuman brain emulations doing research, then it may not matter much how quickly we achieve these enhancements either. But if you have longer AI timelines, then having many superhuman geniuses, especially morally enhanced ones, could be very useful for creating aligned artificial intelligence or mitigating other disasters.
I have an article that discusses how the political and social environment might change depending on how various genetic enhancement scenarios play out, and a few other articles defending the practice. I'm really interested in this stuff, so you can message me if you want to discuss more.
I don't agree with the first statement, nor do I understand what you are arguing for or against.