Patricio

Comments

Oh, I didn't know the field was so dismissive of AI X-risk. When I saw this https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/ a 5-10% chance of X-risk seemed enough to take it seriously. Is that survey not representative? Or is there a gap between people recognizing the risks and granting them legitimacy?

Interesting. 

I'm not sure I understood the first part and what f(A,B) is. In the example you gave, B is only relevant in terms of how much it affects A ("damage the reputability of the AI risk ideas in the eye of anyone who hasn't yet seriously engaged with them and is deciding whether or not to"). So, in a way, you are still trying to maximize |A| (or probably a subset of it: the people who can also make progress on the problem, |A'|). But with "among other things" I guess you could be thinking of ways in which B could actively oppose A, so maybe that's why you want to reduce it too. The thing is, I have trouble visualizing most of B opposing A, or what that subset (B') could even do beyond reducing |A|. I think that's my main argument: B' is a really small subset of B, and I don't fear it.

Now, if your point is that to maximize |A| you have to keep B in mind, and so it would be better to have more 'legitimacy' on the alignment problem before making it viral, then you are right. So is there progress on that? Is the community-building plan to convert authorities in the field to A before reaching the mainstream, then?

Also, are people who try to disprove the alignment problem in B? If so, I'm not sure our objective should be to maximize |A'|. I'm not sure we can reach superintelligence with AI, so maybe it would be better to think about maximizing the number of people trying to solve OR dissolve the alignment problem. If we consider that most people probably wouldn't feel strongly about one side or the other (debatable), then I don't think bringing the discussion more into the mainstream is that big of a deal. And if the AI risk arguments include the point that, no matter how uncertain researchers are about the problem, the stakes mean we should lower the chances, then I see B and B' as even smaller. But maybe I'm too much of an optimist / marketer / memer.

Lastly, the shorter the timelines, the smaller the maximum size of A. Are people with short timelines the ones trying to reach the most people in the short term?

I think EA has the resources to make the alignment problem viral, or at least viral within STEM circles. Wouldn't that be good? I'm not asking whether it would be an effective way of doing good, just whether it would be a way of doing good.

Because I'm surprised that not even AI doomers seem to be trying to reach the mainstream.

Wow, I didn't expect a response. I didn't know shortforms were that visible; I thought I was just rambling on my profile. So I should clarify that when I say "what we actually want" I mean our actual terminal goals (if we have those).

So what I'm saying is that we are not training AIs, or creating any other technology, to pursue our terminal goals but to do other, narrower things (of course they're specific, since these systems don't have high capabilities). But the moment we create something that can take over the world, the fact that we didn't create it to pursue our terminal goals suddenly becomes a problem.

I'm not trying to explain why present technologies have failures; my point is that misalignment is not something that first appears with the creation of powerful AIs. That is just the moment when it becomes a problem, and that's why you have to build such systems with a different mentality than any other technology.

I'm still learning the basics of AI alignment, but it seems to me that all AIs (and other technologies) already fail to give us exactly what we want; we just don't call that outer misalignment because they are not "agentic" (enough?). The thing is, I don't know whether there is really some crucial, ontological property that makes something agentic. I think it could just be some kind of complexity that we assign a lot of value to.

ML systems are also inner misaligned in a way, because they can't generalize to everything from examples, and we see that whenever we don't like the results they give us on a particular task. Maybe "misaligned" isn't the right word for these technologies, but the important thing is that they don't do what we want them to do.

So the real question about AI risk is: are we going to build a superintelligent technology? That is the significant difference from previous technologies. If we do, we will no longer be the ones influencing the future the most, building little by little what we actually want and abandoning technologies whenever they aren't useful. We will be the ones who get turned off whenever the super-AI finds us doing things far from what it wants us to do.

I created an account and I'm pretty sure I still can't change or add anything.

I don't agree with the first statement, nor do I understand what you are arguing for or against.

Maybe that doesn't sound promising, but without much knowledge of AI alignment, outer alignment already sounds to me like aligning human neural networks with an optimizer, and then inner alignment means aligning that optimizer with an artificial neural network. Aligning one type of NN directly with another sounds simpler to me.

But maybe it's wrong to think about the problem like that, and the actual problem is easier.

Yeah, you are right. I guess I was trying to say that I haven't heard of projects that approach it from a "hardware" standpoint, considering the limitations the human brain has relative to scalable computers and AIs.

I didn't know that Bostrom discussed other paths to superintelligence in Superintelligence; I need to read it ASAP.

This doesn't make much sense to me; I'm not aware of relevant work or reasons to believe this is promising.

Yeah, you are probably right, and I guess what I was trying to say is that the thing that pops into my mind when I think about possible paths to making us superintelligent is a hybrid between BCI and brain emulation.

And I was imagining that maybe neuron emulation might not be that difficult, or that signals from AI "neurons" (something similar to present-day NNs) could be enough to be recognized as neurons by the brain.
