damc4

Currently working on my projects @ No organization

Comments (5)

I disagree.

In my opinion, if a government had operational control of AGI, it would likely optimize the AI for the benefit of a subset of the people in government, just as a private company would.

Our forms of government have been tried, and they work to some extent, but they have never been tested in a situation where elected officials have the opportunity to seize so much power in such a short time frame.

In my opinion, the best way to ensure a democratic outcome would be to train AI in a decentralized way (using many small computers owned by many people, instead of big computers owned by a small group). People in government have an interest in making that happen: if they don't, they risk losing power to other people in government, and because of the diminishing marginal utility of AI, losing power is a bigger cost than the benefit of holding power alone (see the sketch below).
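A minimal way to make that utility claim precise (my formalization, not part of the original argument): assume log utility over resources, $u(r) = \log r$. An actor currently holding a share $s \in (0, 1)$ who gambles on centralized AI either ends up with everything ($r = 1$) or is displaced to a small remnant $r = \varepsilon \ll s$. Then

$$\text{gain} = \log\frac{1}{s}, \qquad \text{loss} = \log\frac{s}{\varepsilon},$$

and the loss exceeds the gain whenever $\varepsilon < s^2$. So for any serious risk of near-total displacement, the downside of losing power dominates the upside of holding it alone.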

Some people might say that decentralized training would be too inefficient to be feasible. Maybe it isn't feasible now, but it can be done in principle: human science is already learned in a decentralized way, with the brains of individual scientists acting as the computers in a decentralized network.

So I think government-led decentralized AI could be a good idea, but I'm not convinced by government-led centralized AI.

The other alternative is open source, but that has problems too: there would have to be a way to verify that open-source models don't contain a backdoor, i.e. that they aren't secretly optimized for the interests of the company or organization that made them.

If you believe that a democratic outcome with centralized AI is possible, how would that work? Who specifically would decide what the goal of the AI is? And how would you ensure that it is set to what was decided and not to something else?

"Problems and needs: Sometimes it's really valuable to hear user problems or needs rather than suggestions. Is there something that frustrates you when you interact with the Forum? Is there anything you find yourself looking for or wanting when you're using the Forum?"

My biggest problem/frustration is when I write a post that I think is very important and valuable but get almost no reaction: no upvotes and no critique or feedback.

I have some ideas for how to solve that, but for now I'm just sharing the problem.

Edit: I wrote this comment while a little frustrated, and I don't think it was very logical. I still think there is too much authority bias, but a feature like curation could potentially help with that, if implemented properly.

Personally, I'm against curation and track records. I asked Gemini, with a neutral prompt, to generate a report on "Is there too much authority bias in the world?" (some authority bias is rational; the question is whether there is too much), and its conclusion, based on empirical evidence, was that there is. I would like to see more posts from people who don't get a lot of attention.

I know from experience that when I want to communicate something, it's very hard: people seem to have a short attention span when you lack credibility, and it's difficult to strike a balance between conciseness and clarity when readers aren't willing to give much attention.

I would also like to be able to see all posts, and the user interface currently confuses me. I click "Advanced filters and sorting", choose "New", "All posts", and "Show low karma", and I still see "Frontpage posts" above all the lists. The word "frontpage" suggests those are the most upvoted posts, and almost all of them do have many upvotes, which suggests the list is not showing all posts. I want to see all posts.

Answer by damc4

So, my previous conclusion was that this is mostly a trade-off between equality, plus the contribution to alignment (I believe my solution increases the probability of alignment), plus getting AI faster so that fewer people die, versus the risk of biological weapons.

I think the impact of equality and the other factors outweighs the biological-weapons risk, for three reasons:

  1. I asked 8 AI models, more or less, what the expected number of deaths from biological weapons whose development was assisted by AI would be (that wasn't the exact wording), to get a sense of how serious the problem is, since I don't know much about biology. I removed the outliers (the highest and the lowest answers) and averaged the rest, i.e. took a trimmed mean (see the first sketch after this list). The average was small relative to how many people die in one year from ordinary causes, which suggests that the other problems are larger.
  2. The idea behind delay is that if powerful artificial intelligence is created later rather than sooner, there is more time to prepare for biological weapons. But the event of humanity being prepared at time X + Y is not independent of the event of humanity being prepared at time X: the probability of both depends on the same underlying factors, such as whether humans are smart and conscientious enough to prepare at all. Therefore, if the problem is not solved by time X, it is also less likely to be solved by time X + Y, so the extra time buys less than it seems.
  3. I also considered the impact on animals and other non-human agents. One fact I paid attention to: humans have some empathy, so they are likely to share some resources and/or benefits of AI with animals. If one person holds power, the variance in how much benefit animals (and other agents) receive is high, because it depends on a single person; if that person turns out to be a psychopath, animals may suffer badly. The more people who hold power, the lower the variance, because the benefits animals receive depend on more people; for animals to get nothing, every person in power would have to be a psychopath (see the second sketch after this list). Therefore, all else equal, less concentration of power among humans is the safer option, which is desirable given the diminishing returns of resources (utility = log resources). I'm not saying that is the only factor at play, but I think it's worth noting.
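For point 1, the aggregation is just a trimmed mean. Here is a minimal sketch in Python; the estimates listed are hypothetical placeholders, not the numbers the models actually gave:

```python
def trimmed_mean(estimates):
    """Drop the single highest and lowest estimate, then average the rest."""
    if len(estimates) < 3:
        raise ValueError("need at least 3 estimates to trim both extremes")
    trimmed = sorted(estimates)[1:-1]  # remove the two outliers
    return sum(trimmed) / len(trimmed)

# Hypothetical placeholder answers from 8 models (expected deaths), NOT real data.
estimates = [2e4, 5e4, 1e5, 2e5, 3e5, 5e5, 1e6, 1e8]
print(f"trimmed mean: {trimmed_mean(estimates):,.0f}")
# For scale: roughly 60 million people die per year from all causes.
```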
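For point 3, the variance claim can be checked with a small simulation (all parameters hypothetical): if each of n power-holders independently gives animals nothing with probability p, the chance that animals get nothing at all is p^n, and the variance of the average share they receive shrinks as n grows.

```python
import random

p = 0.05  # hypothetical chance a given power-holder gives animals nothing

def simulate(n, trials=100_000):
    """Return (P(animals get nothing), variance of the average share given)."""
    shares = []
    for _ in range(trials):
        gives = [0.0 if random.random() < p else 1.0 for _ in range(n)]
        shares.append(sum(gives) / n)  # average generosity across n power-holders
    mean = sum(shares) / trials
    var = sum((s - mean) ** 2 for s in shares) / trials
    p_nothing = sum(1 for s in shares if s == 0.0) / trials
    return p_nothing, var

for n in (1, 5, 25):
    p_nothing, var = simulate(n)
    print(f"n={n:2d}: P(nothing) ~ {p_nothing:.5f} (exact {p**n:.2e}), variance ~ {var:.5f}")
```

The exact probability that animals get nothing falls geometrically (p^n), and the variance of the average share falls as roughly 1/n, which is the sense in which dispersed power is the lower-variance option.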

If anyone has something to add, feel free.