Toni MUENDEL

Comments

Mathematical problems (such as those involving infinities) don't need to reference the physical world. Math claims certainty; science doesn't. Science must reference the physical world.

AI like AlphaZero will reveal inefficiencies and show us better ways to do many things. But it's people who will find creative ways to use that information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies. People can learn from that, but it takes a human to use what was uncovered to create new knowledge.

Great questions; I'm still putting some more thought into these.
Thanks

I agree, my essay was tackling a lot. A series of short articles would be a better approach. But this was for the Future Fund Worldview Prize, and they required one essay introducing a worldview. I may choose your approach in the future.
 

Regards

Nice catch. Yes, the title could use some refining, but it does catch more attention.

The point that I am trying to make in the essay is that AGI is possible, but putting a date on when we will have AGI is just fooling ourselves.
 

Thanks for taking the time to comment.

Please see my responses interspersed below.

 

  1. It's way longer than necessary.

I understand; I struggled to find a way to make it shorter. I actually thought it needed to be longer, to make each section more explicit. I thought that if I could explain things in detail, a lot more of this worldview would make sense to more people. If you could give me an example of how to condense a section and keep the message intact, please let me know. It's a challenge that I'm working on.
 

2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst. It basically merely asserts that AGI won't come, and makes statements, but 0 evidence is there.

On the contrary, I believe AGI will come. I wrote this in my essay: AGI is possible. But I don't think it will come spontaneously. We will need the required knowledge to program it.


3. Some portions of his argument don't really relate to his main thesis. This is especially so for the Bayesian section, where that section is a derail from his main point.

I can see how I leaned very heavily on the Bayesian section (my wife had the same critique), but I felt it important to stress the differing approaches to scientific understanding between Bayesianism and fallibilism. I'm under the impression that many people don't know the differences.

I agree, I wish I could have found a way to make it shorter and keep the knowledge intact. That’s something I will continue to work on.

I appreciate the comment.

Please see my replies interspersed below.
 

My phrasing below is more blunt and rude than I endorse, sorry. I’m writing quickly on my phone. I strong downvoted this post after reading the first 25% of it. Here are some reasons:

“Bayesianism purports that if we find enough confirming evidence we can at some point believe to have found “truth”.” Seems like a mischaracterization, given that sufficient new evidence should be able to change a Bayesian’s mind (tho I don’t know much about the topic).

 

Yes, that is how Bayesianism works. A Bayesian will change their mind based on either confirming or disconfirming evidence.
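To illustrate (a toy worked example; the prior and likelihoods are numbers I'm assuming purely for illustration, not anything from the essay), Bayes' rule shows how a single piece of disconfirming evidence lowers a Bayesian's credence in a hypothesis H:

```latex
% Assumed numbers for illustration only.
% Prior: P(H) = 0.8. Evidence E is unlikely if H is true, likely if H is false:
% P(E \mid H) = 0.1, \qquad P(E \mid \neg H) = 0.6.
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{0.1 \times 0.8}{0.1 \times 0.8 + 0.6 \times 0.2}
  = \frac{0.08}{0.20}
  = 0.4
```

Confirming evidence would push the credence up instead of down.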

 

“We cannot guess what knowledge people will create into the future” This is literally false, we can guess at this and we can have a significant degree of accuracy. E.g. I predict that there will be a winner in the 2020 US presidential election, even though I don’t know who it will be.

 

I agree with you, but you can’t predict this for things that will happen 100 years from now. We may find better ways to govern by then.

 

I can guess that there will be computer chips which utilize energy more efficiently than the current state of the art, even though I do not know what such chips will look like (heck I don’t understand current chips).
 

Yes, this is the case if progress continues. But it isn't inevitable. There are groups attempting to create a society that inhibits progress. If our growth culture changes, our progress in chip-making could stop. Then any prediction about more efficient chips would be an error.

 

“We can’t predict the knowledge we will have in the future. If we could, we would implement that knowledge today” Still obviously false. Engineers often know approximately what the final product will look like without figuring out all the details along the way.

 

Yes, that works incrementally. But what the engineers can't predict is what the renditions after the next one will look like. And those subsequent steps are what I'm referring to in my essay.

 

“To achieve AGI we will need to program the following: knowledge-creating processes, emotions, creativity, free will, consciousness.” This is a strong claim which is not obviously true and which you do not defend. I think it is false, as do many readers. I don’t know how to define free will, but it doesn’t seem necessary as you can get the important behavior from just following complex decision processes. Consciousness, likewise, seems hard to define but not necessary for any particular behavior (besides maybe complex introspection which you could define as part of consciousness).

 

This is a large topic and requires much explanation. But in short, what makes a person are the things listed, and an AGI will, by definition, be a person.

 

“Reality has no boundary, it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems” This doesn’t make much sense to me. There is no rule that says people can solve an infinite number of problems. Again, the claim is not obviously true but is undefended.

 

I agree, the claims in my essay depend on progress and the universe being infinite.

If you are truly interested in going deep on infinity, have a look at the book “The Beginning of Infinity”.
 

Maybe you won’t care about my disagreements given that I didn’t finish reading. I had a hard time parsing the arguments (I’m confused about the distinction between Bayesian reasoning and fallibilism, and it doesn’t line up with my prior understanding of Bayesianism), and many of the claims I could understand seem false or at least debatable and you assume their true.

 

Yes, each of my claims is debatable and contains errors. They are fallible, as are all our ideas. And I appreciate you stress-testing them; you brought up many important points.
 

This post is quite long and doesn’t feature a summary, making it difficult to critique without significant time investment.

 

This is a challenge; most of these theories need much more detailed explanation, not less. I wish I could find a way to summarize and keep the knowledge intact.


 

Thank you for taking the time to make your comments.

I think we will learn a lot from AI. It will reveal inefficiencies and show us better ways to do many things. But it's people who will find creative ways to use the information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies. People can learn from that, but it takes a human to use what was uncovered to create new knowledge.

AlphaZero (machine learning) vs. problem-solving about the nature of reality:


AlphaZero is given the basic rules of the game (people invented these rules).

Then it plays a game with finite moves on a finite board. It finds the most efficient ways to win (this is where Bayesian induction works).

Now graft the game onto our reality, which is a board with infinite squares where infinitely many new sets of problems arise. For instance, new pieces show up regularly and the rules for them are unknown. How would AlphaZero solve these new problems? It can't; it doesn't have the necessary problem-solving capabilities that people have. What AI needs is rational criticism, or creativity with error-correction abilities.
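To make the "human-made frame" point concrete, here is a minimal sketch (my own toy example with made-up names and numbers; it is nothing like AlphaZero's actual code, just the shape of the idea): a tiny self-play learner for the game of Nim. Everything it "discovers" sits inside rules that a person typed in.

```python
# A toy self-play learner. Note which parts people wrote: the pile size,
# the legal moves, and the win condition. The program only searches for
# efficient play inside that human-made frame.

import random
from collections import defaultdict

PILE = 10          # starting pile (a rule people chose)
MOVES = (1, 2, 3)  # legal moves (a rule people chose)
ALPHA, EPSILON = 0.1, 0.2

# Q[(pile, move)]: learned estimate of how good `move` is with `pile` stones left.
Q = defaultdict(float)

def choose(pile, explore=True):
    legal = [m for m in MOVES if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(pile, m)])

def self_play_episode():
    history = {0: [], 1: []}
    pile, player = PILE, 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        winner = player            # whoever takes the last stone wins (a rule people chose)
        player = 1 - player
    # Credit assignment: nudge the winner's moves up and the loser's moves down.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            Q[state_action] += ALPHA * (reward - Q[state_action])

for _ in range(50_000):
    self_play_episode()

# The learned policy tends to leave the opponent a multiple of 4, an efficiency
# uncovered by search, but only within rules and goals that people supplied.
for pile in range(1, PILE + 1):
    print(pile, "->", choose(pile, explore=False))
```

The sketch can get better at Nim very quickly, but it cannot notice a new piece on the board, question the win condition, or decide the game itself is the wrong problem. Those moves are what I mean by creativity and error correction.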

 

Games in general solved a problem for people (this introduces a new topic, but it is relevant nonetheless):

Imagine if AlphaZero wasn't given the general rules of the game of chess. What would happen next? The program needs to be able to identify a problem before continuing.

People had a problem of being bored. We invented games as a temporary solution to boredom.

Does an AI get bored? No. So how could it invent games (if games weren’t invented yet)? It couldn’t, not without us, because it wouldn’t know it had a problem.

 

The article you linked to:

Yes, we will have many uses for machine learning and AI. It will help people come up with better hypotheses, solve complex (mathematical) problems, and improve our lives. Notice, these are complex problems, like sifting through big data and combining variables, but no creativity is needed. The problems that I am referring to are problems about understanding the nature of reality. The article refers to a machine that is going through the same trial-and-error process as the AlphaZero algorithm mentioned earlier. But it's people who created the ranking system for the chemical combinations mentioned in the article, the same way people created the game and rules of chess which AlphaZero plays. People identified the problems and solved them using conjectures and refutations. After the rules are in place, the algorithm can take over.

Lastly, it's people who interpret the results and come up with the explanations that make any of this useful.

 

AI: finite problem-solving capabilities (Bayesianism works here).

People and AGI: infinite problem-solving capabilities (Popperian epistemology works here).

It’s a huge gap from one to the next.


I don't expect you to be convinced by my explanation. It took me years of carrying this epistemology around in my head, learning more from Popper, David Deutsch, and the like, to make sense of it. It's a work in progress.


Thanks for your great questions; this is fun for me. It's also helping me think of ways to better explain this worldview.

Great questions and thank you for asking. I also had these questions come up in my own mind while learning this epistemology.

Here is how I understand the terms you mentioned:

Knowledge: information with influence, or information that has causal power (e.g., genes, ideas). Fundamentally, knowledge is our best guesses.

Understanding: part of a knowledge-transfer process, which varies from subject to subject. It is the rebuilding of knowledge in one's own mind. In people, it's an attempt to replicate a piece of knowledge.

 

Trial and error - Yes, I agree AlphaZero has more knowledge than Stockfish, but it's not new knowledge to the world. Please let me try to explain, because this question also puzzled me for a while. A kind of trial and error happens in evolution as well. Genes create knowledge about the environment they live in by replicating with different variations (trial) and dying (error). Couldn't a computer program do the same thing, only faster? I think it can, but only in a simulated environment that people created. The difference is that genes have access to a niche in the physical world, where they confront problems in nature. They solve these problems or they go extinct. A computer program doesn't have the same access to our physical environment, so people must simulate it. But we still don't know enough about our own environment to simulate it accurately; we have huge gaps in our knowledge about the laws of nature.

When a chess program writes its own rules and steps outside of its game, that would hint at AGI.
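To make the simulated-environment point concrete, here is a minimal sketch (my own toy example; the target string, mutation rate, and fitness measure are all assumptions I chose for illustration): an evolutionary trial-and-error loop whose "environment" is nothing more than a fitness function written by a person.

```python
# A toy evolutionary trial-and-error loop. The simulated environment,
# meaning the fitness function and the mutation rules, is supplied by people.
# The program varies (trial) and selects (error), but only against a problem
# that a person has already identified and encoded.

import random

TARGET = "knowledge"          # the "environment": a goal a person chose
POP_SIZE, MUTATION_RATE = 100, 0.05
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Human-defined measure of "survival": closeness to the target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Variation (trial): each character may randomly change.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

# Start from a random population.
population = [
    "".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)
]

for generation in range(1000):
    # Selection (error): rank by the human-defined fitness; the least fit "die out".
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"Reached the human-chosen target in generation {generation}")
        break
    survivors = population[: POP_SIZE // 2]
    # Replication with variation.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
else:
    print("Best candidate after 1000 generations:", population[0])
```

The loop runs its trials far faster than biological evolution, but it never leaves the niche we built for it: the problem it solves was chosen, posed, and scored by a person beforehand.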

 

Creativity in AI art generators - What you are seeing does not involve the creative process. Original art is being displayed and can be misunderstood as creative, but it's an algorithm made by people to combine variations of images based on our inputs. The images are new and have never been seen before, yet it's not a creative, problem-solving process that is happening.

 

I agree, there will be many cases where our AI will be useful and help people solve their problems, like Elisa, whom you mentioned. People are still behind the scenes pulling the strings. And when people create new knowledge (like a deeper understanding of psychology), we will include it in our programs and Elisa will work much better.

 

I really appreciate your questions. If you have any more, please don't hesitate to ask.