
Zeusfyi

Founder @ Zeusfyi
-4 karma · Joined Apr 2024

Comments (4)

This is where I depart from most others:

1. If you cannot define intelligence generalization scientifically, in a complete and measurable way, then this is a complete waste of time; you cannot usefully assess risk for something you cannot usefully measure. This is science 101.

Here’s our definition at Zeusfyi:

We define generalization, in the context of intelligence, as the ability to generate learned differentiation of subsystem components, then manipulate them and build relationships toward a greater systems-level understanding of the universal construct that governs reality. This would not be possible if physics were not universal, since no reliable feedback could be derived. Zeusfyi, Inc. is the only institution that has scientifically defined intelligence generalization. The purest test of generalization ability: create a construct with systemic rules that define all possible allowed outcomes; a greater ability to predict more actions on the first try over time shows greater generalization; and, with more than one construct, the ability to do the same, relative to others.
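As a rough illustration of that test, here is a minimal Python sketch under stated assumptions: the construct is a toy rule-governed system (simple modular arithmetic), the agent gets exactly one try per prediction before feedback, and the names Construct, generalization_score, and difference_agent are hypothetical, not an existing Zeusfyi API.

```python
import random
from typing import Callable, List


class Construct:
    """A toy rule-governed system: the state evolves by a fixed, knowable rule."""

    def __init__(self, modulus: int, step: int, seed: int = 0):
        self.modulus = modulus
        self.step = step
        self.state = random.Random(seed).randrange(modulus)

    def next_state(self) -> int:
        self.state = (self.state + self.step) % self.modulus
        return self.state


def generalization_score(predict: Callable[[List[int]], int],
                         constructs: List[Construct],
                         trials: int = 100) -> float:
    """Average fraction of first-try predictions that match each construct's rule."""
    per_construct = []
    for c in constructs:
        history = [c.state]
        correct = 0
        for _ in range(trials):
            guess = predict(history)   # first and only try
            actual = c.next_state()    # feedback derived from the construct's rules
            correct += int(guess == actual)
            history.append(actual)
        per_construct.append(correct / trials)
    return sum(per_construct) / len(per_construct)


def difference_agent(history: List[int]) -> int:
    """Example agent: assumes the last observed difference keeps repeating."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])


if __name__ == "__main__":
    worlds = [Construct(modulus=7, step=3), Construct(modulus=11, step=5)]
    print(f"first-try score: {generalization_score(difference_agent, worlds):.2f}")
```

Scoring two different agents on the same list of constructs gives the "relative to others" comparison the test calls for; using more than one construct covers the ">1 construct" condition.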

Singular intelligence isn’t alignable; superintelligence, in the sense of something roughly 3x smarter than all of humanity, very likely can be solved well and thoroughly. The great filter is only a theory, and honestly quite a weak one, given that our ability to accurately assess planets outside our solar system for life is basically zero. As a rule, I can’t take anyone seriously when it comes to “projections” about what ASI does if they don’t have a scientifically complete and measurable definition of generalized intelligence.

Here’s our scientific definition: the same one given above.

It’s not hard to measure economic productivity and standards of living, or to create logical rules tied to them; the hard part is getting human beings to agree on things that are subjective, and convincing them to reason objectively instead.
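For what it’s worth, here is a minimal sketch of what such rules could look like. The indicator names and thresholds are illustrative assumptions only, not Zeusfyi’s actual rule set; the point is just that each rule is an objective pass/fail predicate over measured numbers, so nothing subjective is left once the measurements are agreed on.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Indicators = Dict[str, float]


@dataclass
class Rule:
    name: str
    check: Callable[[Indicators], bool]  # objective pass/fail, no subjective judgment


# Hypothetical rules tied to productivity and standards of living.
RULES: List[Rule] = [
    Rule("productivity grows",
         lambda m: m["gdp_per_hour_growth_pct"] >= 1.0),
    Rule("median income keeps pace",
         lambda m: m["median_real_income_growth_pct"] >= 0.0),
    Rule("basic access improves",
         lambda m: m["pct_with_clean_water"] >= 99.0),
]


def evaluate(metrics: Indicators) -> Dict[str, bool]:
    """Return an objective pass/fail verdict for every rule."""
    return {rule.name: rule.check(metrics) for rule in RULES}


if __name__ == "__main__":
    sample = {"gdp_per_hour_growth_pct": 1.4,
              "median_real_income_growth_pct": -0.2,
              "pct_with_clean_water": 99.3}
    for name, passed in evaluate(sample).items():
        print(f"{name}: {'pass' if passed else 'fail'}")
```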

If you were a researcher at Zeusfyi, here’s what our chief scientist would advise:

1. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers. The scalable alignment team at OpenAI has all of ~7 people.

Even a team of 7 is more than sufficient to solve scalable alignment; the problem stems from a lack of belief in yourself, your own cause, and your ability to solve it, due to the false belief that it is a resource issue. In general, solving unknowns takes a wider perspective plus creative IQ, which is not taught in any school (likely the opposite, honestly): systems-level thinkers who can relate field X to field Y and create solutions to unknowns from subknowns. Most people are afraid to “step on toes” or leave whatever subdivision they live in; if you want to do great research, you need to be more selfish in that way.

2. You can’t solve alignment if you can’t even define and measure intelligence generality; you can’t solve what you don’t know.

3. There’s only one reason intelligence exists: if we lived in a universe whose physics could “lie“ to you and make up energy/rules, then nothing would be predictable or periodic, and nothing could be generalized.

You now have the tools to solve it.