Mangust

Independent self-taught builder exploring body-born meaning before LLM speech

Bio

I’m an independent self-taught developer based in Haifa, working on Garmon, an early research prototype exploring body-born meaning before LLM speech.

My current focus is public-safe framing: how a pre-speech layer in an LLM-based system might make body-contour candidates, meaning candidates, and passive context more inspectable, without granting them command authority or treating them as memory, behavior, truth, or implementation permission.

Garmon is not presented as a finished agent, product, autonomous system, or proof of subjective experience. The public version does not expose private code or internal architecture.

My background is non-academic. I learn mostly through practical work, independent study, and AI-assisted engineering review.

Right now I’m mainly looking for clear, critical feedback on whether this public framing is understandable, modest, and safe.

Public repository:
https://github.com/garmon-gca/garmon-world

How others can help me

I’d appreciate:

– Critical feedback on whether the public framing is clear, modest, and safe.

– Pointers to prior work on pre-speech reasoning layers, agent foundations, interpretability, affective computing, cognitive architectures, or internal-state models.

– Arguments against this direction, especially if the framing seems confused, misleading, too strong, or unsafe.

– Suggestions for making the first public artifact small, reviewable, and public-safe.

How I can help others

I’m happy to:

– Share a non-academic builder’s perspective on AI safety-adjacent systems.

– Help explain AI / AI safety concepts to Russian or Hebrew speakers with less technical background.

– Give feedback on public-facing explanations, especially where technical ideas need to be made clearer, safer, or less prone to overclaiming.

– Brainstorm bounded, non-agentic architectures and safety boundaries.