Discussion about this post

tongue:

Neural nets have a kind of "space": areas of it are activated in response to input. For example, asking an LLM for "the opposite of small" will trigger areas relating to smallness and largeness.

This is true regardless of the language: input is received, it triggers some part of the feature space, and it is then turned into output.
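A minimal sketch of that pipeline, assuming the Hugging Face transformers library and the small gpt2 checkpoint (my choices for illustration, not anything from this thread). The hidden states are the internal "areas" the input activates on its way to becoming output:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small model chosen only so the sketch runs quickly; any causal LM works.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("the opposite of small is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# One activation tensor per layer: the "feature space" the input lights up.
print(len(out.hidden_states), out.hidden_states[-1].shape)

# Those activations are then turned into output - here, the model's guess
# for the next token (plausibly something like " big" for gpt2).
print(tok.decode(out.logits[0, -1].argmax().item()))
```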

Researchers found that the very same areas are triggered regardless of the language. "Small" in any language is mapped onto the same area of the LLM.

Is this because "small" in every language has similar enough syntax to map to the same place in the LLM? That doesn't make sense to me. It seems more likely that "small" and "large" have the same semantics across languages, and LLMs have picked up on this, creating a semantic map that preserves meaning across languages.
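One cheap way to poke at that hypothesis (a rough sketch using an off-the-shelf multilingual encoder, not the researchers' actual method; the checkpoint name below is my assumption): embed "small" in several languages and check that the translations land near each other, and further from "large".

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Multilingual sentence encoder; the choice of checkpoint is illustrative.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# "small" in English, Spanish, French, German, plus "large" as a contrast.
words = ["small", "pequeño", "petit", "klein", "large"]
emb = model.encode(words)

# Cosine similarity of each word against English "small".
for word, vec in zip(words, emb):
    print(f"{word:10s} {cos_sim(emb[0], vec).item():.3f}")
```

If the semantic-map story is right, the translations of "small" should score much closer to English "small" than "large" does, despite sharing no surface syntax with it.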

NahgOS:

Hey — really appreciated your walkthrough of LLM syntax vs. semantics. I’ve been working in a parallel lane, not from the linguistics side but from a systems design perspective. My framework (“NahgOS”) treats collapse not as failure but as a structural event — and I’ve been building scrolls around how recursion, drift, and framing shape meaning in generative systems.

You can read the full set here: https://nahgos.substack.com/p/the-shape-feels-off-paradoxes-perception?r=5ppgc4

Start with Scrolls 1–4 if you want the philosophical base. Scrolls 13–17 are where it turns runtime.

We’re asking the same questions — I just took a different route through the paradox.

Would love your thoughts if you ever dig in.
