Mapping Activation Space: Peeking Inside the Model

The previous post covered what happens when you hit enter. Tokens flow through layers, probabilities get shaped, text comes out. System prompts anchor the model in activation space. Temperature controls how tightly it follows that anchor.

This post goes one level deeper, into the model’s layers. What are those activations? What shape do they take?

For this experiment, I used a well-documented workflow built around Google’s Gemma 2 2B model and the Gemma Scope residual stream SAEs. These are Sparse Autoencoders trained by Google on Gemma’s residual stream activations. They act as auxiliary models that decompose dense internal states into sparse, more interpretable features. The tools are Gemma-specific but the concepts apply to any transformer model.


Vectors: The Model’s Internal Language

Think about the gas laws. Compress a gas, it gets hotter. The macroscopic behavior is simple. Underneath, there’s a seething mass of atoms bouncing around, and the real explanation of why it gets hot when compressed lives in those microscopic dynamics.

Neural networks work the same way.

  • The macroscopic behavior is “the model talks about France.”
  • The microscopic dynamics are activation patterns flowing through 2304-dimensional space.

We’re mapping the microscopic level.

A vector is a list of numbers. In Gemma 2 2B, the vector at each token position is a list of 2304 numbers. That’s the model’s d_model dimension, its internal working width.

No single number in that list is human-readable. But the pattern across all 2304 numbers encodes what the model “knows” at that point in the text.

When Gemma processes “The capital of France is”, it does not have a dedicated slot for “France” and another slot for “capital”. Instead, the model has a specific geometric direction for “France” and another direction for “capital”. It adds those vectors together.

The vector at the last token position is just the mathematical sum of all those active concepts.

Because it was built through vector addition, that single coordinate in 2304-dimensional space contains the combined geometry of “France”, “geography”, and “Paris is next”.

We cannot visualize that space directly, but the model moves through it constantly, and every prediction depends on exactly where that final coordinate lands.
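To make that concrete, here is a toy sketch (random stand-in directions, not real Gemma features): in high-dimensional space, random directions are nearly orthogonal, so a sum of concept directions still “contains” each one in a recoverable way.

```python
# Toy illustration only: random stand-in directions, not directions learned by Gemma.
import numpy as np

rng = np.random.default_rng(0)
d_model = 2304                      # Gemma 2 2B's residual stream width

def unit(v):
    return v / np.linalg.norm(v)

france    = unit(rng.standard_normal(d_model))   # stand-in direction for "France"
capital   = unit(rng.standard_normal(d_model))   # stand-in direction for "capital"
unrelated = unit(rng.standard_normal(d_model))   # a concept not in the prompt

state = france + capital            # the residual-stream point is (roughly) a sum of directions

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(state, france))           # ~0.71: "France" is clearly present in the vector
print(cos(state, capital))          # ~0.71: so is "capital"
print(cos(state, unrelated))        # ~0.0: unrelated directions barely register
```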

Toy Models of Superposition

The Linear Representation Hypothesis and the Geometry of Large Language Models


The Residual Stream: The Highway Everything Rides On

Vectors are the payload; the residual stream is the bus. It is the main data pipeline through the transformer.

That 2304-dimensional vector persists across all 26 layers. Each layer does not replace it outright. It reads the current residual stream, computes an update, and adds that update back in.

The token embedding creates the initial vector. Layer 0 attention reads it, computes an update, adds it back. Layer 0 MLP does the same. This continues through all 26 layers, and the final vector gets mapped to token probabilities.
That additive pattern is why the residual stream is such an important place to inspect. Earlier information can remain available while later layers keep reshaping it. What survives, what gets amplified, and what becomes less useful depends on the sequence of updates across the network.
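Here is that read-update-add pattern as a runnable toy (tiny dimensions, random weights, and a generic block layout rather than Gemma’s exact architecture); the point is the shape of the computation, not the numbers.

```python
# A minimal sketch of the pre-norm residual pattern. Toy sizes and random weights;
# Gemma 2 2B uses d_model = 2304 and 26 layers, but the flow is the same.
import torch
import torch.nn as nn

d_model, n_layers, seq_len = 64, 4, 8

class ToyBlock(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.norm2 = nn.LayerNorm(d)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        a, _ = self.attn(self.norm1(x), self.norm1(x), self.norm1(x))
        x = x + a                        # attention's update is added into the stream
        x = x + self.mlp(self.norm2(x))  # the MLP's update is added on top
        return x                         # same vector width in, same width out

x = torch.randn(1, seq_len, d_model)     # stand-in for the token embeddings
for block in [ToyBlock(d_model) for _ in range(n_layers)]:
    x = block(x)                         # the residual stream persists across all layers
print(x.shape)                           # torch.Size([1, 8, 64])
```

In this experiment, the SAE taps into exactly that stream: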

  • A prompt goes in. The model processes it token by token.
  • At layer 12, each token has a vector of shape (2304,) in the residual stream.
  • The SAE encoder maps that dense vector into a sparse feature space (16,384 dimensions for the coarse SAE, 262,144 for the fine SAE) where only a small number of features activate.
  • The SAE decoder maps the sparse representation back into (2304,) to approximate the original residual state.
  • A steering vector is the decoder vector for a specific SAE feature. Adding it into the residual stream biases the model toward that feature’s pattern. Phase 2 will test this.
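Those encode/decode steps, sketched in simplified form (plain ReLU instead of Gemma Scope’s JumpReLU, and random stand-in weights rather than the released checkpoints):

```python
# Simplified SAE sketch. Real Gemma Scope SAEs use JumpReLU and trained weights;
# with these random stand-ins the sparsity pattern is meaningless, only the shapes
# and the encode/decode mechanics carry over.
import torch

d_model, n_features = 2304, 16_384            # residual width, coarse SAE dictionary size

W_enc = torch.randn(d_model, n_features) * 0.01
b_enc = torch.zeros(n_features)
W_dec = torch.randn(n_features, d_model) * 0.01
b_dec = torch.zeros(d_model)

def sae_encode(resid):                        # resid: (2304,) from the layer 12 hook
    return torch.relu((resid - b_dec) @ W_enc + b_enc)   # feature activations, (16384,)

def sae_decode(features):                     # with trained weights, 'features' is mostly zeros
    return features @ W_dec + b_dec           # approximate reconstruction, (2304,)

resid = torch.randn(d_model)
recon = sae_decode(sae_encode(resid))
steering_vector = W_dec[3999]                 # a decoder row is a fixed direction in the stream
print(recon.shape, steering_vector.shape)     # torch.Size([2304]) torch.Size([2304])
```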

Why Layer 12?

Gemma 2 2B has 26 transformer layers, numbered 0 through 25. I’m hooking into layer 12 — roughly 46% depth.

The hook point is blocks.12.hook_resid_post, capturing the residual stream after layer 12 finishes processing.

Production inference engines and local runners use quantized formats and custom C++ engines for speed, which strips out the Python-level hook system.

To get around this, we load the exact same Gemma 2 2B weights from HuggingFace as native PyTorch tensors (raw weights), which gives us the ability to tap any layer.
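The post doesn’t name the tooling, but the hook name below follows TransformerLens conventions, so here is a minimal capture sketch assuming that library (a plain PyTorch forward hook on the HuggingFace model would do the same job):

```python
# Capture the residual stream after layer 12 (assumes the TransformerLens library).
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")   # raw PyTorch weights, hookable
tokens = model.to_tokens("The capital of France is")
logits, cache = model.run_with_cache(tokens)

resid = cache["blocks.12.hook_resid_post"]                 # shape (batch, seq_len, 2304)
last_token_state = resid[0, -1]                            # the vector the SAE decomposes
print(last_token_state.shape)                              # torch.Size([2304])
```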

Gemma Scope trained SAEs on every layer of Gemma 2 2B, so layer 12 was a deliberate choice based on Google’s own interpretability research.

Announcing Gemma Scope 2 — AI Alignment Forum

Gemma_Scope_2_Technical_Paper.pdf


What Does “Mapping Activations” Mean?

An SAE decomposes a single 2304-number vector into thousands of sparse components called features. This experiment uses two resolutions:

  • Coarse (16K SAE): fewer features, each covering more conceptual ground.
  • Fine (262K SAE): more features, each with higher resolution on specific facets.

The residual stream is dense and compressed. Gemma has to represent a huge amount of structure in just 2304 dimensions, so many different patterns get packed together in superposition. That means individual residual dimensions are not clean, human-readable concept slots. A single dimension can participate in multiple unrelated behaviors depending on the context.

This is where the Sparse Autoencoder helps. Instead of trying to interpret the raw 2304-dimensional state directly, the SAE projects it into a much larger sparse feature space, such as 16K or 262K features. In that expanded space, only a small number of features activate for a given input, and those features are often easier to interpret than the original dense residual dimensions.

So back to the actual numbers. The SAE takes that tightly packed, dense state and expands it out into a much wider space so we can see which patterns actually fired.

How sparse is sparse, and what is an SAE?

Think of an audio spectrum analyzer. It takes a single, dense audio wave where the kick drum, bass, and vocals are all mashed together, and splits them out into distinct frequency bands. Most bands stay flat. Only the frequencies actually present in the audio spike up.

The SAE is a semantic spectrum analyzer.

The model’s Layer 12 vector is the dense audio wave, a 2304-number mess where all the active concepts are added together.

The SAE takes that dense wave and splits it out across 16,384 distinct “frequency bands” (features).

Because a single sentence only contains a handful of concepts, the SAE only spikes about 82 of those bands to represent the entire input.

The other 16,000+ bands stay flat at zero simply because those concepts are not in the sentence. Out of 16,384 possible patterns the SAE can detect, a typical input lights up fewer than 1% of them. The rest stay silent. That is what sparse means.

Feature #3999:   mean activation 3.67, selectivity 0.97  ← France (strongest signal)
Feature #11333:  mean activation 4.48, selectivity 0.89  ← France-related
Feature #9473:   mean activation 1.21, selectivity 0.88  ← France-related
Feature #14805:  mean activation 1.24, selectivity 0.86  ← France-related
Feature #6211:   mean activation 1.27, selectivity 0.86  ← France-related
...
Feature #6004:   activation 0.00                         ← silent
Feature #6005:   activation 0.00                         ← silent
[~11,000 more at zero]

 

The key column we should focus on is selectivity: how exclusively does a feature fire on France prompts versus neutral ones?

Feature #3999 scores 0.97. This means it almost exclusively fires on France-related input. Notice #11333 actually fires harder (mean 4.48 vs 3.67) but has lower selectivity. It bleeds into neutral prompts too. A feature that fires on everything is not telling you anything useful, no matter how loud it is.

I chose France as a sanity-check target because the interpretability method was already grounded in prior work, and Gemma Scope gave me a validated SAE setup to test whether my own pipeline produced correct data.

The experiment follows standard A/B logic.

  • Run 400 prompts total through Gemma: 200 France-themed prompts and 200 neutral filler prompts.
  • Capture the layer 12 activations for both sets.
  • Decompose both through the same SAEs.
  • Use Python to compare features that fire consistently on the France set but stay quiet on the neutral sets.

Now we have the candidates for “France features.” Features that fire on everything else can be safely considered noise.
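For reference, a hedged sketch of that comparison step. The array names and the exact selectivity formula are my assumptions, since the post doesn’t spell them out; the idea is just France-set mean versus neutral-set mean, per feature.

```python
# A hedged sketch of the A/B comparison. Shapes are (n_prompts, n_features):
# one row of SAE feature activations per prompt.
import numpy as np

def rank_candidate_features(france_acts, neutral_acts, top_n=10):
    france_mean = france_acts.mean(axis=0)        # mean activation per feature on France prompts
    neutral_mean = neutral_acts.mean(axis=0)      # same, on the neutral filler prompts
    # Stand-in selectivity: share of activation mass that comes from the France set.
    selectivity = np.where(france_mean > 0,
                           france_mean / (france_mean + neutral_mean + 1e-9), 0.0)
    order = np.argsort(-selectivity)
    for idx in order[:top_n]:
        print(f"Feature #{idx}: mean activation {france_mean[idx]:.2f}, "
              f"selectivity {selectivity[idx]:.2f}")
    return order[:top_n]
```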

Two Resolutions, One Concept

The 16K SAE (coarse) might give you one feature, call it #3999, that broadly responds to “France.” One fat bucket for the whole concept.

The 262K SAE (fine) might split that same concept into multiple features.

Feature #86473 fires on a subset of France prompts. Feature #249284 fires on a different subset. Feature #243533 on another.

Each one potentially encodes a different facet (cuisine, geography, landmarks, language), though we do not know which facets they are yet. More on that problem in a minute.

This is hierarchical feature decomposition.

Same logic as hand-designing a vision network:

  • Edges combine into shapes
  • Shapes combine into objects
  • Objects together in a scene change the meaning again (meaning, here, which regions get excited and what the network infers)

but instead of happening across multiple sequential layers, this hierarchy exists at different levels of granularity within the exact same layer.

The coarse SAE has a smaller dictionary, so it is forced to find the whole object.

The fine SAE has a massive dictionary, so it can afford to isolate the specific parts and edges. Nobody hand-wired these decompositions. They emerged naturally from the training data.

The cross-resolution mapping step figures out which fine features correspond to which coarse features. It measures two things (a short sketch of both checks follows the list):

  • Activation correlation: Do they fire on the same prompts? (behavioral evidence)
  • Decoder cosine similarity: Do their decoder vectors point in the same direction in the 2304-dimensional residual stream? (geometric evidence)
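Both checks are only a few lines each. The variable names are mine; the inputs are assumed to be per-prompt activations for the two features plus their decoder rows:

```python
import numpy as np

def activation_correlation(coarse_acts, fine_acts):
    # coarse_acts, fine_acts: shape (n_prompts,), one activation value per prompt
    return float(np.corrcoef(coarse_acts, fine_acts)[0, 1])       # behavioral evidence

def decoder_cosine(coarse_dec_row, fine_dec_row):
    # decoder rows: shape (2304,), each feature's fixed direction in the residual stream
    a, b = coarse_dec_row, fine_dec_row
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # geometric evidence
```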

Here are the actual results from the experiment.

The Heatmap: Decoder Cosine Similarity

The heatmap shows the same data from a different angle. Each cell measures how much a coarse feature and a fine feature point in the same direction inside the model.

  • High values mean they’re encoding the same thing at different zoom levels.
  • Low values mean they’re unrelated.
Coarse     Strongest fine match   Similarity   Reading
C-3999     F-249284               0.80         Near-identical direction. Almost certainly the same concept at different granularity.
C-11333    F-86473                0.54         Strong overlap, plus two more matches at 0.42 and 0.34.
C-9473     F-243533               0.47         One clear sub-feature.

Values near zero mean unrelated. Above 0.3 is meaningful geometric correspondence.

Each of the 262,000 fine features was compared against C-3999 to see how closely they point in the same direction. Almost all of them scored near zero, meaning no geometric alignment.
The histogram shows where the crowd is: piled up at zero, with a long empty stretch before the red line at 0.3. The handful of features past that line are the ones that actually share a direction with C-3999. That’s why 0.3 is the cutoff. It’s where the crowd ends and the signal begins.

Coarse-to-Fine Feature Decomposition

Every prompt was decoded through both SAEs at the same time. The coarse SAE gave us three broad France features. The fine SAE gave us six narrow ones.

The interactive dashboard below maps every possible pair of these coarse and fine features to measure how strong their relationship actually is.

All data shown is derived from real activations collected by running 400 prompts through Gemma 2 2B and decomposing layer 12 through both Gemma Scope SAEs.

The left side shows the decomposition graph. It maps exactly which fine features branch off from our three coarse anchors.

  • Link width represents behavioral correlation (do they fire together?).
  • Link color represents geometric similarity (do they point the same way?).

The right side plots these exact same pairs in metric space:

  • The X-axis is behavioral correlation: do they fire on the exact same prompts?
  • The Y-axis is geometric similarity: do their vectors point in the same direction inside the model?

Look at the color coding on the scatter plot. It tells the whole story:

Blue dots (Top Right): These are the true sub-feature matches. They have high correlation and high geometric similarity.

They co-fire AND point the same way inside the model. The strongest pair is C-3999 to F-249284.

Orange dots (Middle): These are partial overlaps.

They fire together often, but their geometry is drifting apart.

Red dots (Bottom): These are co-occurring concepts. They sit in the lower half with moderate correlation but cosine similarity near zero. These features fire on many of the same prompts but point in completely different directions.

They co-occur with France, but they are not encoding the same concept. They are related ideas that travel together, not the same idea at two zoom levels.

Reading the scatter plot

To better understand the graph, think of these features as passive sensors hooked up to the Layer 12 data bus:

1. The Empty Top-Left (High Similarity, Low Correlation)
If Sensor A and Sensor B point in the exact same direction, they are going to catch the exact same meandering vectors. Always. You physically cannot have a vector pass by that trips Sensor A but misses Sensor B. That is why high geometric similarity mathematically forces high correlation. The top-left is empty because it defies geometry.

2. The Bottom-Right (Low Similarity, High Correlation)
Sensor A points at “France”. Sensor B points at “Food”. They point in completely different directions (low geometric similarity). But when the vector for “French Cuisine” meanders down the data bus, it contains enough geometry to trip both sensors at the same time. They fire together (high correlation) even though they are looking for different things. These are your co-occurring concepts.

3. Low Selectivity (The noisy features)
If a sensor points in a direction that catches “France” but also catches a bunch of other random vectors meandering by (like “Germany” or “cheese”), it will have a lower selectivity score. This is exactly what you saw with Feature #11333 earlier in the post—it fired loud, but it fired on too much unrelated traffic to be a clean “France” feature.


The Interpretation Gap

C-3999 gets labeled “France” because it reliably activates on France prompts and not on neutral ones. That’s standard practice in interpretability work. Label a feature by what activates it.

But the model doesn’t know the word “France.” It has a direction in 2304-dimensional space that, for whatever internal reason, turned out to be useful for predicting the next token when France-related patterns show up. We call it “France” because that’s the human category our test prompts were organized around. The model’s internal geometry might carve the world along boundaries that don’t align with our conceptual categories at all — we just can’t tell, because we only test with prompts organized around our categories.

What we actually know                                                 What we’re assuming
C-3999 fires on France prompts, not neutral                           C-3999 “means” France
F-86473 fires on a subset of France prompts                           F-86473 is a France sub-concept
C-3999 and F-249284 point in similar directions                       They encode related meanings
Injecting C-3999’s direction changes output toward France-like text   The feature is causally involved in France generation

The labels (“French cuisine,” “Paris landmarks”) are human interpretations based on which prompts activate a feature. The model doesn’t have those labels. It has frozen weights and an activation landscape that we’re projecting human categories onto.

Neural nets don’t memorize data. They find regularities and generalize those regularities to new data. A model will generate plausible text about unicorns even though it’s never seen one described as real, because it’s learned the relational structure of mythical creatures, horses, and horns. The internal representation that enables that generalization doesn’t need to map onto our concept of “unicorn.” It just needs to be useful for next-token prediction. When we label a feature “France,” we’re assuming the model’s useful regularity aligns with our semantic boundary. Sometimes it does. Sometimes we don’t know.

This is the wall that people like Neel Nanda keep writing about in mechanistic interpretability research. It was interesting to actually hit it myself. I can identify which features fire and when, and I can measure the geometric relationships between them. But mapping that to human-readable meaning is always an inference, never ground truth.

When I started this project, I wanted to build something like the Activation Space Navigator from the previous post, but using real model data derived from SAEs. I pictured clean clusters with labeled regions where you could point and say, “that’s France.”

The real data did not look like that. What it gave me instead were directions in the model’s internal representation that reliably correlate with France-themed input.


Every feature in the SAE is a literal 2304-dimensional vector stored in the SAE’s decoder matrix. Feature C-3999 is just row #3999 in that matrix. It acts as a static reference coordinate for “France”.

This exact mechanic applies to any concept. If we were testing Python code, HTML tables, or HTTP status codes, there would be a different row acting as the reference coordinate for that specific pattern.

The reference vectors in the SAE do not move. They are fixed in place like highway signs. The model’s dynamic state passes by them.

When the France prompt generates a vector that passes close to the C-3999 sign, that specific band on our semantic spectrum analyzer spikes.

Neutral prompts pass further away, so the band stays flat. Those spikes are the sparse values we actually record.

One thing worth naming: this is all an approximation. The SAE reconstructs the residual stream from its learned directions, and the reconstruction is not perfect. Some signal is always lost. We are working with a useful approximation of the model’s internal state, not the thing itself.

So it is not that “France activated these regions of the model’s weights.” The weights are frozen. The model has learned internal directions for France-related patterns, and when France text flows through, the residual stream aligns with those directions. That alignment is what we are measuring.


Closing the loop

The previous post described system prompts as activation-space manipulation. This experiment gives me supporting evidence for that framing.

The directions those prompts appear to push activations toward are measurable, and some of that structure can be decomposed into narrower features.

What I found is suggestive structure, not full semantic ground truth.

The coarse-to-fine matches look real enough to justify the next step, which is testing whether steering along those directions changes generation in a targeted way.

What’s Next for This Project

The activation mapping is done. I found suggestive structure:

coarse France-related features that appear to decompose into finer sub-features, supported by both behavioral correlation and geometric similarity.

The GO/NO-GO question was whether the coarse-to-fine mapping would produce 3+ meaningful sub-features per coarse anchor.

C-11333 has three above threshold. That’s a GO.

Phase 2 is the actual Steering Experiment.

Take those mapped features and test whether multi-resolution steering (coarse “France” + fine sub-features) produces better, more targeted output than single-resolution steering alone.
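Mechanically, that test is a hook that adds a scaled decoder direction into the residual stream during generation. A hedged sketch (TransformerLens again; the scale, the positions to steer, and the random stand-in vector below are all placeholders for choices Phase 2 still has to make):

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")
steering_vector = torch.randn(model.cfg.d_model)   # stand-in; in practice, the SAE decoder row for C-3999

def add_steering(resid, hook, vec=steering_vector, scale=4.0):
    return resid + scale * vec                     # nudge every position toward the feature's direction

with model.hooks(fwd_hooks=[("blocks.12.hook_resid_post", add_steering)]):
    print(model.generate("Tell me about your favorite city.", max_new_tokens=40))
```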

If it does, that’s evidence the cross-resolution structure isn’t just a statistical artifact. It’s a lever we can pull to tweak the behavior of a model.

If it doesn’t, the structure is real but doesn’t actually control what the model generates. Correlation isn’t causation, even inside a neural network.

Either way, I’ll know more about what those frozen weights are actually doing when we hit enter.



Fractals All the Way Down

I’m not a machine learning engineer. But I work deep enough in systems that when something doesn’t make sense architecturally, it bothers me. And LLMs didn’t make sense.

On paper, all they do is predict the next word. In practice, they write code, solve logic problems, and explain concepts better than most people can. I wanted to know what was in that gap.

I did some digging. And the answer wasn’t that someone sat down and programmed reasoning into these systems. Nobody did. Apparently it emerged. Simple math, repeated at scale, producing structure that looks intentional but isn’t.

But that simplicity didn’t come from nowhere.

Claude Shannon was running letter-guessing games in the 1950s, proving that language has predictable statistical structure.

 

Rosenblatt built the first neural network around the same time.

 

Backpropagation matured in the ’80s, but computers were too slow and datasets were too small, so the idea kept dying and getting resurrected for decades.

 

Then in 2017, a team at Google Brain published a paper called “Attention Is All You Need” and introduced the Transformer architecture.

This crystallized the earlier attention ideas into something that scaled.

Not a new idea so much as the right idea finally meeting the infrastructure that could support it.

  • GPUs that could parallelize the math.
  • High-speed internet that made massive datasets collectible.
  • Faster CPUs, SSDs, and RAM that kept feeding an exponential curve of compute and throughput.

 Each piece was evolving on its own timeline and they all converged around the same window. GPT, Claude, Gemini, all of it traces back to that paper landing at the exact moment the hardware could actually run what it described.

 From what I’ve learned and what I understand, here’s what happens under the hood.


One Moment in Time

The model sees a sequence of tokens and has to guess the next one.

Not full words, but tokens. Tokens are chunks: subwords, punctuation, sometimes pieces of words. “Unbelievable” might get split into “un,” “believ,” “able.” This is why models can handle rare words they’ve never seen whole: they know the parts.

It’s also why current models can be weirdly bad at things like the infamous “how many r’s in strawberry” question and exact arithmetic. Because the model reads “strawberry” as two chunks, “straw” and “berry,” it literally cannot see the individual letters inside them.
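You can see this for yourself with any tokenizer (GPT-2’s shown here for convenience; Gemma’s tokenizer splits words differently, so the exact chunks will vary):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
for text in ["Unbelievable", "strawberry", "The cat sat on the"]:
    print(text, "->", tok.tokenize(text))   # the subword chunks the model actually sees
```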

But the principle is the same.

Every capability, every impressive demo, every unnerving conversation anyone’s ever had with an LLM comes back to this single act: a mathematical system producing a weighted list of what might come next. “The cat sat on the…” and the model outputs something like:

mat:    35%
floor:  20%
roof:   15%
dog:     5%
piano:   3%
...thousands more trailing off into the decimals

 

Those probabilities aren’t hand-coded. They come from the model’s weights: billions of numbers that were adjusted, one tiny fraction at a time, by showing the model real human text and punishing it for guessing wrong.

The process looks like this:

Let’s take a real sentence: “The capital of France is Paris.”

Then we feed it in one piece at a time.

  • The model sees “The” and guesses the next token. The actual answer was “capital.” Wrong guess? Adjust the weights.
  • Now it sees “The capital” and guesses again. Actual answer: “of.” Adjust. “The capital of” → “France” → adjust. Over and over.

Do this across hundreds of billions to trillions of tokens from real human text and the weights slowly encode patterns of grammar, facts, reasoning structure, tone, everything.
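Here is that step in miniature (a toy model with random weights and made-up token ids; the real thing differs only in scale):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.tensor([[5, 17, 3, 42, 8, 23]])    # stand-in ids for "The capital of France is Paris"
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # each position's target is the NEXT token

logits = model(inputs)                            # (1, 5, vocab_size): a guess at every position
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                   # wrong guesses push the weights around
optimizer.step()
print(float(loss))
```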

That’s pretraining. Real data as the baseline. Prediction as the mechanism. The model is learning to mimic the statistical patterns of language at a depth that’s hard to overstate.

Then We Loop It

One prediction isn’t useful. But chain them together and something starts to happen.

The model picks a token, appends it, and predicts the next one. Repeat.

That’s the autoregressive loop: the system feeds its own output back in, one token at a time.

Conceptually it reprocesses the whole context each step; in practice it caches intermediate computations (the KV cache) so each new token is incremental. But the mental model of “reads it all again” is the right way to think about what it’s doing.
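The loop itself fits in a few lines. This sketch assumes `model` maps a token sequence to next-token logits; real inference adds the KV cache, and the sampling knobs covered below shape the probabilities before each draw:

```python
import torch

def generate(model, tokens, n_new):
    for _ in range(n_new):
        logits = model(tokens)[:, -1, :]                      # scores for the next token only
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)  # roll the weighted die
        tokens = torch.cat([tokens, next_token], dim=1)       # append it and feed everything back in
    return tokens

dummy_model = lambda t: torch.randn(t.shape[0], t.shape[1], 100)  # stand-in LM: random logits
print(generate(dummy_model, torch.tensor([[1, 2, 3]]), n_new=5))
```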

The model can “look back” at everything that came before, not just the last few tokens. This is the core innovation of the Transformer architecture.

Older approaches, like RNNs, compressed the entire history into a single state vector, like trying to remember a whole book by the feeling it left you with.

Transformers use a mechanism called attention, which is essentially content-addressable memory over the entire context window: each token issues a query and retrieves the most relevant pieces of the past.

Instead of compressing history into one state, the model can directly reach back and pull information from any earlier token, which is why it can track entities across paragraphs, resolve references, and maintain coherent structure over long passages.

It’s also why “context window” is a real architectural constraint. There’s a hard limit on how far back the model can look, and when conversations exceed that limit, things start falling off the edge. 
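Stripped to its core, one attention step looks like this (single head, no causal mask, no learned projections; real implementations add all three):

```python
import torch

def attention(Q, K, V):
    # Q, K, V: (seq_len, d_head). Each query scores every position in the context...
    scores = Q @ K.T / K.shape[-1] ** 0.5      # how relevant is each other token to this one?
    weights = torch.softmax(scores, dim=-1)    # turn the scores into a lookup distribution
    return weights @ V                         # ...and pulls back a weighted mix of their content

q = k = v = torch.randn(8, 16)
print(attention(q, k, v).shape)                # torch.Size([8, 16])
```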

🗨️ Right here, with just these two pieces “next-token prediction and the loop” we already have something that can generate coherent paragraphs of text. No special architecture for understanding. Just a prediction engine running in a loop, and the patterns baked into its weights doing the rest.

 But this creates a question: if the model only ever produces a probability list, how do we actually pick which token to use?

Rolling the Dice

This is where sampling comes in.

The model gives us a weighted list. We roll a weighted die.

Temperature controls how hard we shake that die, and shaking reshapes the probability distribution.

🗨️ The raw scores are divided by the Temperature number before being converted to probabilities.

A gentle shake (low temperature) and the die barely tumbles; it lands on the heaviest side almost every time. The gaps between scores get stretched wide, so the top answer dominates. “Mat.” Safe. Predictable.

Shake it hard (high temperature) and everything’s in play. The gaps shrink, the scores flatten out, and long shots get a real chance. “Piano.” Creative. Surprising. Maybe nonsensical.

But temperature isn’t the only knob. There’s also top-k and top-p (nucleus) sampling, which control which candidates are even allowed into the roll.

Top-k says “only consider the 40 most probable tokens.”

Top-p says “only consider enough tokens to cover 95% of the total probability mass.”

These methods trim the long tail of weird, unlikely completions before the die is even cast. Most production systems use some combination of all three.
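Here is one roll of the die with all three knobs, as a simplified sketch (production samplers combine these with more bookkeeping; the defaults below just echo the examples above):

```python
import torch

def sample(logits, temperature=0.8, top_k=40, top_p=0.95):
    logits = logits / temperature                        # temperature reshapes the distribution
    top_vals, top_idx = torch.topk(logits, top_k)        # top-k: keep the 40 most probable tokens
    probs = torch.softmax(top_vals, dim=-1)
    sorted_probs, sorted_order = torch.sort(probs, descending=True)
    keep = torch.cumsum(sorted_probs, dim=-1) <= top_p   # top-p: keep enough to cover 95% of mass
    keep[0] = True                                       # always keep at least the best token
    sorted_probs = sorted_probs * keep
    sorted_probs = sorted_probs / sorted_probs.sum()     # renormalise what survived the trimming
    choice = torch.multinomial(sorted_probs, 1)          # the actual die roll
    return top_idx[sorted_order[choice]]

logits = torch.randn(50_000)                             # stand-in for the model's raw scores
print(int(sample(logits)))                               # a token id; different luck every run
```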

The weights of the model don’t change between rolls. It’s the same brain, the same probabilities, but different luck on each draw.

This matters because it’s how we can run the same model multiple times on the same prompt and get completely different outputs. Same terrain, different path taken. The randomness is a feature, not a bug.

Run that whole loop five times on the same input and we might get:

Run 1: "The cat sat on the mat and purred."
Run 2: "The cat sat on the mat quietly."
Run 3: "The cat sat on the roof again."
Run 4: "The cat sat on the piano bench."
Run 5: "The cat sat on the mat and slept."

 

Same model. Same weights. Same starting text. Five different outputs, because the dice rolled differently at each step and those differences cascaded.

Teaching the Model What “Good” Means

Pretraining gets us a model that knows what language looks like. It can write fluently, complete sentences, even produce things that resemble reasoning.

But it has no concept of “helpful” or “safe” or “that’s actually a good answer.” It’s just mimicking patterns. To get from raw prediction engine to something that feels like a useful assistant, we need another layer.

This is where Reinforcement Learning from Human Feedback (RLHF) comes in: essentially a feedback loop that turns a raw prediction engine into something with opinions.

First, there’s supervised fine-tuning (SFT).

Take the pretrained model and train it further on curated examples of good assistant behavior:

  • high-quality question-and-answer pairs
  • helpful explanations
  • well-structured responses

This is the “be helpful” pass. It gets the model into the right ballpark before the more nuanced optimization begins.

Then comes the preference optimization stage.

Take the fine-tuned model. Give it a prompt. Let it generate multiple candidate outputs using different sampling runs: same weights, different dice rolls, different results.

Then a completely separate model, a reward model trained specifically to judge quality, reads all the candidates and scores them. “Run 1 is an 8.5. Run 4 is a 4.”

Training: Take that ranking and tell the original model to adjust its weights so outputs like Run 1 become more probable and outputs like Run 4 become less probable.

Nudge billions of weights slightly. Repeat across millions of prompts. Sometimes the “judge” is trained from human preferences; sometimes it’s trained from AI feedback — same destination, different math.
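A toy of that nudge (REINFORCE-style, with hand-typed reward scores; real systems use PPO, DPO, or similar, and the scores come from a trained reward model rather than a list):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
policy = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.SGD(policy.parameters(), lr=0.05)

candidates = [torch.tensor([[5, 17, 3, 42]]),            # two sampled outputs for the same prompt
              torch.tensor([[5, 17, 9, 61]])]
rewards = torch.tensor([8.5, 4.0])                       # the judge's scores for each candidate
baseline = rewards.mean()                                # above-average goes up, below-average goes down

loss = torch.tensor(0.0)
for cand, reward in zip(candidates, rewards):
    logits = policy(cand[:, :-1])
    logprob = -nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                           cand[:, 1:].reshape(-1))
    loss = loss - (reward - baseline) * logprob          # better-scored outputs become more probable
loss.backward()
optimizer.step()
```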

The models we interact with today are the result of all that shaping. One set of weights that already absorbed the judge’s preferences. Often the judge doesn’t run at inference time; its preferences are mostly baked into the weights, though some systems still layer on lightweight filters or reranking.

Then It Gets Weird

 Train a small model to predict the next token and it mostly learns surface stuff: grammar, common phrases, local pattern matching.

"The sky is ___" → "blue."

Exactly what we can expect from a prediction engine.

But scale the same system up, with more parameters, more data, and more compute, and new behaviors start showing up that nobody explicitly programmed.

A larger model can suddenly do things like:

  • Arithmetic-like behavior. Nobody gave it a calculator. It just saw enough examples of “2 + 3 = 5” and “147 + 38 = 185” that learning a procedure (or something procedure-shaped) reduced prediction error. Sometimes it’s memorization, sometimes it’s a learned algorithm, and often it’s a messy blend.
  • Code synthesis. Not just repeating snippets it saw, but generating new combinations that compile and run.
  • Translation and transfer. Languages, formats, and styles it barely saw during training suddenly become usable.
  • Multi-step reasoning traces. Following constraints, tracking entities, resolving ambiguity, and doing “if-then” logic over several steps.

The unsettling part, to me at least, is how these abilities appear.

Some researchers argue these cliffs are partially measurement artifacts, a function of how benchmarks score rather than a true discontinuity.

But the visible shift in capabilities with scale is hard to deny. A model at 10 billion parameters can’t do a task at all. Same architecture at 100 billion, suddenly it blooms into something new.

Like a phase transition: water isn’t “kind of ice” at 1°C. It’s still liquid. At 0°C it transforms into something structurally different.

The researchers call these emergent capabilities, which is a polite way of saying “we didn’t plan this and we’re not entirely sure why it happens.” This is why people like Andrej Karpathy openly say they don’t fully understand frontier models. Meanwhile, the CEOs selling them have every incentive to amplify that mystique.

A human didn’t code a reasoning module. The model needed to predict the next token in text that contained reasoning, so it built internal machinery that represents how reasoning works. Because that was the best strategy for getting the prediction right.

Once researchers realized these abilities were appearing, they started shaping the conditions that strengthen them:

  • curating training data with more reasoning-heavy text
  • fine-tuning on chain-of-thought examples that show working step by step,
  • using preference tuning / RLHF to reward clearer logic and more helpful outputs

The engineering in frontier models is more like gardening than architecture. They’re creating conditions for capabilities to grow stronger. They still can’t fully predict what will emerge next.

Looking Inside

So if nobody designed these capabilities, what’s actually happening in the weights?

This is the question that drives a field called mechanistic interpretability.

Here is a great blog post that helped me wrap my head around this

https://www.neelnanda.io/mechanistic-interpretability/glossary

 Researchers are opening the black box and tracing what happens inside. The model is just billions of numbers organized into layers. When text comes in, it flows through these layers and gets transformed at each step. Each layer is a giant grid of math operations. After training, nobody assigned roles to any of these. But when researchers started looking at what individual neurons and groups of neurons actually do, they found structure.

Think of it like a brain scan. You put a person in an MRI, show them a face, and a specific region lights up every time. Nobody wired that region to be “the face area.” It self-organized during development. But it’s real, consistent, and doing a specific job.

The same thing happens inside these models.

Take a sentence like “John gave the ball to Mary. What did Mary receive?”

To answer this, the model needs to figure out that

John is the giver and Mary is the receiver,

track that the ball is the object being transferred,

and connect “receive” back to “the ball.”

When researchers traced which weights activated during this task, they found consistent substructures: distributed patterns of neurons that reliably participate in the same kind of computation. Not random activation, but structured pathways that behave like circuits. One pattern identifies subject-object relationships and feeds into another that tracks the object, which feeds into another that resolves the reference. In reality it looks messier and more distributed than a clean pipeline diagram, but the functional structure is real, reproducible, and visible when you trace the activations.

It’s a circuit that emerged naturally: prediction pressure during training forced the weights to self-organize into reliable pathways, because language is full of patterns like this.

And these smaller circuits compose, combine, and feed into more complex circuits. Object-tracking feeds into reasoning, which feeds into analogy. It’s hierarchical self-organization: layers of structure built on top of each other, none of it hand-designed.

Anthropic published research mapping millions of features inside their model.

Mapping the Mind of a Large Language Model Anthropic

https://thesephist.com/posts/prism/

Nomic Atlas (Visual Representation)

They found individual features that represent specific concepts. Not “neuron 4,517 does something vague” but “this feature activates for deception,” “this one activates for code,” “this one activates for the Golden Gate Bridge.”

Mapped into clusters, related concepts group near each other like neighborhoods in a city. A concept like “inner conflict” sits near “balancing tradeoffs,” which sits near “opposing principles.” It looks like a galaxy map of meanings and ideas that nobody drew.

Some models, like DeepSeek (a Mixture of Experts architecture), take this further.

They don’t just develop one set of circuits; they train many specialized sub-networks within a single model and route each input to the most relevant ones.

  • Ask it a coding question and one subset of weights fires.
  • Ask it a history question and a different subset activates.

The model self-organized not just circuits, but entire specialized regions and a traffic controller to direct inputs between them. Same principle, one level up.
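A minimal sketch of that routing idea (toy sizes; real MoE models use many more experts, learned load balancing, and per-token routing inside every MoE layer):

```python
import torch
import torch.nn as nn

d_model, n_experts, top_k = 32, 4, 2
experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])
router = nn.Linear(d_model, n_experts)                  # the "traffic controller"

def moe_layer(x):                                       # x: (n_tokens, d_model)
    gate = torch.softmax(router(x), dim=-1)             # score every expert for every token
    weights, chosen = torch.topk(gate, top_k, dim=-1)   # only the top-k experts fire per token
    out = torch.zeros_like(x)
    for slot in range(top_k):
        for e in range(n_experts):
            mask = chosen[:, slot] == e                 # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += weights[mask, slot:slot + 1] * experts[e](x[mask])
    return out

print(moe_layer(torch.randn(5, d_model)).shape)         # torch.Size([5, 32])
```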

Spirographs and fractals

This is where the overall concept itself clicked for me.

Strictly speaking, neural networks are not closed mathematical loops. Conceptually, however, a spirograph illustrates exactly how they operate:

🗨️ Simple operations, iterated across a massive space, producing complex structure that looks designed but emerged on its own.

A spirograph is one circle rolling around another. Dead simple rule. Keep going and we get intricate symmetry that feels intentional. Change one tiny thing, like shifting the pen hole slightly off-center or changing the radius, and now we get a completely different pattern.

Training is like that: same architecture, same objective, small changes in data mix or learning rate can yield meaningfully different internal structure.

And like fractals, the deeper we look, the more structure we find. Researchers keep uncovering smaller, sharper circuits. The same motifs repeat at different scales. The interesting behavior lives right on the boundary between order and randomness.

It’s the same pattern we can see in nature: simple rules, iterated, producing shapes that look designed.

Closing out the loop

In school I used to draw circles over and over with a compass, watching patterns appear that I didn’t plan.

Years later, I found myself messing around with Google’s DeepDream, feeding images into a neural network and watching it project trippy, hallucinatory patterns back.

I thought I was making trippy images. What I was also seeing was the network’s internal pattern library being cranked to maximum.

The training objective is trivially simple: “guess the next word.”

But the internal machinery that emerges to get good at that objective ends up resembling understanding.

And “resembles” is doing a lot of work there, whether it’s true understanding or an imitation so sophisticated the difference stops mattering in practice.

Or maybe it’s simpler than that. We trained it on patterns and concepts and texts created by organic brains which are themselves complex math engines. As a side effect, it took on the shape of the neurons that birthed it. Like DNA from mother and father forming how we look.

Just like we see in mother nature: “It’s fractals all the way down.”