This post is part of a series.

  • Part 1: Speed vs. Skill
  • Part 2: AI and Elaboration: Which Coding Patterns Build Understanding? (this post)

This is the second post in “Developing with AI Through the Cognitive Lens,” a series exploring how AI coding tools affect the way programmers learn, work, and build expertise. Drawing on cognitive psychology research—particularly Felienne Hermans' work in The Programmer’s Brain—this series examines what happens to our skills when we delegate cognitive work to AI. The goal isn’t to reject AI, but to use it deliberately, making conscious choices about when it helps and when it hinders.

AI tools let you complete coding tasks without connecting new information to your existing mental models—a cognitive process known as elaboration that is crucial for building understanding. But some AI interaction patterns preserve this elaboration while others bypass it entirely. Let’s explore what elaboration is, why it matters for learning, and how to use AI tools in ways that support this process rather than circumvent it.

Mental Models and Schemata

When an expert programmer encounters a for loop in code, something interesting happens in their mind. They don’t just see the syntax. Rather, they connect it to a rich network of knowledge: when loops are the right choice versus alternatives like map or reduce, performance implications for large datasets, common patterns like accumulation or filtering, and edge cases to watch for.

A novice looking at the same for loop, on the other hand, sees the syntax itself, and maybe the basic concept of iteration. The difference between experts and novices lies in what cognitive psychologists call mental models or schemata. These are organised structures of knowledge built up over time.

In her book The Programmer’s Brain, Felienne Hermans explores how these mental models form the foundation of programming expertise. Expert developers have accumulated years of these interconnected knowledge structures. When they read code, they’re not decoding syntax—they’re pattern matching against these rich mental models.

The path from novice to expert is all about progressively enriching these schemata, adding new patterns, creating connections between concepts, and organising knowledge so that it can be retrieved and applied effectively. Each programming task, each bug fixed, each design decision gradually builds the mental models that distinguish experts from beginners.

What Is Elaboration?

Understanding something new doesn’t happen in isolation. A core element is connecting it to what you already know. Cognitive psychologists call this activity of linking new information to your existing mental models elaboration.

Consider learning about the map method on arrays. Without elaboration, you might learn: “The map method transforms arrays.” That’s not wrong, but it’s an isolated fact. Next week when you need to process an array, you probably won’t remember map or when to use it.

With elaboration, something different happens. You actively connect map to patterns you already understand: “Oh, this is like a for-loop that builds a new array. I’d use map when I’m transforming data. Unlike forEach, it returns a new array, which connects to immutability. And unlike a for-loop, it signals ‘transformation’ to whoever reads my code.”
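
To make those connections concrete, here’s a minimal TypeScript sketch (the array and values are purely illustrative) of the three alternatives you might line up:

    const prices = [10, 20, 30];

    // A for-loop: imperative; you create and fill the new array yourself.
    const doubledLoop: number[] = [];
    for (const p of prices) {
      doubledLoop.push(p * 2);
    }

    // forEach: still imperative; it mutates an accumulator and returns nothing.
    const doubledEach: number[] = [];
    prices.forEach((p) => doubledEach.push(p * 2));

    // map: declarative; it returns a new array, leaves prices untouched,
    // and signals "transformation" to whoever reads the code.
    const doubledMap = prices.map((p) => p * 2);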

Here, you’re asking questions and making comparisons: Why this over alternatives? How does this relate to what I know? When is this the right choice? This active processing builds bridges between new concepts and existing mental models.

This is how mental models grow from sparse to rich. Each new concept learned through elaboration doesn’t just add a fact, but integrates into the structure of what you know, creating connection points for future learning. Because we store knowledge in interconnected webs, elaboration is crucial for actually storing information in long-term memory and making it easier to retrieve.

Elaboration in Software Development

Software development presents many opportunities for elaboration. Exploring an unfamiliar API means comparing its design to libraries you’ve used before, noticing naming patterns, inferring the designers' mental model. If you learn about the Ring middleware concept in Clojure and compare it to Servlet filters, that’s elaboration, and it helps you integrate the new knowledge, making it easier to retrieve.

Debugging forces you to form hypotheses, test them against your mental model, and refine your understanding when reality doesn’t match. Realising “the filter is evaluated lazily” isn’t just fixing a bug, but deepening your understanding of how lazy sequences work in practice.

Even reading others' code involves elaboration when you predict what it will do, compare it to how you would have solved it, or question surprising choices. As Hermans notes, this active reasoning is how you build the mental models that distinguish experts from beginners.

Why Elaboration Requires Effort

Elaboration doesn’t happen automatically. The process of making connections, comparing alternatives, and thinking through implications requires considerable mental effort. Cognitive psychologists call this germane cognitive load: the productive effort your brain invests in building understanding. We’re going to explore cognitive load theory and its implications for AI-augmented software development in depth in the next post in this series.

The thing about AI coding tools is that they enable you to complete tasks without elaboration, getting working code while skipping the mental work that builds expertise. The next sections examine how different AI interaction patterns affect whether elaboration happens at all.

AI Interaction Patterns That Bypass Elaboration

Not all AI usage prevents elaboration, but certain interaction patterns make it easy to skip most or all of the mental work. Here are three common patterns where elaboration is greatly reduced or doesn’t happen at all.

Pattern 1: Autocomplete Without Examination

You’re typing code and a suggestion appears. Press Tab, keep moving. The patterns contained in the suggested snippet get added to your code, but not to your mental model.

When you accept a non-trivial suggestion consisting of a call to .map() without pausing to think “why map versus forEach versus a for-loop?”, you’ve skipped the comparison that would connect this new pattern to what you already know. The suggestion might be reasonable, but you haven’t done the cognitive work to understand why. Next time you face a similar problem, will you remember that map exists? Will you know when to choose it?

Pattern 2: Full Code Generation Without Engagement

Ask AI to “build feature X, end-to-end” and it will do just that. It will probably generate hundreds of lines of code spread across a dozen classes or so.

Some people argue that developers need to loosen their control as a human-in-the-loop and learn to trust the agents. What they mean, in practice, is that you should stop doing careful, detailed code reviews.

When the generated code runs to hundreds of lines, even wanting to engage becomes difficult. There’s simply too much to process, too many decisions to trace back, too much context to absorb. But even small generations bypass elaboration if you don’t actively review them.

In both cases, whether you’re overwhelmed by the volume or you’ve decided to skip detailed reviews, the result is the same: code that hopefully works, and understanding that never developed. You can’t explain why this approach over alternatives. You don’t know what trade-offs were made. The design decisions that would build your architectural judgement happened in the AI’s training data, not in your mind.

Pattern 3: Instant Debugging Answers

An error appears. You immediately ask AI to “fix the problem.” The AI provides a fix, you paste it in. Bug resolved.

But elaboration requires struggle. When you hand the diagnosis to AI, you skip hypothesis generation, systematic investigation, the reasoning that would build debugging intuition. You miss the chance to connect this failure mode to your mental model of how the system works.

The bug is fixed, but the debugging patterns aren’t learned. Next time a similar error appears, you’ll be back asking AI again, because the knowledge of how to diagnose it never made it into your long-term memory.

AI Interaction Patterns That Preserve Elaboration

Not all AI usage bypasses elaboration. Some interaction patterns actively support the mental work that builds understanding, or at least don’t nudge you to skip it.

The Navigator Pattern

The Omega Programming methodology, inspired by Extreme Programming’s pair programming practices, offers a pattern that preserves elaboration: you act as the navigator making decisions, while the AI acts as the driver implementing your instructions.

This preserves elaboration because you’re doing all the cognitive work. You make every design decision, so you must understand the problem and solution well enough to direct the AI clearly. You maintain the complete mental model of what’s being built.

This is fundamentally different from asking the AI to solve your problem. Instead of “build a REST endpoint,” you’re saying “create a GET endpoint at /api/users that returns JSON” and then “add error handling for database connection failures” and then “extract the database logic into a separate function.” Each instruction requires you to have already decided what needs to happen and why.
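
To sketch where those instructions might lead, here’s a hedged example assuming a Node.js service built with Express (the route shape and the fetchUsers helper are illustrative, not prescribed by the methodology):

    import express from "express";
    // Instruction 3: "extract the database logic into a separate function".
    // fetchUsers is a hypothetical helper living in its own module.
    import { fetchUsers } from "./db";

    const app = express();

    // Instruction 1: "create a GET endpoint at /api/users that returns JSON".
    app.get("/api/users", async (_req, res) => {
      try {
        res.json(await fetchUsers());
      } catch {
        // Instruction 2: "add error handling for database connection failures".
        res.status(503).json({ error: "database unavailable" });
      }
    });

    app.listen(3000);

The point isn’t the code itself; it’s that each line exists because you decided it should, one instruction at a time.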

The Worked Example Pattern

Rather than asking AI to solve your specific problem, ask it for an example using a different domain. Study that example, extract the underlying pattern, then apply it to your actual problem yourself.

Say you need to calculate a seven-day moving average in your sales data. Instead of asking the AI to write that query, ask it to show you how window functions work in SQL Server with a concrete example. It might give you an employee salary example. You then do the cognitive work: extract the pattern (how OVER clauses and partitioning work), map from employees and salaries to your sales and dates, and generate your own solution.
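
The pattern you extract is “for each row, aggregate over a window of that row and its neighbours”, which is what ROWS BETWEEN 6 PRECEDING AND CURRENT ROW expresses in SQL. As a hedged illustration of that extracted pattern outside SQL, here’s a minimal TypeScript sketch (the Sale shape is invented; it assumes one row per day, sorted by date):

    interface Sale {
      date: string; // ISO date, one row per day (illustrative shape)
      amount: number;
    }

    // Seven-day moving average: for each day, average the amounts of
    // that day and the six preceding days.
    function movingAverage(sales: Sale[], windowSize = 7): number[] {
      return sales.map((_, i) => {
        const window = sales.slice(Math.max(0, i - windowSize + 1), i + 1);
        const sum = window.reduce((acc, s) => acc + s.amount, 0);
        return sum / window.length;
      });
    }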

This forces elaboration at every step. Instead of just accepting the solution provided by your AI assistant, you’re connecting the pattern to your existing knowledge, adapting it to your specific context, and constructing your own solution. Your mental model grows through this adaptation work.

We’ll explore this technique in depth in a future post.

Teaching Back to the AI

After learning something new or implementing a solution, explain your understanding to the AI. Ask it to probe your thinking with questions, acting as a Socratic tutor rather than an answer provider.

You might say: “I just implemented authentication using JWT tokens. Let me explain how it works in my system…” Then ask: “What questions should I be able to answer about this approach? Challenge my understanding.”

The AI might respond with foundational questions first: “What is the purpose of signing tokens? How does signature verification work?” Then move deeper: “If someone intercepts a token, what can they do with it? How are you handling token refresh? Have you considered the trade-offs compared to session-based authentication?”
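
If you want to check your explanation against running code, here’s a minimal sketch assuming the jsonwebtoken npm package (an assumption about your stack; the secret handling is simplified for illustration):

    import jwt from "jsonwebtoken";

    const secret = process.env.JWT_SECRET ?? "dev-only-secret"; // illustrative only

    // Signing: anyone can read a JWT's payload, but only holders of the
    // secret can produce a valid signature over it.
    const token = jwt.sign({ sub: "user-123" }, secret, { expiresIn: "15m" });

    // Verification: the signature is recomputed and compared; a tampered
    // payload, a wrong secret, or an expired token makes verify() throw.
    try {
      const claims = jwt.verify(token, secret);
      console.log("authenticated:", claims);
    } catch {
      console.log("invalid or expired token");
    }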

This pattern preserves elaboration because explaining forces you to articulate the connections you’ve made. You can’t explain something clearly without understanding how the pieces fit together. When you stumble in your explanation, you’ve discovered a gap in your mental model. The AI’s questions push you to think deeper, to consider alternatives, to examine assumptions you hadn’t questioned.

You don’t truly understand something until you can explain it, and the AI’s probing questions help you discover what you don’t yet understand. Your mental model strengthens through this active dialogue—not from receiving explanations from the AI, but from articulating and defending your understanding.

Attempt, Then Verify

Try to solve the problem yourself first, even if your solution is incomplete or wrong. Only then ask the AI to explain the concept or verify your approach.

Even if you fail, you’ve already started building a mental model of the problem space. When the AI provides feedback, you’re comparing your thinking to another approach, which is elaboration through contrast. “Oh, I tried a for-loop but they used reduce—what’s the difference?” That comparison deepens your understanding in ways that passively receiving the AI’s solution never would.
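
For instance, here’s that particular contrast as a tiny TypeScript sketch (the numbers are illustrative):

    const nums = [1, 2, 3, 4];

    // Your attempt: a for-loop with an explicit, mutable accumulator.
    let total = 0;
    for (const n of nums) {
      total += n;
    }

    // The AI's version: reduce folds the accumulation into a single
    // expression; same result, but the accumulator becomes a parameter.
    const totalReduced = nums.reduce((acc, n) => acc + n, 0);

Noticing why both produce 10, and when you’d prefer one over the other, is exactly the elaboration the attempt made possible.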

Your mental model grows through this attempt-and-feedback loop, much like learning from code review or pair programming with a human colleague.

The Key Difference

These patterns share a common trait: they make you do the cognitive work while letting AI reduce friction. You maintain mental models, make decisions, create connections, articulate understanding. The AI helps by reducing typing, providing examples, offering feedback, or asking probing questions, but it doesn’t replace the thinking that builds expertise.

Why Bypassing Elaboration Feels Fine

The patterns that bypass elaboration feel fine in the moment: the code works, the task completes, and you feel productive. But this is the same perception gap we explored in the first post.

However, mental models grow slowly, invisibly, so it’s very difficult to notice them not growing. Only months later, when you can’t solve a similar problem without AI, will it dawn on you that the understanding never developed. Elaboration is not an end in itself: the ability it builds is something stakeholders can rightly expect from us software developers, because we must be able to modify and extend the systems we have built. That’s what we are paid for. If we stop being able to do that, we make ourselves dispensable.

Stack Overflow and the Value of Friction

Stack Overflow can bypass elaboration, too. If you copy-paste an answer without understanding it, you will learn nothing. But Stack Overflow has more friction than AI tools, and that friction often forces elaboration to happen.

When you search Stack Overflow, you typically see multiple answers to the same question. You must read through them, compare approaches, evaluate which one fits your specific context. The top answer might use a library you don’t have. The second answer might assume a different framework version. This comparison is elaboration.

Then you need to adapt the solution to your situation. Variable names don’t match. The example uses different data structures. You need to integrate it with your existing code. This adaptation requires understanding what the code actually does and why it works. You often can’t just paste it in. That modification work is elaboration.

These annoyances are actually elaboration opportunities. The friction that makes Stack Overflow feel slower than AI forces you to do the mental work that builds understanding.

AI tools remove that friction. The code appears in your editor, already adapted to your context, ready to run. There’s no comparison (one answer, not five), and depending on your prompts and context engineering, little or no adaptation is required. Elaboration becomes entirely optional.

Interestingly, Hermans criticised Stack Overflow for other learning problems in The Programmer’s Brain, specifically around retrieval practice, which we are going to explore in a future post. But even with those concerns, Stack Overflow’s friction makes it harder to bypass elaboration entirely than AI tools do. AI makes the problem worse by removing even the friction-driven elaboration that Stack Overflow encourages.

The lesson isn’t “don’t use AI, use Stack Overflow instead.” Both can be used well or poorly. The lesson is recognising that friction isn’t always inefficiency. Sometimes it’s the cognitive work that builds expertise.

Guidelines for Preserving Elaboration

We’ve already explored four interaction patterns that preserve elaboration: the navigator pattern, worked examples, teaching back to the AI, and attempting before verifying. Beyond choosing these patterns, here are some specific practices that help ensure elaboration happens.

Pause Before Accepting

When autocomplete suggests code, resist the reflex to press Tab immediately. Ask yourself: “Why this pattern? What alternatives exist?” Compare the suggestion to approaches you already know. This brief pause is where elaboration can happen. You’re not slowing down to be inefficient, but giving your brain time to make connections.

Ask Questions Actively

Whether reviewing AI-generated code or examining a suggestion, scrutinise the choices: “How does this connect to patterns I know?” “When would I not use this?” These questions force explicit elaboration. The AI chose this solution for some reason. Make sure you understand what that reason is, and whether you agree with it.

Adapt, Don’t Accept Blindly

Before using AI-generated code, modify it. Change variable names to match your conventions. Restructure for clarity. Improve what you can. This adaptation isn’t busywork. If you can adapt code intelligently, you’ve understood it well enough to elaborate. If you can’t, you’ve discovered that understanding is shallow.
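
As a small, hypothetical illustration (the domain, names, and threshold are invented), the adaptation might look like this in TypeScript:

    // As generated: correct, but generic names and a magic number.
    function f(a: { p: number }[]): number {
      return a.filter((x) => x.p > 100).length;
    }

    // As adapted: domain names and an extracted constant. If you can make
    // these changes confidently, you understood the code; if not, you've
    // just discovered that your understanding is shallow.
    const FREE_SHIPPING_THRESHOLD = 100; // illustrative business rule
    function countFreeShippingOrders(orders: { price: number }[]): number {
      return orders.filter((o) => o.price > FREE_SHIPPING_THRESHOLD).length;
    }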

Conclusion

Elaboration is how programming expertise develops. Every connection you make, every comparison you draw, every “aha, this is like that other pattern” moment strengthens the knowledge structures that distinguish experts from beginners.

Some AI interaction patterns bypass this elaboration entirely. Accepting autocomplete suggestions without examination, merging or pushing generated code without a detailed review, asking for instant fixes—these usage modes let you complete tasks while your mental models remain unchanged or even deteriorate.

Other patterns preserve elaboration: directing the AI as a navigator, studying worked examples and adapting them, teaching your understanding back to the AI, attempting solutions before verifying. The difference comes down to who does the cognitive work.

The choice isn’t between “use AI” or “don’t use AI.” It’s between interaction patterns that preserve the mental work that builds expertise and patterns that eliminate it. Default patterns optimise for speed by skipping elaboration. Intentional patterns use AI to reduce friction while preserving the cognitive engagement that matters.

The next post will examine why elaboration requires effort, exploring cognitive load theory and how AI changes the composition of mental work. Later we’ll dive deeper into the worked example pattern and explore how AI affects retrieval practice and long-term memory.