
AI, MCP, and the Hidden Costs of Data Hoarding

15 December 2025 at 08:15

The Model Context Protocol (MCP) is genuinely useful. It gives people who develop AI tools a standardized way to call functions and access data from external systems. Instead of building custom integrations for each data source, you can expose databases, APIs, and internal tools through a common protocol that any AI can understand.

However, I’ve been watching teams adopt MCP over the past year, and I’m seeing a disturbing pattern. Developers are using MCP to quickly connect their AI assistants to every data source they can find—customer databases, support tickets, internal APIs, document stores—and dumping it all into the AI’s context. And because the AI is smart enough to sort through a massive blob of data and pick out the parts that are relevant, it all just works! Which, counterintuitively, is actually a problem. The AI cheerfully processes massive amounts of data and produces reasonable answers, so nobody even thinks to question the approach.

This is data hoarding. And like physical hoarders who can’t throw anything away until their homes become so cluttered they’re unlivable, data hoarding has the potential to cause serious problems for our teams. Developers learn they can fetch far more data than the AI needs and provide it with little planning or structure, and the AI is smart enough to deal with it and still give good results.

When connecting a new data source takes hours instead of days, many developers don’t take the time to ask what data actually belongs in the context. That’s how you end up with systems that are expensive to run and impossible to debug, while an entire cohort of developers misses the chance to learn the critical data architecture skills they need to build robust and maintainable applications.

How Teams Learn to Hoard

Anthropic released MCP in late 2024 to give developers a universal way to connect AI assistants to their data. Instead of maintaining separate connector code to let AI access data from, say, S3, OneDrive, Jira, ServiceNow, and your internal databases and APIs, you use the same simple protocol to provide the AI with all sorts of data to include in its context. It quickly gained traction: Companies like Block and Apollo adopted it, and teams everywhere started using it. The promise is real; in many cases, the work of connecting data sources to AI agents that used to take weeks can now take minutes. But that speed can come at a cost.

Let’s start with an example: a small team working on an AI tool that reads customer support tickets, categorizes them by urgency, suggests responses, and routes them to the right department. They needed to get something working quickly but faced a challenge: They had customer data spread across multiple systems. After spending a morning arguing about what data to pull, which fields were necessary, and how to structure the integration, one developer decided to just build it, creating a single getCustomerData(customerId) MCP tool that pulls everything they’d discussed—40 fields from three different systems—into one big response object. To the team’s relief, it worked! The AI happily consumed all 40 fields and started answering questions, and no more discussions or decisions were needed. The AI handled all the new data just fine, and everyone felt like the project was on the right track.
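
To make the shape of that tool concrete, here’s a minimal sketch of what it might have looked like, written against the Python MCP SDK’s FastMCP helper. The systems, field names, and fetch helpers are hypothetical stand-ins, and the registration details may differ in your setup; the point is the pattern: one noun-based tool that merges everything from every system into a single blob.

    # A hoarding-style MCP tool: one noun-based call that pulls everything
    # from every system into a single response object. This is a sketch;
    # the fetch_* helpers below are hypothetical stand-ins for real
    # Zendesk, CRM, and billing integrations.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("customer-support")

    def fetch_zendesk_profile(customer_id: str) -> dict:
        # Stand-in for a Zendesk call: ticket status, full threads, one name field...
        return {"zendesk_status": "open", "name": "A. Customer", "threads": ["..."]}

    def fetch_crm_record(customer_id: str) -> dict:
        # Stand-in for a CRM call: another status, another name, another timestamp...
        return {"crm_status": "active", "full_name": "Alex Customer", "last_seen": "2025-11-02"}

    def fetch_billing_account(customer_id: str) -> dict:
        # Stand-in for billing: eligibility flags, order history, yet more timestamps...
        return {"refund_eligible": True, "orders": ["..."], "last_seen_at": "2025-11-30"}

    @mcp.tool()
    def get_customer_data(customer_id: str) -> dict:
        """Return everything we know about a customer, from every system."""
        # Contradictory statuses, duplicate names, and whole conversation threads
        # land in the AI's context on every call, whether the question needs them or not.
        return {
            **fetch_zendesk_profile(customer_id),
            **fetch_crm_record(customer_id),
            **fetch_billing_account(customer_id),
        }

Nothing about code like this is broken, which is exactly why it ships: the AI copes, and the cost only shows up later.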

Day two, someone added order history so the assistant could explain refunds. Soon the tool pulled Zendesk status, CRM status, eligibility flags that contradicted each other, three different name fields, four timestamps for “last seen,” plus entire conversation threads, and combined them all into an ever-growing data object.

The assistant kept producing reasonable-looking answers even as the data it ingested kept growing. But the model now had to wade through thousands of irrelevant tokens before answering simple questions like “Is this customer eligible for a refund?” The team had built a data architecture that buried the signal in noise, and the extra work the AI had to do to dig that signal out set them up for serious long-term problems. They didn’t realize it yet, because the answers still looked reasonable. As they added more data sources over the following weeks, the AI started taking longer to respond, and hallucinations crept in that they couldn’t trace back to any specific data source. What had been a really valuable tool became a bear to maintain.

The team had fallen into the data hoarding trap: Their early quick wins created a culture where people just threw whatever they needed into the context, and eventually it grew into a maintenance nightmare that only got worse as they added more data sources.

The Skills That Never Develop

There are as many opinions on data architecture as there are developers, and usually many ways to solve any one problem. One thing almost everyone agrees on is that good data architecture takes careful choices and lots of experience. It’s also the subject of constant debate, especially within teams, precisely because there are so many ways to design how your application stores, transmits, encodes, and uses data.

Most of us fall into just-in-case thinking at one time or another, especially early in our careers—pulling all the data we might possibly need just in case we need it rather than fetching only what we need when we actually need it (which is an example of the opposite, just-in-time thinking). Normally when we’re designing our data architecture, we’re dealing with immediate constraints: ease of access, size, indexing, performance, network latency, and memory usage. But when we use MCP to provide data to an AI, we can often sidestep many of those trade-offs…temporarily.

The more we work with data, the better we get at designing how our apps use it. The more early-career developers are exposed to it, the more they learn through experience why, for example, System A should own customer status while System B owns payment history. Healthy debate is an important part of this learning process. Through all of these experiences, we develop an intuition for what “too much data” looks like—and how to handle all of those tricky but critical trade-offs that create friction throughout our projects.

MCP can remove the friction that comes from those trade-offs by letting us avoid having to make those decisions at all. If a developer can wire up everything in just a few minutes, there’s no need for discussion or debate about what’s actually needed. The AI seems to handle whatever data you throw at it, so the code ships without anyone questioning the design.

Without all of that experience making, discussing, and debating data design choices, developers miss the chance to build critical mental models about data ownership, system boundaries, and the cost of moving unnecessary data around. They spend their formative years connecting instead of architecting. This is another example of what I call the cognitive shortcut paradox—AI tools that make development easier can prevent developers from building the very skills they need to use those tools effectively. Developers who rely solely on MCP to handle messy data never learn to recognize when data architecture is problematic, just like developers who rely solely on tools like Copilot or Claude Code to generate code never learn to debug what it creates.

The Hidden Costs of Data Hoarding

Teams use MCP because it works. Many teams carefully plan their MCP data architecture, and even teams that do fall into the data hoarding trap still ship successful products. But MCP is still relatively new, and the hidden costs of data hoarding take time to surface.

Teams often don’t discover the problems with a data hoarding approach until they need to scale their applications. That bloated context that barely registered as a cost for your first hundred queries starts showing up as a real line item in your cloud bill when you’re handling millions of requests. Every unnecessary field you’re passing to the AI adds up, and you’re paying for all that redundant data on every single AI call.
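
The arithmetic is easy to run yourself. Here’s a back-of-envelope sketch; the request volume and per-token price below are made-up numbers for illustration, not any provider’s actual pricing.

    # Back-of-envelope cost of hoarded context. Every number here is an
    # illustrative assumption, not real pricing or real traffic.
    wasted_tokens_per_call = 4_800          # tokens fetched but never referenced by the AI
    calls_per_month = 2_000_000             # assumed request volume once you've scaled
    price_per_million_input_tokens = 3.00   # assumed $ per million input tokens

    monthly_waste = (wasted_tokens_per_call * calls_per_month / 1_000_000
                     * price_per_million_input_tokens)
    print(f"~${monthly_waste:,.0f}/month spent on tokens the AI never uses")
    # With these assumptions: ~$28,800 per month, before counting the latency
    # and debugging costs of a bloated context.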

Any developer who’s dealt with tightly coupled classes knows that when something goes wrong—and it always does, eventually—it’s a lot harder to debug. You often end up dealing with shotgun surgery, that really unpleasant situation where fixing one small problem requires changes that cascade across multiple parts of your codebase. Hoarded data creates the same kind of technical debt in your AI systems: When the AI gives a wrong answer, tracking down which field it used or why it trusted one system over another is difficult, often impossible.

There’s also a security dimension to data hoarding that teams often miss. Every piece of data you expose through an MCP tool is a potential vulnerability. If an attacker finds an unprotected endpoint, they can pull everything that tool provides. If you’re hoarding data, that’s your entire customer database instead of just the three fields actually needed for the task. Teams that fall into the data hoarding trap find themselves violating the principle of least privilege: Applications should have access to the data they need, but no more. That can bring an enormous security risk to their whole organization.

In an extreme case of data hoarding infecting an entire company, you might discover that every team in your organization is building their own blob. Support has one version of customer data, sales has another, product has a third. The same customer looks completely different depending on which AI assistant you ask. New teams come along, see what appears to be working, and copy the pattern. Now you’ve got data hoarding as organizational culture.

Each team thought they were being pragmatic, shipping fast, and avoiding unnecessary arguments about data architecture. But the hoarding pattern spreads through an organization the same way technical debt spreads through a codebase. It starts small and manageable. Before you know it, it’s everywhere.

Practical Tools for Avoiding the Data Hoarding Trap

It can be really difficult to coach a team away from data hoarding when they’ve never experienced the problems it causes. Developers are very practical—they want to see evidence of problems and aren’t going to sit through abstract discussions about data ownership and system boundaries when everything they’ve done so far has worked just fine.

In Learning Agile, Jennifer Greene and I wrote about how teams resist change because they know that what they’re doing today works. To the person trying to get developers to change, it may seem like irrational resistance, but it’s actually pretty rational to push back against someone from the outside telling them to throw out what works today for something unproven. But just like developers eventually learn that taking time for refactoring speeds them up in the long run, teams need to learn the same lesson about deliberate data design in their MCP tools.

Here are some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:

  • Build tools around verbs, not nouns. Create checkEligibility() or getRecentTickets() instead of getCustomer(). Verbs force you to think about specific actions and naturally limit scope. (See the sketch after this list.)
  • Talk about minimizing data needs. Before anyone creates an MCP tool, have a discussion about the smallest set of data the AI needs to do its job and what experiments the team can run to figure out what the AI truly needs.
  • Break reads apart from reasoning. Separate data fetching from decision-making when you design your MCP tools. A simple findCustomerId() tool that returns just an ID uses minimal tokens—and might not even need to be an MCP tool at all, if a simple API call will do. Then getCustomerDetailsForRefund(id) pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone’s trying to fetch everything.
  • Dashboard the waste. The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used and display them in an “information radiator” style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they’re paying for tokens they never use, they get very interested in fixing it.
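
Here’s a minimal sketch of what the verb-based, reads-apart-from-reasoning style can look like, again written against the Python MCP SDK’s FastMCP helper. The lookup stubs are hypothetical, and the token estimate is a rough heuristic rather than a real tokenizer; in a real system you’d log your provider’s actual token counts to the dashboard described above.

    # Verb-scoped MCP tools plus a crude fetched-token log. A sketch:
    # the lookup_* stubs stand in for real queries, and estimate_tokens()
    # is a rough heuristic, not a real tokenizer.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("customer-support")

    def lookup_customer_id(email: str) -> str:
        # Stand-in for a real identity lookup.
        return "cust-1234"

    def lookup_refund_fields(customer_id: str) -> dict:
        # Stand-in for a query that returns only refund-relevant fields.
        return {"refund_eligible": True, "last_order_date": "2025-11-02", "order_total": 49.95}

    def estimate_tokens(payload: object) -> int:
        # Very rough: assume ~4 characters per token. Good enough to spot hoarding.
        return len(str(payload)) // 4

    def log_fetched_tokens(tool_name: str, payload: object) -> None:
        # In a real system this would feed the "information radiator" dashboard,
        # alongside how many of those tokens the AI actually referenced.
        print(f"[context-budget] {tool_name}: ~{estimate_tokens(payload)} tokens fetched")

    @mcp.tool()
    def find_customer_id(email: str) -> str:
        """Resolve an email address to a customer ID, and nothing else."""
        customer_id = lookup_customer_id(email)
        log_fetched_tokens("find_customer_id", customer_id)
        return customer_id

    @mcp.tool()
    def check_refund_eligibility(customer_id: str) -> dict:
        """Return only the fields the refund decision actually needs."""
        fields = lookup_refund_fields(customer_id)
        log_fetched_tokens("check_refund_eligibility", fields)
        return fields

Each tool answers one question, so when the AI gets a refund decision wrong, there’s exactly one place to look.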

Quick smell test for data hoarding

  • Tool names are nouns (getCustomer()) instead of verbs (checkEligibility()).
  • Nobody’s ever asked, “Do we really need all these fields?”
  • You can’t tell which system owns which piece of data.
  • Debugging requires detective work across multiple data sources.
  • Your team rarely or never discusses the data design of MCP tools before building them.

Looking Forward

MCP is a simple but powerful tool with enormous potential for teams. But because it can be a critically important pillar of your entire application architecture, problems you introduce at the MCP level ripple throughout your project. Small mistakes have huge consequences down the road.

The very simplicity of MCP encourages data hoarding. It’s an easy trap to fall into, even for experienced developers. But what worries me most is that developers learning with these tools right now might never learn why data hoarding is a problem, and they won’t develop the architectural judgment that comes from having to make hard choices about data boundaries. Our job, especially as leaders and senior engineers, is to help everyone avoid the data hoarding trap.

When you treat MCP decisions with the same care you give any core interface—keeping context lean, setting boundaries, revisiting them as you learn—MCP stays what it should be: a simple, reliable bridge between your AI and the systems that power it.

AI Is Reshaping Developer Career Paths

22 October 2025 at 07:14

This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the original framework introduction and explore the complete methodology in Andrew Stellman’s O’Reilly report Critical Thinking Habits for Coding with AI.

A few decades ago, I worked with a developer who was respected by everyone on our team. Much of that respect came from the fact that he kept adopting new technologies that none of us had worked with. There was a cutting-edge language at the time that few people were using, and he built an entire feature with it. He quickly became known as the person you’d go to for these niche technologies, and that reputation carried real weight with the rest of the team.

Years later, I worked with another developer who went out of his way to incorporate specific, obscure .NET libraries into his code. That too got him recognition from our team members and managers, and he was viewed as a senior developer in part because of his expertise with these specialized tools.

Both developers built their reputations on deep knowledge of specific technologies. It was a reliable career strategy that worked for decades: Become the expert in something valuable but not widely known, and you’d have authority on your team and an edge in job interviews.

But AI is changing that dynamic in ways we’re just starting to see.

In the past, experienced developers could build deep expertise in a single technology (Rails or React, for example), and that expertise would consistently get them recognition on their team and help them stand out in reviews and job interviews. It used to take months or years of working with a specific framework before a developer could write idiomatic code—code that follows the accepted patterns and best practices of that technology.

But now AI models are trained on countless examples of idiomatic code, so developers without that experience can generate similar code immediately. That puts less of a premium on the time spent developing that deep expertise.

The Shift Toward Generalist Skills

That change is reshaping career paths in ways we’re just starting to see. The traditional approach worked for decades, but as AI fills in more of that specialized knowledge, the career advantage is shifting toward people who can integrate across systems and spot design problems early.

As I’ve trained developers and teams who are increasingly adopting AI coding tools, I’ve noticed that the developers who adapt best aren’t always the ones with the deepest expertise in a specific framework. Rather, they’re the ones who can spot when something looks wrong, integrate across different systems, and recognize patterns. Most importantly, they can apply those skills even when they’re not deep experts in the particular technology they’re working with.

This represents a shift from the more traditional dynamic on teams, where being an expert in a specific technology (like being the “Rails person” or the “React expert” on the team) carried real authority. AI now fills in much of that specialized knowledge. You can still build a career on deep Rails knowledge, but thanks to AI, it doesn’t always carry the same authority on a team that it once did.

What AI Still Can’t Do

Both new and experienced developers routinely accumulate technical debt, especially when deadlines push delivery over maintainability. This is an area where experienced engineers often distinguish themselves, even on teams with wide AI adoption. The key difference is that an experienced developer often knows they’re taking on debt: They can spot antipatterns early because they’ve seen them repeatedly, and they take steps to “pay off” the debt before it gets much more expensive to fix.

But AI is also changing the game for experienced developers in ways that go beyond technical debt management, and it’s starting to reshape their traditional career paths. What AI still can’t do is tell you when a design or architecture decision today will cause problems six months from now, or when you’re writing code that doesn’t actually solve the user’s problem. That’s why being a generalist, with skills in architecture, design patterns, requirements analysis, and even project management, is becoming more valuable on software teams.

Many developers I see thriving with AI tools are the ones who can:

  • Recognize when generated code will create maintenance problems even if it works initially
  • Integrate across multiple systems without being deep experts in each one
  • Spot architectural patterns and antipatterns regardless of the specific technology
  • Frame problems clearly so AI can generate more useful solutions
  • Question and refine AI output rather than accepting it as is

Practical Implications for Your Career

This shift has real implications for how developers think about career development:

For experienced developers: Your years of expertise are still important and valuable, but the career advantage is shifting from “I know this specific tool really well” to “I can solve complex problems across different technologies.” Focus on building skills in system design, integration, and pattern recognition that apply broadly.

For early-career developers: The temptation might be to rely on AI to fill knowledge gaps, but this can be dangerous. Those broader skills—architecture, design judgment, problem-solving across domains—typically require years of hands-on experience to develop. Use AI as a tool, but make sure you’re still building the fundamental thinking skills that let you guide it effectively.

For teams: Look for people who can adapt to new technologies quickly and integrate across systems, not just deep specialists. The “Rails person” might still be valuable, but the person who can work with Rails, integrate it with three other systems, and spot when the architecture is heading for trouble six months down the line is becoming more valuable.

The developers who succeed in an AI-enabled world won’t always be the ones who know the most about any single technology. They’ll be the ones who can see the bigger picture, integrate across systems, and use AI as a powerful tool while maintaining the critical thinking necessary to guide it toward genuinely useful solutions.

AI isn’t replacing developers. It’s changing what kinds of developer skills matter most.

From Habits to Tools

15 October 2025 at 08:49

This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the original framework introduction and explore the complete methodology in Andrew Stellman’s O’Reilly report Critical Thinking Habits for Coding with AI.

AI-assisted coding is here to stay. I’ve seen many companies now require all developers to install Copilot extensions in their IDEs, and teams are increasingly being measured on AI-adoption metrics. Meanwhile, the tools themselves have become genuinely useful for routine tasks: Developers regularly use them to generate boilerplate, convert between formats, write unit tests, and explore unfamiliar APIs—giving us more time to focus on solving our real problems instead of wrestling with syntax or going down research rabbit holes.

Many team leads, managers, and instructors looking to help developers ramp up on AI tools assume the biggest challenge is learning to write better prompts or picking the right AI tool; that assumption misses the point. The real challenge is figuring out how developers can use these tools in ways that keep them engaged and strengthen their skills instead of becoming disconnected from the code and letting their development skills atrophy.

This was the challenge I took on when I developed the Sens-AI Framework. When I was updating Head First C# (O’Reilly 2024) to help readers ramp up on AI skills alongside other fundamental development skills, I watched new learners struggle not with the mechanics of prompting but with maintaining their understanding of the code they were producing. The framework emerged from those observations—five habits that keep developers engaged in the design conversation: context, research, framing, refining, and critical thinking. These habits address the real issue: making sure the developer stays in control of the work, understanding not just what the code does but why it’s structured that way.

What We’ve Learned So Far

When I updated Head First C# to include AI exercises, I had to design them knowing learners would paste instructions directly into AI tools. That forced me to be deliberate: The instructions had to guide the learner while also shaping how the AI responded. Testing those same exercises against Copilot and ChatGPT showed the same kinds of problems over and over—AI filling in gaps with the wrong assumptions or producing code that looked fine until you actually had to run it, read and understand it, or modify and extend it.

Those issues don’t only trip up new learners. More experienced developers can fall for them too. The difference is that experienced developers already have habits for catching themselves, while newer developers usually don’t—unless we make a point of teaching them. AI skills aren’t exclusive to senior or experienced developers either; I’ve seen relatively new developers pick up AI skills quickly because they built these habits early on.

Habits Across the Lifecycle

In “The Sens-AI Framework,” I introduced the five habits and explained how they work together to keep developers engaged with their code rather than becoming passive consumers of AI output. These habits also address specific failure modes, and understanding how they solve real problems points the way toward broader implementation across teams and tools:

Context helps avoid vague prompts that lead to poor output. Ask an AI to “make this code better” without sharing what the code does, and it might suggest adding comments to a performance-critical section where comments would just clutter. But provide the context—“This is a high-frequency trading system where microseconds matter,” along with the actual code structure, dependencies, and constraints—and the AI understands it should focus on optimizations, not documentation.

Research makes sure the AI isn’t your only source of truth. When you rely solely on AI, you risk compounding errors—the AI makes an assumption, you build on it, and soon you’re deep in a solution that doesn’t match reality. Cross-checking with documentation or even asking a different AI can reveal when you’re being led astray.

Framing is about asking questions that set up useful answers. “How do I handle errors?” gets you a try-catch block. “How do I handle network timeout errors in a distributed system where partial failures need rollback?” gets you circuit breakers and compensation patterns. As I showed in “Understanding the Rehash Loop,” proper framing can break the AI out of circular suggestions.

Refining means not settling for the first thing the AI gives you. The first response is rarely the best—it’s just the AI’s initial attempt. When you iterate, you’re steering toward better patterns. Refining moves you from “This works” to “This is actually good.”

Critical thinking ties it all together, asking whether the code actually works for your project. It’s debugging the AI’s assumptions, reviewing for maintainability, and asking, “Will this make sense six months from now?”

The real power of the Sens-AI Framework comes from using all five habits together. They form a reinforcing loop: Context informs research, research improves framing, framing guides refinement, refinement reveals what needs critical thinking, and critical thinking shows you what context you were missing. When developers use these habits in combination, they stay engaged with the design and engineering process rather than becoming passive consumers of AI output. It’s the difference between using AI as a crutch and using it as a genuine collaborator.

Where We Go from Here

If developers are going to succeed with AI, these habits need to show up beyond individual workflows. They need to become part of:

Education: Teaching AI literacy alongside basic coding skills. As I described in “The AI Teaching Toolkit,” techniques like having learners debug intentionally flawed AI output help them spot when the AI is confidently wrong and practice breaking out of rehash loops. These aren’t advanced skills; they’re foundational.

Team practice: Using code reviews, pairing, and retrospectives to evaluate AI output the same way we evaluate human-written code. In my teaching article, I described techniques like AI archaeology and shared language patterns. What matters here is making those kinds of habits part of standard training—so teams develop vocabulary like “I’m stuck in a rehash loop” or “The AI keeps defaulting to the old pattern.” And as I explored in “Trust but Verify,” treating AI-generated code with the same scrutiny as human code is essential for maintaining quality.

Tooling: IDEs and linters that don’t just generate code but highlight assumptions and surface design trade-offs. Imagine your IDE warning: “Possible rehash loop detected: you’ve been iterating on this same approach for 15 minutes.” That’s one direction IDEs need to evolve—surfacing assumptions and warning when you’re stuck. The technical debt risks I outlined in “Building AI-Resistant Technical Debt” could be mitigated with better tooling that catches antipatterns early.

Culture: A shared understanding that AI is a collaboration tool (and not a teammate). A team’s measure of success for code shouldn’t revolve around AI. Teams still need to understand that code, keep it maintainable, and grow their own skills along the way. Getting there will require changes in how they work together—for example, adding AI-specific checks to code reviews or developing shared vocabulary for when AI output starts drifting. This cultural shift connects to the requirements engineering parallels I explored in “Prompt Engineering Is Requirements Engineering”—we need the same clarity and shared understanding with AI that we’ve always needed with human teams.

More convincing output will require more sophisticated evaluation. Models will keep getting faster and more capable. What won’t change is the need for developers to think critically about the code in front of them.

The Sens-AI habits work alongside today’s tools and are designed to stay relevant to tomorrow’s tools as well. They’re practices that keep developers in control, even as models improve and the output gets harder to question. The framework gives teams a way to talk about both the successes and the failures they see when using AI. From there, it’s up to instructors, tool builders, and team leads to decide how to put those lessons into practice.

The next generation of developers will never know coding without AI. Our job is to make sure they build lasting engineering habits alongside these tools—so AI strengthens their craft rather than hollowing it out.

The AI Teaching Toolkit: Practical Guidance for Teams

8 October 2025 at 07:12

This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the original framework introduction and explore the complete methodology in Andrew Stellman’s O’Reilly report Critical Thinking Habits for Coding with AI.

Teaching developers to work effectively with AI means building habits that keep critical thinking active while leveraging AI’s speed.

But teaching these habits isn’t straightforward. Instructors and team leads often find themselves needing to guide developers through challenges in ways that build confidence rather than short-circuit their growth. (See “The Cognitive Shortcut Paradox.”) There are the regular challenges of working with AI:

  • Suggestions that look correct while hiding subtle flaws
  • Less experienced developers accepting output without questioning it
  • AI producing patterns that don’t match the team’s standards
  • Code that works but creates long-term maintainability headaches

The Sens-AI Framework (see “The Sens-AI Framework: Teaching Developers to Think with AI”) was built to address these problems. It focuses on five habits—context, research, framing, refining, and critical thinking—that help developers use AI effectively while keeping learning and design judgment in the loop.

This toolkit builds on and reinforces those habits by giving you concrete ways to integrate them into team practices. It’s designed to give you concrete ways to build these habits in your team, whether you’re running a workshop, leading code reviews, or mentoring individual developers. The techniques that follow include practical teaching strategies, common pitfalls to avoid, reflective questions to deepen learning, and positive signs that show the habits are sticking.

Advice for Instructors and Team Leads

The strategies in this toolkit can be used in classrooms, review meetings, design discussions, or one-on-one mentoring. They’re meant to help new learners, experienced developers, and teams have more open conversations about design decisions, context, and the quality of AI suggestions. The focus is on making review and questioning feel like a normal, expected part of everyday development.

Discuss assumptions and context explicitly. In code reviews or mentoring sessions, ask developers to talk about times when the AI gave them poor or unexpected results. Also try asking them to explain what they think the AI might have needed to know to produce a better answer, and where it might have filled in gaps incorrectly. Getting developers to articulate those assumptions helps spot weak points in design before they’re cemented into the code. (See “Prompt Engineering Is Requirements Engineering.”)

Encourage pairing or small-group prompt reviews: Make AI-assisted development collaborative, not siloed. Have developers on a team or students in a class share their prompts with each other, and talk through why they wrote them a certain way, just like they’d talk through design decisions in pair or mob programming. This helps less experienced developers see how others approach framing and refining prompts.

Encourage researching idiomatic use of code. One thing that often holds back intermediate developers is not knowing the idioms of a specific framework or language. AI can help here—if they ask for the idiomatic way to do something, they see not just the syntax but also the patterns experienced developers rely on. That shortcut can speed up their understanding and make them more confident when working with new technologies.

Here are two examples of how using AI to research idioms can help developers quickly adapt:

  • A developer with deep experience writing microservices but little exposure to Spring Boot can use AI to see the idiomatic way to annotate a class with @RestController and @RequestMapping. They might also learn that Spring Boot favors constructor injection over field injection with @Autowired, or that @GetMapping("/users") is preferred over @RequestMapping(method = RequestMethod.GET, value = "/users").
  • A Java developer new to Scala might reach for null instead of Scala’s Option types—missing a core part of the language’s design. Asking the AI for the idiomatic approach surfaces not just the syntax but the philosophy behind it, guiding developers toward safer and more natural patterns.

Help developers recognize rehash loops as meaningful signals. When the AI keeps circling the same broken idea, even developers who have experienced this many times may not realize they’re caught in a rehash loop. Teach them to recognize the loop as a signal that the AI has exhausted its context, and that it’s time to step back. That pause can lead to research, reframing the problem, or providing new information. For example, you might stop and say: “Notice how it’s circling the same idea? That’s our signal to break out.” Then demonstrate how to reset: open a new session, consult documentation, or try a narrower prompt. (See “Understanding the Rehash Loop.”)

Research beyond AI. Help developers learn that when they hit a wall, they don’t need to keep tweaking prompts endlessly. Model the habit of branching out: check official documentation, search Stack Overflow, or review similar patterns in your existing codebase. AI should be one tool among many. Showing developers how to diversify their research keeps them from looping and builds stronger problem-solving instincts.

Use failed projects as test cases. Bring in previous projects that ran into trouble with AI-generated code and revisit them with Sens-AI habits. Review what went right and wrong, talk about where it might have helped to break out of the vibe coding loop to do additional research, reframe the problem, and apply critical thinking. Work with the team to write down lessons you learned from the discussion. Holding a retrospective exercise like this lowers the stakes—developers are free to experiment and critique without slowing down current work. It’s also a powerful way to show how reframing, refining, and verifying could have prevented past issues. (See “Building AI-Resistant Technical Debt.”)

Make refactoring part of the exercise. Help developers avoid the habit of deciding the code is finished when it runs and seems to work. Have them work with the AI to clean up variable names, reduce duplication, simplify overly complex logic, apply design patterns, and find other ways to prevent technical debt. By making evaluation and improvement explicit, you can help developers build the muscle memory that prevents passive acceptance of AI output. (See “Trust but Verify.”)

Common Pitfalls to Address with Teams

Even with good intentions, teams often fall into predictable traps. Watch for these patterns and address them explicitly, because otherwise they can slow progress and mask real learning.

The completionist trap: Trying to read every line of AI output even when you’re about to regenerate it. Teach developers it’s okay to skim, spot problems, and regenerate early. This helps them avoid wasting time carefully reviewing code they’ll never use, and reduces the risk of cognitive overload. The key is to balance thoroughness with pragmatism—they can start to learn when detail matters and when speed matters more.

The perfection loop: Endless tweaking of prompts for marginal improvements. Try setting a limit on iteration—for example, if refining a prompt doesn’t get good results after three or four attempts, it’s time to step back and rethink. Developers need to learn that diminishing returns are a sign to change strategy, not to keep grinding, so energy that should go toward solving the problem doesn’t get lost in chasing minor refinements.

Context dumping: Pasting entire codebases into prompts. Teach scoping—What’s the minimum context needed for this specific problem? Help them anticipate what the AI needs, and provide the minimal context required to solve each problem. Context dumping can be especially problematic with limited context windows, where the AI literally can’t see all the code you’ve pasted, leading to incomplete or contradictory suggestions. Teaching developers to be intentional about scope prevents confusion and makes AI output more reliable.

Skipping the fundamentals: Using AI for extensive code generation before understanding basic software development concepts and patterns. Ensure learners can solve simple development problems on their own (without the help of AI) before accelerating with AI on more complex ones. This helps reduce the risk of developers building a shallow knowledge platform that collapses under pressure. Fundamentals are what allow them to evaluate AI’s output critically rather than blindly trusting it.

AI Archaeology: A Practical Team Exercise for Better Judgment

Have your team do an AI archaeology exercise. Take a piece of AI-generated code from the previous week and analyze it together. More complex or nontrivial code samples work especially well because they tend to surface more assumptions and patterns worth discussing.

Have each team member independently write down their own answers to these questions:

  • What assumptions did the AI make?
  • What patterns did it use?
  • Did it make the right decision for our codebase?
  • How would you refactor or simplify this code if you had to maintain it long-term?

Once everyone has had time to write, bring the group back together—either in a room or virtually—and compare answers. Look for points of agreement and disagreement. When different developers spot different issues, that contrast can spark discussion about standards, best practices, and hidden dependencies. Encourage the group to debate respectfully, with an emphasis on surfacing reasoning rather than just labeling answers as right or wrong.

This exercise makes developers slow down and compare perspectives, which helps surface hidden assumptions and coding habits. By putting everyone’s observations side by side, the team builds a shared sense of what good AI-assisted code looks like.

For example, the team might discover the AI consistently uses older patterns your team has moved away from or that it defaults to verbose solutions when simpler ones exist. Discoveries like that become teaching moments about your team’s standards and help calibrate everyone’s “code smell” detection for AI output. The retrospective format makes the whole exercise more friendly and less intimidating than real-time critique, which helps to strengthen everyone’s judgment over time.

Signs of Success

Balancing pitfalls with positive indicators helps teams see what good AI practice looks like. When these habits take hold, you’ll notice developers:

Reviewing AI code with the same rigor as human-written code—but only when appropriate. When developers stop saying “the AI wrote it, so it must be fine” and start giving AI code the same scrutiny they’d give a teammate’s pull request, it demonstrates that the habits are sticking.

Exploring multiple approaches instead of accepting the first answer. Developers who use AI effectively don’t settle for the initial response. They ask the AI to generate alternatives, compare them, and use that exploration to deepen their understanding of the problem.

Recognizing rehash loops without frustration. Instead of endlessly tweaking prompts, developers treat rehash loops as signals to pause and rethink. This shows they’re learning to manage AI’s limitations rather than fight against them.

Sharing “AI gotchas” with teammates. Developers start saying things like “I noticed Copilot always tries this approach, but here’s why it doesn’t work in our codebase.” These small observations become collective knowledge that helps the whole team work together and with AI more effectively.

Asking “Why did the AI choose this pattern?” instead of just asking “Does it work?” This subtle shift shows developers are moving beyond surface correctness to reasoning about design. It’s a clear sign that critical thinking is active.

Bringing fundamentals into AI conversations: Developers who are working positively with AI tools tend to relate AI output back to core principles like readability, separation of concerns, or testability. This shows they’re not letting AI bypass their grounding in software engineering.

Treating AI failures as learning opportunities: When something goes wrong, instead of blaming the AI or themselves, developers dig into why. Was it context? Framing? A fundamental limitation? This investigative mindset turns problems into teachable moments.

Reflective Questions for Teams

Encourage developers to ask themselves these reflective questions periodically. They slow the process just enough to surface assumptions and spark discussion. You might use them in training, pairing sessions, or code reviews to prompt developers to explain their reasoning. The goal is to keep the design conversation active, even when the AI seems to offer quick answers.

  • What does the AI need to know to do this well? (Ask this before writing any prompt.)
  • What context or requirements might be missing here? (Helps catch gaps early.)
  • Do you need to pause here and do some research? (Promotes branching out beyond AI.)
  • How might you reframe this problem more clearly for the AI? (Encourages clarity in prompts.)
  • What assumptions are you making about this AI output? (Surfaces hidden design risks.)
  • If you’re getting frustrated, is that a signal to step back and rethink? (Normalizes stepping away.)
  • Would it help to switch from reading code to writing tests to check behavior? (Shifts the lens to validation.)
  • Do these unit tests reveal any design issues or hidden dependencies? (Connects testing with design insight.)
  • Have you tried starting a new chat session or using a different AI tool for this research? (Models flexibility with tools.)

The goal of this toolkit is to help developers build the kind of judgment that keeps them confident with AI while still growing their core skills. When teams learn to pause, review, and refactor AI-generated code, they move quickly without losing sight of design clarity or long-term maintainability. These teaching strategies give developers the habits to stay in control of the process, learn more deeply from the work, and treat AI as a true collaborator in building better software. As AI tools evolve, these fundamental habits—questioning, verifying, and maintaining design judgment—will remain the difference between teams that use AI well and those that get used by it.
