The rush to automate everything with AI isn’t transformation. It’s just faster mediocrity. Here’s what the QE + AI corner of the triangle actually demands from quality leaders.
This is the third post in the Intelligent Quality Leadership series. It explores the QE + AI corner of the triangle, Cognitive Automation. If you’re new to the series, start with the original post introducing the model, then read the AI + Leadership deep dive on governance and guardrails.
Something is happening in quality engineering teams right now, and I want to name it plainly before we go any further.
Teams are shipping AI-generated test suites they don’t fully understand. They’re celebrating coverage metrics that tell them very little about actual risk. They’re pointing at cycle time improvements and calling it intelligence. And they’re doing all of this at speed, because speed is the thing being rewarded. The question nobody is asking loudly enough is: what, exactly, are we automating?
This is the question at the heart of Cognitive Automation, the corner of the Intelligent Quality Leadership triangle that sits at the intersection of QE and AI. It’s the corner I find most exciting, and also the one I think the industry is getting most dangerously wrong.
Let’s Start With an Uncomfortable Truth
Most of what we’re calling “AI-powered testing” today is not cognitive. It’s mechanical automation wearing a smarter mask. We’ve replaced brittle scripts with AI-generated brittle scripts. We’ve accelerated the production of tests that still don’t ask the right questions. We’ve given teams a tool that can write code faster than they can think, and then rewarded the speed rather than the thinking.
AI that amplifies a lack of craft produces low-quality output at remarkable velocity. That is not a competitive advantage. That is a liability that hasn’t surfaced yet.
I’ve seen both ends of this across my career: environments where automation discipline was treated as a genuine craft, where the question was always “what should we automate, and why?”, and environments where the goal was coverage at all costs. The second type didn’t improve when better tools arrived. They just got faster at accumulating technical debt they couldn’t see.
AI doesn’t change that dynamic. It accelerates it.
What Cognitive Automation Actually Means
The word “cognitive” is doing a lot of work in this model, and it’s worth being precise about what I mean by it.
Cognitive Automation is not about AI doing the testing. It’s about AI augmenting the quality of your thinking, the decisions you make about risk, coverage, strategy, and signal. It’s the difference between AI as a production tool and AI as a thinking partner. Between a tool that writes tests and a capability that helps you understand what to test, why, and what the output is actually telling you.
In practice, this manifests in a few ways that represent genuinely new territory for quality leaders:
Risk reasoning at scale
AI can now help surface patterns of risk that would take a human analyst days to identify, correlating change frequency, historical defect density, code complexity, and coverage gaps into a prioritisation signal that teams can act on. That’s not AI doing QE. That’s AI giving QE leaders sharper information to make better calls with.
Exploratory thought partners
Some of the most valuable things I’ve done with AI in a quality context have been conversations, not code generation. Challenging a test strategy. Pressure-testing assumptions about where the real failure modes are. Asking an AI to argue against my coverage decisions and seeing what it surfaces. That kind of adversarial, generative dialogue is genuinely useful, and it’s not something that fits neatly into a test framework.
Adaptive regression intelligence
Most teams have already accepted that “run the full regression suite every time” is a blunt instrument: expensive and slow. AI-informed test selection, understanding which tests are most likely to be relevant given a specific change, is one of the areas where the value proposition is clearest and the risks are most manageable. But only if you understand the model well enough to know when to override it.
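To make the prioritisation signal concrete, here is a minimal sketch of the idea. Everything in it is illustrative: the signal names, the weights, and the `prioritise_tests` helper are hypothetical, not the API of any real tool, and a genuine model would be tuned against your own defect history rather than hand-picked numbers.

```python
from dataclasses import dataclass

@dataclass
class ModuleSignal:
    # All inputs assumed normalised to the 0..1 range before scoring.
    change_frequency: float   # how often this module changes
    defect_density: float     # historical defects relative to module size
    complexity: float         # e.g. normalised cyclomatic complexity
    coverage_gap: float       # 1.0 minus the covered fraction

# Hypothetical weights; a real signal would be calibrated, not guessed.
WEIGHTS = {"change_frequency": 0.3, "defect_density": 0.3,
           "complexity": 0.2, "coverage_gap": 0.2}

def risk_score(m: ModuleSignal) -> float:
    """Blend the four signals into a single 0..1 prioritisation score."""
    return (WEIGHTS["change_frequency"] * m.change_frequency
            + WEIGHTS["defect_density"] * m.defect_density
            + WEIGHTS["complexity"] * m.complexity
            + WEIGHTS["coverage_gap"] * m.coverage_gap)

def prioritise_tests(tests: dict[str, list[str]],
                     signals: dict[str, ModuleSignal]) -> list[str]:
    """Order tests by the riskiest module each one touches, highest first."""
    def test_risk(name: str) -> float:
        return max((risk_score(signals[mod]) for mod in tests[name]
                    if mod in signals), default=0.0)
    return sorted(tests, key=test_risk, reverse=True)

# Toy data: a volatile, poorly covered billing module vs a stable profile module.
signals = {
    "billing": ModuleSignal(0.9, 0.8, 0.7, 0.6),
    "profile": ModuleSignal(0.2, 0.1, 0.3, 0.2),
}
tests = {
    "test_invoice_rounding": ["billing"],
    "test_avatar_upload": ["profile"],
    "test_checkout_flow": ["billing", "profile"],
}
print(prioritise_tests(tests, signals))
```

The point of a sketch like this isn’t the arithmetic, which is trivial. It’s that every ranking it produces is explainable: you can say exactly why a test ran first, which is precisely the property the callout below argues most AI-generated suites lack. This is also where the human override belongs, because a leader who can read the weights can argue with them.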
⚠️ The pattern I keep seeing
Teams adopt AI test generation, see output volume increase dramatically, interpret volume as value, reduce headcount, and then discover six months later that their test suite is confidence theatre: tests that pass reliably and catch very little.
The failure mode isn’t the AI. It’s the absence of a leader with enough craft to know the difference between a test suite that’s large and a test suite that’s good.
The Uncritical Adoption Problem
I want to sit with this for a moment, because I think it’s where the real risk lives for our profession right now.
The pressure on quality leaders to demonstrate AI adoption is real. It comes from above: boards and exec teams who’ve been told that AI will transform engineering productivity. It comes from peers: engineers who are already using AI tools and wonder why QE feels like it’s lagging. And it comes from within: a genuine desire not to be left behind.
What that pressure produces, when it isn’t met with discipline, is adoption without strategy. Tools before thinking. Implementation before understanding.
I’ve been there. There’s a particular temptation when you first see a capable AI tool generate a full test suite for a complex feature in under a minute. It feels like magic. The instinct is to show it to people. To scale it. To declare the problem solved.
But the problem isn’t test generation speed. The problem was never test generation speed. The problem is, and has always been, knowing what to build confidence in, and building the right kind of confidence. AI doesn’t solve that problem. You solve that problem. AI can help you do it faster and at greater scale, but only if you bring the judgment to direct it.
The most dangerous QE leader in 2026 isn’t the one who doesn’t use AI. It’s the one who uses AI without knowing enough to question what it produces.
Four Patterns That Distinguish Cognitive Automation
If you’re building a genuine Cognitive Automation capability rather than just AI-assisted test generation, these four patterns are what you’re aiming for:
01 — AI in the strategy layer, not just the execution layer
You’re using AI to inform decisions about what to test, not just to produce tests. Coverage strategy, risk modelling, prioritisation: these are where AI earns its place.
02 — Human judgment remains in the loop on output quality
Someone with enough craft to critique AI-generated tests is reviewing them. Not rubber-stamping them. Actually interrogating whether they’re asking the right questions.
03 — You can explain what your AI tooling is optimising for
If you can’t articulate what objective your AI-powered tools are working toward and what their known failure modes are, you don’t have a strategy. You have a tool running unsupervised.
04 — The craft is being preserved and developed, not deprecated
Your team is getting better at QE because of AI, not more dependent on it. Cognitive Automation raises the ceiling. It shouldn’t quietly lower the floor.
What This Requires of You as a Leader
This is where I want to be honest about the demand this places on quality leaders, including myself.
Cognitive Automation requires you to be technically credible enough to understand what AI tools are actually doing. Not at the level of a machine learning engineer, but enough to ask good questions, spot the failure modes, and know when the output deserves trust and when it warrants scrutiny. That’s a different kind of literacy than knowing which tools to buy.
It also requires you to hold the line on craft at exactly the moment when there’s pressure to let it slip. When the business is asking “can we reduce the testing headcount now that we have AI?”, the right answer almost never comes from the tool. It comes from a leader who understands the role that human judgment plays in the quality system, and can make that case clearly and credibly.
And it requires you to be intellectually honest about the difference between what AI is making faster and what it’s actually making better. Those are not always the same thing, and conflating them is one of the easiest ways to accidentally dismantle a quality culture you’ve spent years building.
💡 The Cognitive Automation Principle
AI should make your quality engineering more human, not less. It should free up the time and cognitive space that your people were spending on mechanical, low-judgment work and redirect that toward the things that genuinely require expertise: risk reasoning, exploratory thinking, system understanding, difficult conversations about quality with product and engineering leadership.
If your AI adoption is producing the opposite, if it’s removing the need for judgment rather than elevating where judgment is applied, then you are not doing Cognitive Automation. You are doing something else, and you should be worried about where it leads.
Three Questions to Ask Yourself Right Now
01. Can your team articulate why a specific AI-generated test exists? Not just what it tests, but what risk it’s addressing, what decision it supports, and what would be missed if it didn’t exist? If the answer is “we’re not sure, the AI generated it,” you have a coverage illusion, not a coverage strategy.
02. Where is human judgment currently applied in your AI-augmented pipeline, and is that the right place? Most teams apply oversight at the start (choosing the tool) and the end (reading the report). The most valuable place for judgment is usually in the middle, shaping what the AI is asked to do, and interrogating what it produces before it becomes signal.
03. Is AI making your team smarter or more comfortable? These feel similar but they’re not. Comfort comes from less friction in the existing workflow. Intelligence comes from seeing things you couldn’t see before, asking questions you weren’t asking, and having better conversations with engineering and product about risk. Which one is actually happening?
This Is the Work That Matters
I genuinely believe that Cognitive Automation represents one of the most significant opportunities quality engineering has had in a generation. Done well, it changes the conversation we have with the rest of the business. It moves us from “how many tests did we run?” to “what do we understand about the risk profile of this release that we didn’t understand yesterday?” That is a fundamentally different and more valuable question, and AI, used with discipline and craft, is what makes it answerable at speed.
But the opportunity is only available to the leaders who bring enough of themselves to the table. Who use AI as a thinking partner rather than a production tool. Who hold the line on craft while embracing the capability. Who are honest about what they don’t yet understand and committed to understanding it.
AI should make you more you: sharper, more strategic, more capable of the work that matters. If it’s making you less present, less curious, or less critical, something has gone wrong.
That’s worth pausing on.
I’d like to hear where you are on this. Is your team in the early stages of working out what Cognitive Automation actually means in practice? Have you seen the uncritical adoption pattern play out, and what did it cost? Or are you further along, and starting to see what genuinely augmented quality engineering looks like? The conversation in the community around this is still forming, and your experience is more useful than any framework I could offer.
The next post in the Intelligent Quality Leadership series goes deeper on the third corner of the triangle: Leadership + QE = Quality Culture, the cultural transformation that has to accompany everything we’ve discussed here, because without it, the tools and the strategy don’t stick.
