AI hasn’t solved the quality culture problem. It’s made it more complicated. When everyone can generate tests with a prompt, the question of who actually owns quality becomes harder to answer, not easier.


This is the fourth post in the Intelligent Quality Leadership series. It explores the Leadership + QE corner of the triangle, where quality culture lives. If you’re new to the series, it’s worth starting at the beginning.


I’ve been talking about quality culture for a long time. Long enough to have written a model for it back in 2020, presented it at conferences, and watched it get tested against reality in organisations of very different shapes and sizes. The core of it still holds: quality culture is built through shared ownership, deliberate advocacy, and the kind of patient relationship-building that doesn’t show up in a sprint velocity metric.

What I didn’t account for in 2020 was AI. Not because it wasn’t on the radar, but because what we have now is a fundamentally different proposition from anything that existed then. And I think it changes the culture problem in ways most quality leaders haven’t fully reckoned with yet.

Here’s the uncomfortable version: AI has made shared ownership of quality harder, not easier. And if you think the opposite, I’d ask you to look more carefully at what’s actually happening in your teams.


The Democratisation Illusion

The optimistic narrative goes something like this. AI puts testing capability in the hands of everyone. Developers can generate test cases. Product managers can query quality metrics in plain language. The barriers to quality participation are lower than they’ve ever been. Therefore, shared ownership should be easier to achieve.

I understand why that argument is appealing. Parts of it are even true. But it mistakes access for accountability, and that’s a meaningful difference.

When a developer generates a test suite using an AI tool, they have accessed a quality capability. They have not necessarily taken ownership of quality. Ownership means understanding what you’re testing and why, caring about what the output tells you, and being willing to have the difficult conversation when the signal is ambiguous or the risk isn’t well understood. Generating a test file and merging it doesn’t require any of that.

The risk isn’t that AI keeps people away from quality. It’s that AI gives people the feeling of participating in quality without the responsibility that real participation requires.

This is the illusion. And it’s more dangerous than the old problem, where quality was visibly siloed in a test team, because at least then everyone knew where the accountability sat. Now it’s diffuse, assumed, and largely unexamined.


What the Culture Problem Actually Looks Like Now

In the pre-AI version of this problem, the cultural challenge was getting quality onto the agenda of people who didn’t think it was their concern. Engineers who threw work over the wall. Product managers who equated shipping speed with success. Leaders who saw testing as a cost centre and quality as someone else’s job.

That work isn’t done. But there’s a new layer on top of it.

The new challenge is distinguishing between teams that have genuine shared quality ownership and teams that have the appearance of it. The difference is harder to spot than it used to be because the surface signals look similar. Tests exist. Pipelines run. Metrics are reported. AI-generated coverage looks impressive on a dashboard.

What you don’t see on the dashboard is whether anyone in the team could tell you what risk those tests are actually managing. Whether there’s a real conversation happening when something unexpected surfaces. Whether the quality metrics are informing decisions or just decorating sprint reviews.

I wrote in 2020 about a moment when I walked into a canteen and found people from outside the test team debating quality metrics with genuine investment, without anyone from QE even being present. That moment mattered because it was evidence of something real. People had internalised quality thinking, not just quality tooling. They were applying judgment, not just running reports.

That’s what genuine shared ownership looks like. And I’d ask honestly: how many teams using AI-assisted quality tooling could produce a moment like that today?


Why Human Advocacy Matters More Than Ever

The answer to this challenge isn’t to pull back from AI. It’s to invest more heavily in the human work that AI can’t do: building the genuine understanding, care, and accountability that make quality culture real rather than performed.

I’ve always believed that quality culture is built through advocates: people outside the QE team who get it, who feel it, who carry the message into conversations you’re not in. In 2020 I was writing about how to find those people and bring them along. That work is still essential. If anything, it’s more essential now.

Because the advocates you need in an AI-augmented environment aren’t just people who understand that quality matters. They’re people who understand the difference between AI-generated quality signals and genuine quality confidence. Who know when to trust the dashboard and when to ask harder questions. Who can recognise the confidence theatre that the previous post described, and refuse to accept it as good enough.

Those people don’t emerge from tooling adoption. They emerge from relationships, conversations, and the kind of leadership that takes quality seriously enough to explain it rather than just deploy it.


Three Things That Don’t Change

For all the ways AI shifts the culture challenge, some things remain stubbornly true.

Quality culture is still built in the small moments, not the big ones. The sprint review where someone asks “but what are we actually confident about?” The retrospective where a near-miss gets treated as a learning rather than an embarrassment. The conversation where a product manager asks to understand the risk before a release rather than after. AI doesn’t create those moments. Leaders do.

Celebrating the right things still matters, and it’s harder now. In a pre-AI world, celebrating a team that introduced meaningful automation was fairly straightforward. The effort was visible and the contribution was clear. Now you need to be more deliberate about what you celebrate. Celebrating test volume rewards the wrong behaviour. Celebrating the team that questioned what their AI-generated suite was actually covering, improved it, and documented their reasoning: that’s the kind of success worth amplifying.

The quality narrative still has to be told by humans. The concept of the Quality Narrative, how quality is perceived and talked about across your organisation, is something I’ve come back to repeatedly over the years. In an AI world, that narrative is at risk of being written by the tooling rather than by people. Metrics produced by AI systems, dashboards automatically populated, coverage reports generated without human interpretation. The narrative becomes: “our AI handles quality.” And that narrative, left unchallenged, will quietly erode everything you’ve built.


What Intelligent Quality Leadership Looks Like Here

The Leadership + QE corner of the triangle is, in some ways, the hardest one. Cognitive Automation gives you something to implement. AI Governance gives you something to advocate for. Quality Culture asks you to change how people think and feel about something they mostly don’t notice when it’s working.

In an AI-augmented environment, that work requires a specific kind of leadership posture. You have to be the person who asks the uncomfortable question when AI-generated confidence is being mistaken for real confidence. Who insists that shared ownership means shared understanding, not just shared access to tooling. Who builds the advocates that carry quality thinking into the rooms you’re not in, and equips them to distinguish between the appearance of quality and the real thing.

None of that is new. But the urgency is.

The teams that will get this right aren’t the ones that adopt AI fastest. They’re the ones that invest in the human infrastructure alongside it. The relationships, the conversations, the advocates, the honest moments in sprint reviews and retrospectives where someone says “I don’t think we’re actually confident here, and I think we need to talk about why.”

That’s quality culture. It was hard to build before. It’s harder now. And it matters more than it ever has.


If this resonates, I’d genuinely like to hear how you’re navigating it. Is shared ownership getting harder in your organisation as AI tooling proliferates? Have you found ways to build the human advocacy work alongside the technology adoption? I’ve been developing this thinking over a few years now, and the most useful input has always come from people doing the work in real teams.


This is the fourth post in the Intelligent Quality Leadership series. The next and final post will bring the triangle together as a practical operating model, drawing on the community feedback that’s accumulated across the series.
