What is an AI visibility audit — and does your brand need one?

Your client’s SEO is solid, they rank on page one, and the monthly reports you send are full of green arrows pointing up. The work is delivering what traditional metrics say it should, but somewhere right now, one of the client’s best prospects is typing a question into ChatGPT or Perplexity and getting an answer that recommends three brands in the category — and your client isn’t one of them.

That’s not a hypothetical — I am seeing it across industries, including a brand with genuinely strong SEO whose content was being used by AI to explain why a competitor was the better choice. The traditional rankings are good and the content marketing strategy is there, but AI visibility is actively working against them — and the agency managing their SEO had no visibility into it.

An AI visibility audit is how you find out which situation your clients are in, before they find out themselves.

What is an AI visibility audit?

An AI visibility audit is a systematic process of testing how AI search platforms respond to queries about a brand, its category and its competitors. That means ChatGPT, Perplexity, Google AI Overviews, Gemini, Claude and others — each of which has different citation behavior, different training data and different trust signals.

Unlike a traditional SEO audit, which looks at rankings, backlinks, technical errors and keyword coverage, an AI visibility audit asks a different set of questions: what does AI actually say about your client? Are they mentioned? Are they recommended? Are they described accurately? And when someone asks which brand to choose, do they make the list?

The results tend to surprise agency owners more than anyone. Clients with strong SEO and years of content investment turn out to be invisible in AI-generated answers, while newer competitors with less domain authority are cited regularly because their content is structured in a way AI can extract and trust. Strong SEO and strong AI visibility are not the same thing, and they don’t always move together — which means your current reporting isn’t showing the full picture. The question isn’t whether AI is describing brands in your clients’ categories. It is. The question is whether it’s describing theirs.

Why this is now an agency problem

AI search isn’t coming. It’s here. The average ChatGPT prompt is 23 words — not the 4-word search query Google trained everyone to optimize for. Buyers are asking AI full questions, describing problems and asking for comparisons and recommendations, then getting answers rather than a list of links to sort through themselves.

Server logs are already showing AI bots crawling client sites across industries. Those bots aren’t browsing — they’re researching, indexing and deciding what to cite. Most brands have no idea this is happening, and most of their agencies don’t either.

The agencies that get ahead of this are the ones that bring AI visibility into the conversation before a client notices the gap. The ones that wait are going to be in a reactive conversation explaining why a competitor is showing up in ChatGPT answers and their client isn’t — and that’s a much harder conversation to have from behind.

What the audit actually covers

A thorough AI visibility audit has several layers, and each one requires different work:

1. Prompt testing across platforms

This is the core of the audit. You build a set of 20 to 50 high-intent queries — the questions the target audience would actually ask AI — and run them across every platform, not just the obvious ones. Because each platform has different citation behavior, a brand that appears in Perplexity answers may be invisible in Google AI Overviews. For each response you document whether the client is mentioned, where in the response, whether the description is accurate, what sources are being cited and how competitors are positioned relative to them.

2. Answer share calculation

Answer share — sometimes called AI share of voice — is the percentage of relevant AI-generated answers that mention or recommend the brand, calculated per platform and in aggregate. This becomes the baseline, and it’s the new metric clients need to see alongside traditional rankings because it measures something their current reporting completely misses.
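The arithmetic behind the metric is straightforward once responses are documented. A minimal sketch, assuming each documented response is reduced to a (platform, mentioned) pair — the data layout is an assumption, not a standard:

```python
from collections import defaultdict

def answer_share(responses):
    """responses: list of (platform, brand_mentioned) tuples.
    Returns (per-platform share, aggregate share) of answers mentioning the brand."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for platform, mentioned in responses:
        totals[platform] += 1
        if mentioned:
            hits[platform] += 1
    per_platform = {p: hits[p] / totals[p] for p in totals}
    aggregate = sum(hits.values()) / sum(totals.values())
    return per_platform, aggregate

# Six documented answers across two platforms
runs = [
    ("chatgpt", True), ("chatgpt", False), ("chatgpt", False),
    ("perplexity", True), ("perplexity", True), ("perplexity", False),
]
per, agg = answer_share(runs)
# per-platform: chatgpt 1/3, perplexity 2/3; aggregate: 3/6 = 0.5
```

Tracking the per-platform numbers separately matters because, as noted above, a brand can have strong answer share on one platform and none on another.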

3. Citation source analysis

When AI does mention the brand, what is it pulling from? The client’s own website, a third-party directory, an outdated press release, a Reddit thread with inaccurate information? The source matters because it tells you whether owned content is doing the work or whether AI is building its understanding of the brand from sources nobody controls.

4. Knowledge graph and entity audit

AI systems cross-reference a brand across multiple sources to verify what it is, what it does and whether it’s trustworthy — including the website, LinkedIn, Google Business Profile, Wikidata, industry directories and anywhere else the brand is described. If those descriptions are inconsistent across platforms, AI treats that as a trust signal problem. Entity coherence is foundational to AI visibility and most brands have never audited it. In one audit, a medical device brand was appearing in AI-generated answers regularly — but the sources being cited were third-party directories and outdated media coverage, not their own content. They had no control over what AI was saying about them or where it was pulling from, and they had no idea until the audit surfaced it.

5. Content structure assessment

AI doesn’t read a page the way a human does. It extracts specific passages that directly answer questions, which means content built around keywords and optimized for human readability often fails the machine readability test entirely. The audit evaluates whether key pages have self-contained answer blocks, clear headings, FAQ sections and structured information AI can pull as a standalone response. This assessment also extends beyond the website. LinkedIn articles published from a brand page are cited by AI systems — particularly for B2B queries — and most brands are ignoring this entirely. Splitting content creation across owned channels rather than treating the website as the only source worth optimizing is one of the higher-leverage moves an agency can make for a client, and it’s often the lowest-hanging fruit the audit identifies.

6. Technical and schema review

Schema markup is the machine-readable layer that tells AI systems exactly what content is about — Organization schema, Person schema, FAQPage schema, Article schema. Most sites have none of it or have it only partially implemented, and the audit identifies what’s present, what’s missing and what would move the needle most.
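For context, the markup the review looks for is typically JSON-LD embedded in the page. Here is a minimal sketch of Organization and FAQPage blocks, generated with Python; every brand detail below is a placeholder:

```python
import json

# Illustrative Organization schema — all values are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://www.wikidata.org/wiki/Q0000000",  # placeholder entity ID
    ],
}

# Illustrative FAQPage schema — one self-contained question/answer pair.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does Example Brand do?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Example Brand makes widgets for the construction industry.",
        },
    }],
}

# Each block would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```

Note how the `sameAs` links tie back to the entity audit described earlier: consistent profiles across those URLs are part of what AI systems cross-reference.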

7. Competitive benchmarking

You run the same audit on the top competitors: where are they showing up that the client isn’t, what sources are AI citing when it recommends them, and what are they doing structurally that the client isn’t? This is where the gaps become actionable and where the client conversation gets real.

Can your team do this?

Yes, and understanding the process is valuable regardless of who runs it. But agencies considering adding this to their service offering should go in with clear eyes about what it actually requires per client, on an ongoing basis.

The learning investment is real

Before running a single prompt, your team needs to understand how each platform works, what it trusts and why. These platforms have meaningfully different citation behaviors, and what works for one doesn’t automatically work for another. Research from citation analysis studies shows only 11% of domains receive citations from both ChatGPT and Perplexity, so optimizing for one without understanding the others leaves meaningful coverage on the table.

Learning this well — not surface-level well, but well enough to build client strategy around it — takes weeks of focused study. There are frameworks, research papers, industry analyses and emerging best practices being documented in real time, and someone on your team needs to read them, synthesize them and understand how they apply across your specific client base. That’s not a one-afternoon project, and it’s not a one-time investment either because the landscape keeps moving.

The audit itself is time-intensive, per client

Building a prompt set of 20 to 50 queries, running them across five or six platforms, documenting every response, calculating answer share, analyzing citation sources, auditing knowledge graph presence, assessing content structure, reviewing schema implementation and benchmarking three to five competitors is a multi-day project for a client of any real size — not a few hours. Multiply that across your client portfolio and it becomes a significant capacity question that’s worth thinking through before you start scoping it into retainers.

The refresh problem scales with your roster

AI platforms update constantly, citation patterns shift, new platforms emerge and Google regularly rolls out changes to AI Overviews and AI Mode. What’s true about a client’s AI visibility in March looks different in June, which means an AI visibility audit isn’t a one-time deliverable you scope once. Clients building real AI visibility need auditing quarterly at minimum, continuous monitoring and content refreshed on a cadence that keeps pace with how AI systems are evolving. For an agency managing ten or fifteen clients, the operational question of who owns this work and how it gets done is not a small one, and the agencies figuring that out now are the ones who will be able to offer it as a real service rather than a rushed add-on.

What a specialist embedded in your agency gets you

The case for bringing in someone who does this across multiple agencies and industries isn’t just about capacity, though that matters. It’s about what cross-industry pattern recognition does for your clients’ results in a space that’s moving faster than any published framework can keep up with.

When you’re running AI visibility work across multiple agencies and multiple industries simultaneously, you see things that haven’t been written about yet. A citation pattern shift that shows up in a manufacturing client’s data one week shows up in a healthcare client’s data two weeks later, and a content structure change that improves answer share in one category starts working in another before anyone publishes a case study about it. The learning compounds across clients in a way it simply cannot when you’re working inside a single agency with a defined client set.

By the time most of what I’m observing gets documented publicly, I’ve already tested it, adjusted for it and moved on to the next thing — and that’s not something that comes from reading the same articles everyone else is reading.

What this model also provides is the prioritization your clients actually need. Not everything flagged in an audit needs to be fixed immediately, and not everything that sounds impressive is worth the effort. Knowing which gaps are costing citations and which are noise is a skill that comes from doing this across enough industries that the patterns become obvious, and your clients get that judgment applied to their specific situation rather than a generic checklist.

Where to start

If AI visibility isn’t part of your current client reporting, a good first step is running a basic prompt test on one of your clients’ categories. Open ChatGPT, Perplexity and Google (for AI Overviews) and ask the question their best prospect would ask when looking for what they offer, then look at what comes back, who’s mentioned and whether your client is there — and if they are, what AI is actually saying about them.

That’s not a full audit, but it’s a fast way to understand whether this is urgent for your clients or whether they have more runway than you thought. Most agencies are surprised by what they find.

If what you find warrants a deeper look, or if you want to add AI visibility as a service without building the internal infrastructure from scratch, that’s exactly how LSX Partners works with agencies — embedded alongside your team, running the audit and strategy so you can deliver it to clients without adding headcount.

Ready to find out where your clients stand? Get in touch at lsxpartners.com/contact.

About the author

Laura Seelinger is the founder of LSX Partners, a marketing strategy firm based in Columbus, Ohio. She works as an embedded AI visibility strategist for agencies, bringing 15 years of experience across agency and corporate marketing to client engagements in healthcare, manufacturing, building products, hospitality and CPG. Her cross-industry work gives her a view of how AI citation patterns are evolving before they’re widely documented.
