
A couple of weeks ago, I found myself staring at a top-funded competitor's announcement, feeling that familiar wave of impostor syndrome wash over me.
They had just launched a feature showing "exact prompt search volume" for AI chatbots. I was second-guessing our strategy at AuraScope, convinced we weren't doing enough. If they had this data, why didn't we? Were we missing a massive piece of the AEO puzzle?
So, our team did what we always do. We leaned into first-principles thinking and ran an experiment. We spent the time and money to reverse-engineer how these platforms are getting their numbers. We ran a custom Python script testing DataForSEO's AI Keyword Volume API against queries for a global wellness brand called WellnessGlowClub.
The result? Absolute zero across the board.
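For transparency, here is a stripped-down sketch of the kind of check our script performed. The payload shape below is an illustrative assumption, not DataForSEO's documented response contract; the point is the "is every reported volume zero?" logic, not the API details.

```python
# Hypothetical payload shape: the real DataForSEO response differs in detail,
# so treat this as an illustration of the zero-volume check, not the API contract.
def all_volumes_zero(api_response: dict) -> bool:
    """True when every keyword in the (assumed) payload reports zero volume."""
    items = api_response.get("tasks", [{}])[0].get("result") or []
    return all((item.get("search_volume") or 0) == 0 for item in items)

# Mock payload mirroring what we saw for WellnessGlowClub-style queries.
sample = {"tasks": [{"result": [
    {"keyword": "best wellness subscription for busy mums", "search_volume": 0},
    {"keyword": "wellnessglowclub vs other wellness boxes", "search_volume": 0},
]}]}
print(all_volumes_zero(sample))  # True: zero across the board
```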
It turns out, the fundamental problem is that AI prompt volume has no established measurement infrastructure. Unlike Google search volume, which has 20+ years of clickstream data, OpenAI, Anthropic, and Google keep their prompt logs strictly private.
The Core Challenge: LLMs Are Black Boxes
The foundational problem in Answer Engine Optimisation is that major AI providers do not publicly share their prompt logs. Unlike traditional SEO, where Google Search Volume provided a reliable baseline, AI interactions are highly personalised and conversational.
Consider this: the average ChatGPT prompt is six times longer than a traditional Google search. At that length, the volume of any exact prompt essentially approaches one. The entire concept of "search volume" breaks down when every query is unique.
How the Industry Actually Estimates "Prompt Volume"
Since direct data isn't available, AEO platforms combine five distinct data sources to estimate demand:
1. Clickstream Panel Data
This is the backbone for the biggest players. They use data from millions of opt-in consumer panels (often via browser extensions) to monitor what users type into AI platforms. The catch? These panels represent less than 1% of actual daily AI prompts, skew heavily toward tech-savvy desktop users, and miss B2B enterprise users entirely.
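To make the sampling problem concrete, here is some back-of-the-envelope arithmetic. Every number below is hypothetical: a panel covering roughly 0.5% of prompts that observes a handful of matches produces a point estimate with an enormous uncertainty band, before any bias correction.

```python
import math

def extrapolate_with_ci(observed: int, panel_share: float, z: float = 1.96):
    """Scale a panel count up to the full population and attach a rough
    Poisson-based 95% interval. Purely illustrative arithmetic."""
    scale = 1.0 / panel_share
    point = observed * scale
    half_width = z * math.sqrt(observed) * scale  # Poisson SE on the count, scaled up
    return point, max(0.0, point - half_width), point + half_width

# 4 panel hits at an assumed 0.5% coverage: a point estimate of 800 prompts,
# but an interval spanning roughly 16 to 1584.
point, low, high = extrapolate_with_ci(observed=4, panel_share=0.005)
```

And that interval only captures sampling noise; the demographic skew the panels carry is not corrected at all.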
2. Google Keyword Data as a Proxy
Some tools use traditional search volume and features like "People Also Ask" as a proxy, operating on the logic that core user interests remain the same even if phrasing changes when talking to an AI. It's a reasonable assumption, but it's still a guess.
3. Synthetic Generation via LLMs
Some use AI to simulate buyer personas and generate thousands of prompt phrasing variations based on website content. Essentially, they're using AI to predict what people might ask AI. It's turtles all the way down.
4. Community and Forum Mining
Unfiltered questions from Reddit, Quora, and Stack Overflow serve as high-fidelity proxies because users write on these platforms using the same natural language they use with AI chatbots. This is actually one of the more honest approaches.
5. Google Search Console Data
This is an underutilised goldmine. By applying a specific regex filter to GSC to isolate queries of 10 or more words, you can reveal the actual, conversational prompts users submit to Google's AI Mode. This currently stands as the only true source of first-party AI prompt data.
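The exact pattern varies between guides, but one common variant of the filter looks like this. GSC's custom regex filter uses RE2 syntax, which accepts the same pattern Python's `re` module does here:

```python
import re

# Ten-plus-word filter: nine "word + whitespace" groups followed by a final word.
LONG_QUERY = re.compile(r"^(?:\S+\s+){9,}\S+$")

queries = [
    "best digital planner",  # 3 words: a classic short search
    "what is the best digital planner for women who juggle work and study",
]
conversational = [q for q in queries if LONG_QUERY.match(q)]
# Only the 13-word conversational query survives the filter.
```

Paste the same pattern into GSC's "Custom (regex)" query filter and export the survivors.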
The Experiment Results: WellnessGlowClub
To put this in perspective, here is what the data actually showed when we ran our test:
- Reported "Prompt Volume" (Competitor APIs): 0
- Google Keyword Search Volume: 0-10
- Actual AI Overview Presence (AuraScope): 100%
Takeaway: Volume metrics show zero demand, but AI platforms are actively generating answers for these exact queries.
The Industry Debate: Is Prompt Volume Even Valid?
Which leads me to a slightly uncomfortable truth about our industry right now. When an AI visibility tool claims it can tell you exactly how many times people asked ChatGPT for the "best digital planner for women," the industry itself is deeply divided on whether you should trust those numbers.
The Skeptics
Some argue that volume metrics are fundamentally flawed. Clickstream panels represent less than 1% of actual daily AI prompts, skew heavily toward tech-savvy desktop users, miss B2B enterprise users entirely, and can be ethically questionable. When you extrapolate from a sample that small and that biased, you're not measuring. You're guessing.
The Pragmatists
Others believe that while the numbers aren't perfectly exact, they are highly valuable as a directional metric. It allows marketers to compare the relative popularity of different topics to prioritise their efforts. Fair enough, but let's call it what it is: a directional signal, not a volume metric.
The Uncomfortable Truth
So when a competitor puts an exact number next to "AI prompt volume," they are doing one of three things:
- Lying & Guessing: Relying on tiny opt-in browser-extension panels and extrapolating using Google search volume, a completely different user behaviour.
- Corporate Espionage: Unless they have a mole inside OpenAI scraping server logs, getting actual, unfiltered, global prompt volume simply isn't technically possible.
- Secret Backroom Deals: Knowing how tightly AI giants guard their data as their primary moat, a secret deal with a random marketing SaaS tool seems highly unlikely.
Sometimes it's so easy to get caught up in chasing vanity metrics that we forget what we're actually trying to solve. The AEO industry is right where SEO was in the early 2000s: we know the demand exists, we just can't measure it with a neat little volume metric yet.
The Strategic Shift: Discovery Over Volume
I'm convinced that chasing fake prompt volume is a distraction. The most advanced AEO strategies are pivoting away from chasing exact volume numbers and focusing instead on prompt discovery and intent mapping.
Frameworks like Zenith's 7-step method emphasise sourcing "blocker" questions directly from Sales and Customer Success teams, reverse-engineering competitor content, and focusing on high business value rather than raw search volume.
At AuraScope, we advocate for building prompt sets based around buyer personas and categorising them into AI-native intent families, like "generative intent" (creation and planning) versus open-ended exploration.
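As a toy illustration of what intent-family tagging can look like, here is a first-match keyword heuristic. The families and trigger phrases are simplified stand-ins, not our production taxonomy:

```python
# Simplified stand-in taxonomy: a handful of trigger phrases per family.
INTENT_FAMILIES = {
    "generative": ("write", "create", "plan", "draft", "build me"),
    "comparative": ("vs", "versus", "compare", "better than"),
    "exploratory": ("what is", "how does", "why do", "explain"),
}

def tag_intent(prompt: str) -> str:
    """Assign a prompt to the first matching intent family, else 'other'."""
    lowered = prompt.lower()
    for family, triggers in INTENT_FAMILIES.items():
        if any(trigger in lowered for trigger in triggers):
            return family
    return "other"

print(tag_intent("Plan a 7-day wellness routine for beginners"))  # generative
print(tag_intent("WellnessGlowClub vs cheaper alternatives"))     # comparative
```

In practice this is where an LLM classifier earns its keep, but even a crude heuristic like this forces you to define the families explicitly.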
The real opportunity is in the unmeasured space: those long-tail, conversational queries with zero Google search volume but active AI Overviews.
From Black Box to Glass House
This is why, here in our lab in Aotearoa, we aren't writing off the concept of prompt volume forever. We're keeping a close eye on the horizon for actual, research-backed measurement standards to emerge. But right now, chasing extrapolated vanity metrics is a distraction.
Instead, our research is entirely focused on ground truth: when a user does ask a relevant question, does the AI recommend you?
We're shifting the measurement framework from estimated volume to actual visibility. By running multi-model diagnostics across ChatGPT, Gemini, and Perplexity, we can measure true Share of Answers and extract the exact URLs the models cite (what we call Citation Forensics).
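A minimal sketch of that output-side measurement, with mock answer texts standing in for real ChatGPT, Gemini, and Perplexity responses. The URL regex and the share calculation are deliberately simplified:

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s)\]]+")

def share_of_answers(answers, brand_domain):
    """Fraction of model answers citing brand_domain, plus every cited URL."""
    cited_urls = [u for text in answers.values() for u in URL_RE.findall(text)]
    hits = sum(
        any(brand_domain in urlparse(u).netloc for u in URL_RE.findall(text))
        for text in answers.values()
    )
    return hits / len(answers), cited_urls

# Mock answers standing in for real multi-model diagnostic runs.
answers = {
    "chatgpt": "Try WellnessGlowClub (https://wellnessglowclub.com/plans).",
    "gemini": "Several options exist, e.g. https://example.com/wellness.",
    "perplexity": "See https://wellnessglowclub.com/reviews for details.",
}
share, urls = share_of_answers(answers, "wellnessglowclub.com")
# share == 2/3: two of the three models cited the brand's domain
```

The extracted URL list is the raw material for the citation analysis: dedupe it, group by domain, and you can see exactly which pages the models lean on.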
We aren't trying to guess the inputs of the LLM black box anymore. We're measuring the outputs to build a glass house.
Building the AEO Standard in Public
The primary reason we're publishing this research isn't to dunk on competitors. It's because we need robust, intellectually honest frameworks to move this industry forward, not snake oil.
The winning strategy in AEO isn't claiming to have the most accurate prompt volume metrics. It's focusing on actionable, topic-level demand signals, prioritising high-value prompt discovery, and leveraging under-utilised tools like GSC's long-tail query data to understand real user intent.
Focusing on intelligence optimisation over crawler optimisation requires a massive paradigm shift. It means creating true human content, not AI slop, and measuring success by citation authority rather than search volume.
We're committed to building this measurement infrastructure in public. If you're a marketing leader, researcher, or just an AEO nerd trying to solve these same attribution puzzles, I'd love to compare notes. My DMs are always open for legends who want to help build a durable, honest strategy for the AI era.








