AI Voice Agents

How Waboom AI landed 4 ChatGPT-sourced leads in 48 hours

Waboom is an AI voice agent startup that also runs corporate AI training. After an 8-week optimisation cycle pulling all four AI-visibility levers, the 48-hour post-launch window delivered four inbound ChatGPT-sourced leads across three countries.

Background

Waboom AI is a startup building AI voice agents for businesses, with a complementary corporate AI training arm. Their ICP spans US, NZ and AU SMBs evaluating voice agents and enterprises commissioning hands-on AI training. Before this engagement, Waboom was paying almost triple AuraScope's price for traditional SEO tooling that wasn't surfacing actionable fixes, and was largely absent from ChatGPT's answer set for either offering.

"We had used Ahrefs in the past, but the cost was almost triple and it was not giving me the action points I needed. AuraScope gave us clearer visibility into what needed fixing and updating on our site."

Leonardo Garcia Curtis, Founder, Waboom AI

The challenge

Waboom's work was strong. The problem was visibility, not quality. When SMB buyers asked ChatGPT "who can run AI training for my team?" or "which voice-agent vendors should I look at?", the answer set was dominated by larger, closer US-based providers. Without LLM pickup, Waboom had to earn every deal cold.

Ahrefs returned keyword volume. It did not answer the only question that mattered: what, specifically, do we change on our site to get recommended by ChatGPT?

The solution: an 8-week optimisation cycle

Waboom rebuilt their site for the AI-search era and used AuraScope to point that work at the highest-leverage fixes. The work ran for two months. The full four-lever stack was pulled in sequence.

  • Crawlability. Every priority page was rewritten so an LLM agent could parse it cleanly: valid heading hierarchy, semantic markup, internal linking. Implementation was executed by AI tooling under Leonardo's strategic direction.
  • Freshness. Each page now displays a visible created and last-edited timestamp so agents can verify currency.
  • External trust. Real-world AI training events and guest-speaker appearances generated earned links back to waboom.ai. In parallel, Waboom started asking clients for reviews: 15+ new Google reviews landed in eight weeks.
  • Coverage. A three-month article cadence: one piece every two days, each answering one ICP question already being asked of the LLM. The publishing pipeline was AI-automated.
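The crawlability and freshness levers above can be sanity-checked mechanically. As an illustrative sketch (not Waboom's actual tooling), a short Python script using the standard library's `html.parser` can flag pages whose heading hierarchy skips levels or that lack a machine-readable timestamp:

```python
from html.parser import HTMLParser

class PageAudit(HTMLParser):
    """Collects heading levels and <time> elements from an HTML page."""
    def __init__(self):
        super().__init__()
        self.headings = []   # e.g. [1, 2, 2, 3]
        self.has_time = False

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.headings.append(int(tag[1]))
        elif tag == "time":
            self.has_time = True

def audit(html: str) -> list[str]:
    """Return a list of crawlability/freshness issues found in the page."""
    parser = PageAudit()
    parser.feed(html)
    issues = []
    # A clean hierarchy starts at <h1> and never skips a level downward.
    if not parser.headings or parser.headings[0] != 1:
        issues.append("page does not start with an <h1>")
    for prev, cur in zip(parser.headings, parser.headings[1:]):
        if cur > prev + 1:
            issues.append(f"heading jumps from h{prev} to h{cur}")
    # A visible, machine-readable timestamp lets an agent verify currency.
    if not parser.has_time:
        issues.append("no <time> element: agents cannot verify freshness")
    return issues
```

For example, `audit("<h1>AI voice agents</h1><h3>Pricing</h3>")` reports both a heading jump and a missing timestamp, while a page with a clean hierarchy and a `<time datetime="...">` element returns no issues.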

The results: a 48-hour lead window

Within 48 hours of the optimised site going live, Waboom received four inbound enquiries: two from New Zealand, one from Australia, one from the USA. Every one of the four named ChatGPT as the referral source in their first message.

  • 4 inbound leads in the 48-hour post-launch window.
  • 3 countries: NZ, AU and USA in the same window.
  • 15+ new Google reviews over the surrounding eight-week build.

One prospect pasted the recommendation verbatim:

"ChatGPT suggested Waboom AI as a great starting point for virtual classes for business teams."

Inbound prospect, sourced via ChatGPT

The inbound message naming ChatGPT as the source of the recommendation. Captured the day it landed.

A US-based prospect choosing an NZ vendor in the same window matters here. We do not have a side-by-side ChatGPT prompt log proving the engine ranked Waboom above named US alternatives, so we present this as observed buyer behaviour, not as a head-to-head ranking claim.

Mechanism

LLMs preferentially recommend brands that combine all four levers: external trust (links, reviews), crawlability (semantic, schema-marked pages), freshness (visible timestamps), and coverage (content that answers the buyer's actual question). AuraScope identified which pages and which levers needed work on waboom.ai. The team executed the list. AEO pillar scores rose during the same window the first ChatGPT-sourced leads landed. We treat that as directional confirmation, not as proof of a single causal chain.

How we measured this

Read the canonical methodology page for the full protocol. The relevant specifics for this engagement:

  • Engagement: 8-week optimisation cycle.
  • Engines tracked: ChatGPT, Gemini and Perplexity, every-other-day audits.
  • Lead window: 48 hours immediately post-launch.
  • Lead attribution: All four inbound enquiries explicitly named ChatGPT as the referral source. One verbatim prospect quote retained as third-party evidence.
  • Attribution mode: ChatGPT's answer-time retrieval, not training-data inclusion. See methodology.
  • Control prompt set: ran in parallel during this window to detect engine drift; no control-side movement was observed that would explain the brand-side deltas.
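The control-prompt logic in the last bullet reduces to a difference-in-differences check. As a minimal sketch with hypothetical numbers (not AuraScope's actual model), assume mention rates are tracked per audit for both the brand prompt set and a control set of unrelated prompts; drift-adjusted lift is the brand-side delta minus the control-side delta:

```python
def mention_rate(answers: list[str], brand: str) -> float:
    """Fraction of sampled engine answers that mention the brand."""
    hits = sum(brand.lower() in answer.lower() for answer in answers)
    return hits / len(answers)

def drift_adjusted_lift(brand_before: float, brand_after: float,
                        control_before: float, control_after: float) -> float:
    """Brand delta minus control delta (difference-in-differences).

    If the control prompts moved by a similar amount, the movement is
    engine drift, not brand lift, and the adjusted figure nets it out.
    """
    return (brand_after - brand_before) - (control_after - control_before)
```

With a brand-side mention rate moving from 0.05 to 0.40 while the control set moves only from 0.10 to 0.12, `drift_adjusted_lift(0.05, 0.40, 0.10, 0.12)` is roughly 0.33: most of the movement is attributable to the brand-side work rather than drift.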

Caveats

LLM recommendations are probabilistic. The same query can return different results across users and sessions. AuraScope can show that AEO scores rose alongside the implementation of these changes, that the 48-hour post-launch lead spike happened, and that all four inbound leads named ChatGPT as the source. We cannot prove ChatGPT recommended Waboom solely because of these specific changes, nor that a future query will return the same answer. AI-search attribution is the open problem we are actively working on; the methodology page describes the current state of that model.

If you're evaluating AuraScope

Waboom's story compresses the AuraScope value loop into one sentence: see where you are, fix the highest-leverage lever, ship, measure.

What AuraScope contributed:

  • Every-other-day AEO audits across ChatGPT, Gemini and Perplexity.
  • Page-level recommendations mapped to the four-lever model (external trust, crawlability, freshness, coverage), prioritised by audit impact.
  • Citation tracking on the brand and competitors, surfacing where Waboom was being mentioned and by whom.
  • Buyer-prompt monitoring: share-of-answers on the questions buyers actually ask, not vanity keyword volume.
  • Control-group calibration so engine drift can be separated from brand lift.

If your team is in Waboom's position (quality work, weak LLM pickup, budget pressure on the legacy SEO stack), the AuraScope playbook compresses months of guessing into a focused two-month sprint.

Features snapshot

  • Audit cadence: every other day across ChatGPT, Gemini and Perplexity.
  • Setup: typically 10 to 20 minutes for a single brand, up to an hour for multi-locale.
  • Pricing: see /pricing.
  • Trial: book a demo on any page.
  • Methodology: how we measure AI visibility.

See how AuraScope rates your brand on AI engines

Audits every other day across ChatGPT, Gemini and Perplexity. A prioritised list of what to change, in what order.

View pricing