Venture Capital

How a NZ venture firm earned a ChatGPT citation in 7 days

One of NZ's most established VCs was absent from ChatGPT's answer set when founders asked where to raise. A single citation on a high-authority NZ finance publication moved AuraScope's external-trust pillar, and the firm was returned consistently in ChatGPT's retrieval answer set within seven days.

Background

The client is one of New Zealand's most established venture capital firms, active across deeptech, consumer and B2B SaaS, backing NZ-based founders at seed and Series A, with a decade-plus public track record. Before this engagement, the firm was a recognised name in the NZ founder community but routinely absent from ChatGPT's answer set when international or first-time founders asked where to raise. Internal site quality was not the issue; external trust was.

This case is published with the firm and the destination publication anonymised at their request, which caps the third-party verification a sceptical reader can do. We disclose this up front. The mechanism, pillar evidence and methodology below stand on their own.

The challenge

When a founder asks ChatGPT "which venture firms back early-stage New Zealand founders?", the model is not generating an opinion. It is retrieving and ranking the current consensus across the public web. The firms it names are the firms the rest of the internet has already named.

One of NZ's most notable VCs wanted to be one of those names. Despite ongoing classic-SEO spend, ChatGPT did not consistently surface the firm in founder-stage queries. What was missing was not on-page quality. It was external trust.

The solution

AuraScope's citation landscape report identified a specific NZ personal-finance publication as the highest-weight earned-media surface for the firm's vertical. "Highest-weight" here means: AuraScope's citation scoring (described on the methodology page) ranked it first across the NZ-finance-vertical citations the three engines actually surfaced when answering founder-stage queries.
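
For intuition only, here is a minimal sketch of how a citation-landscape ranking could work. Every name, domain and weight below is invented for illustration; AuraScope's actual scoring is described on the methodology page and is not reproduced here.

    from collections import Counter

    # Each record: one engine answer to a founder-stage prompt, plus the
    # third-party domains the engine cited at answer time. Invented data.
    answers = [
        {"engine": "chatgpt",    "cited_domains": ["nzfinancepub.example", "news.example"]},
        {"engine": "gemini",     "cited_domains": ["nzfinancepub.example"]},
        {"engine": "perplexity", "cited_domains": ["blog.example", "nzfinancepub.example"]},
    ]

    # Illustrative per-domain authority weights (0..1), however derived.
    authority = {"nzfinancepub.example": 0.9, "news.example": 0.6, "blog.example": 0.2}

    freq = Counter(d for a in answers for d in a["cited_domains"])

    # Weight = how often the engines actually surface the domain, scaled by
    # the trust the domain itself carries.
    ranked = sorted(freq, key=lambda d: freq[d] * authority.get(d, 0.1), reverse=True)
    print(ranked[0])  # the highest-weight earned-media surface

The point of the ranking is the intersection: a surface only scores highly if the engines already cite it and it already carries authority.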

The firm pitched the publication a topic drawn from its portfolio data, the kind of material the destination's editor was willing to publish. The piece went live with a single named citation of the firm.

No new pages on the firm's own site. No schema changes. No content campaign. We verified the on-page freeze by diffing the firm's sitemap and crawl footprint at baseline and at the +14-day re-audit. The only variable that moved in that window was one well-placed external citation.
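
A freeze check of this shape is straightforward to reproduce. A minimal sketch, assuming two locally saved sitemap snapshots (the file names are hypothetical); the crawl-footprint diff works the same way over crawled URL lists:

    import xml.etree.ElementTree as ET

    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def sitemap_urls(path: str) -> set[str]:
        # Collect every <loc> entry from a saved sitemap snapshot.
        return {loc.text.strip()
                for loc in ET.parse(path).getroot().findall(".//sm:loc", NS)}

    baseline = sitemap_urls("sitemap_baseline.xml")  # hypothetical snapshot
    day14 = sitemap_urls("sitemap_plus14.xml")       # hypothetical snapshot

    added, removed = day14 - baseline, baseline - day14
    # An empty diff is the freeze evidence: nothing was added or removed.
    assert not added and not removed, f"freeze violated: +{added} -{removed}"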

The results

Once the citation went live, AuraScope re-ran the audit at the +3, +7 and +14-day marks against a frozen founder-stage prompt set. The external-trust pillar moved measurably. Cross-engine prompts started returning the firm's name with higher consistency, and ChatGPT specifically began including the firm in answer sets where it had not appeared before the citation.

Within seven days of publication, the firm was being returned consistently in answer-time retrieval for the founder-stage query set the engagement was scoped against. We treat that as correlation tied to a single named intervention, not as proof the citation was the sole cause. AuraScope's control prompt set held flat across the same window, which tells us the engines themselves did not drift during this engagement.
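
Share-of-answers, the headline metric here, is simple to make concrete. A minimal sketch, with invented prompt ids and counts standing in for real audit logs:

    PROMPT_SET_SIZE = 40  # illustrative size of the frozen prompt set

    # audit day -> ids of prompts whose answers named the firm (invented data)
    hits_by_audit = {
        "baseline": {1, 7},
        "+3d":      {1, 7, 12, 18, 25},
        "+7d":      {1, 5, 7, 12, 18, 25, 31, 36},
        "+14d":     {1, 5, 7, 12, 18, 25, 31, 36},
    }

    for day, hits in hits_by_audit.items():
        print(f"{day:>8}: share-of-answers = {len(hits) / PROMPT_SET_SIZE:.0%}")

    # The identical calculation runs on the control prompt set; a flat
    # control share across the window is what rules out engine-wide drift.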

Mechanism

LLMs split the trust signals they weight into internal (what your own pages say) and external (what other authoritative pages say about you). Internal signals prove you speak coherently. External signals prove other authorities believe you. Answer engines reward verification, and a citation on a high-authority third-party site is the fastest off-site lever for earning that verification, because the destination has already done the trust-building work. That authority compounds onto whatever citation it carries.
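
To make the split concrete, here is a toy blend. Every weight and signal score below is invented to illustrate the internal/external distinction; none of it is AuraScope's real trust model.

    # Toy numbers only: invented signals and weights, not AuraScope's model.
    internal = {"content_coherence": 0.8, "schema_coverage": 0.7}   # your own pages
    external = {"authority_citations": 0.2, "brand_mentions": 0.4}  # pages about you

    def pillar(signals: dict[str, float]) -> float:
        return sum(signals.values()) / len(signals)

    # Rewarding verification means weighting the external pillar heavily; one
    # new high-authority citation moves `external` directly.
    trust = 0.4 * pillar(internal) + 0.6 * pillar(external)
    print(f"blended trust = {trust:.2f}")  # 0.4*0.75 + 0.6*0.30 = 0.48

Under numbers like these, the internal pillar is already near its ceiling; the external pillar is where a single placement can move the blend.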

Crucially, this is a retrieval-mode effect, not a training-data effect. ChatGPT, Gemini and Perplexity all fetch and rank current web content at answer time when grounding is enabled. A newly indexed citation enters that retrieval surface immediately. The seven-day window is the time from publication to consistent inclusion in the engines' answer sets under every-other-day auditing. It is not the time for any model to retrain.
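
The retrieval-mode claim is checkable per answer, at least in principle. A hedged sketch: ask_engine below is a placeholder for whatever grounded-query client a given audit uses, since the engines do not share one official API for enumerating cited sources, and the citation URL is hypothetical.

    from typing import NamedTuple

    class GroundedAnswer(NamedTuple):
        text: str
        cited_urls: list[str]  # sources the engine fetched at answer time

    def ask_engine(engine: str, prompt: str) -> GroundedAnswer:
        # Placeholder for the audit's grounded-query client; no real API call.
        raise NotImplementedError

    CITATION_URL = "https://nzfinancepub.example/guest-post"  # hypothetical

    def citation_retrieved(engine: str, prompt: str) -> bool:
        # Retrieval-mode effect: a newly indexed page can appear here the day
        # it is indexed; no model retraining is involved.
        return CITATION_URL in ask_engine(engine, prompt).cited_urls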

How we measured this

Read the canonical methodology page for the full protocol. Specifics for this engagement:

  • Engagement: ongoing AEO programme.
  • Engines tracked: ChatGPT, Gemini and Perplexity, every-other-day audits.
  • Baseline audit: completed three days prior to citation publication.
  • Intervention: a single named citation published on the destination publication AuraScope's citation landscape had ranked first for the firm's vertical.
  • Re-audit cadence: +3, +7 and +14 days post-publish, mapped to the nearest scheduled audit day (see the sketch after this list).
  • On-page freeze: verified by sitemap and crawl-footprint diff across the 14-day window.
  • Metrics tracked: external-trust pillar, share-of-answers, citation-source enumeration.
  • Attribution mode: answer-time retrieval, not training-data inclusion.
  • Control prompt set: ran in parallel; held flat across the window, so engine drift can be ruled out as an explanation for the external-trust delta.
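
The cadence mapping in the re-audit bullet is mechanical. A short sketch, assuming for illustration that audits run every other day from the publication date (the date itself is made up):

    from datetime import date, timedelta

    def nearest_audit_day(publish: date, offset_days: int,
                          cadence_days: int = 2) -> date:
        # Audits run every `cadence_days` days from publication; snap the
        # nominal mark to the closest audit, breaking ties toward the later one.
        k = (offset_days + cadence_days // 2) // cadence_days
        return publish + timedelta(days=k * cadence_days)

    publish = date(2024, 5, 1)  # hypothetical publication date
    for mark in (3, 7, 14):
        print(f"+{mark}d mark -> audit on {nearest_audit_day(publish, mark)}")
    # Every other day: the +3d mark lands on the +4d audit, +7d on the +8d
    # audit, and +14d falls on a scheduled audit exactly.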

Caveats

The case is anonymised at the firm's request. Without a named firm and publication, a sceptical reader cannot independently reproduce this result; that is the ceiling we accept on this case. What we can show: pillar movement tied to a specific dated citation; on-page freeze verified by diff; share-of-answers rising in the post-publish window; a control prompt set flat in the same window. What we cannot prove: that the citation was the sole driver of every subsequent answer-set mention, or that a guest post on a less authoritative destination would replicate this result. The mechanism depends on the destination already carrying authority; a content-farm placement will not reproduce it. AI-search attribution is the open problem we are actively working on; the methodology page describes the current state of that model.

If you're evaluating AuraScope

The earned-media playbook is not a one-off tactic. It is a repeatable lever surfaced by the AuraScope audit.

What AuraScope contributed:

  • Citation landscape: which high-authority third-party surfaces the engines actually weight when answering the firm's buyer prompts.
  • External-trust pillar scoring that moves measurably with each placement, so the team can see whether the citation worked.
  • Cross-engine prompt tracking on a frozen founder-stage prompt set, with share-of-answers logged on every audit (every other day).
  • Re-audit cadence at the standard +3, +7 and +14-day marks.
  • Control-group calibration so engine drift can be separated from brand lift.

If your business is well-known to humans in your niche but invisible to the LLMs your buyers ask, an earned-media placement on the right third-party site is usually the fastest off-site lever. AuraScope tells you which site, on what topic, and whether it worked.

Features snapshot

  • Audit cadence: every other day across ChatGPT, Gemini and Perplexity.
  • Citation surfacing: tracks where the brand and competitors appear across the third-party web.
  • External-trust pillar score: moves measurably with each high-authority citation.
  • Pricing: see /pricing.
  • Trial: book a demo on any page.
  • Methodology: how we measure AI visibility.

See how AuraScope rates your brand on AI engines

Audits every other day across ChatGPT, Gemini and Perplexity. A prioritised list of what to change, in what order.

View pricing