Residential Property

How Stylo Digital lifted a residential client's AEO score in days, and held it for weeks

Stylo Digital was asked to lift a residential client's AI visibility without going broad on PR. They focused on two on-page levers: crawlability and freshness. AuraScope picked up the lift in five days, and twelve consecutive every-other-day re-audits later, it held. Agent readability is durable infrastructure, not a campaign.

Background

Stylo Digital is an Australasian agency executing AEO programmes for residential property, professional services and B2B clients. The featured client is a residential-property brand operating across multiple suburbs. Before this engagement, the client ranked acceptably on classic search but their LLM citations were probabilistic: sometimes returned, often not. Stylo's brief was explicit: lift AI visibility without a PR or earned-media push. The work isolated two on-page levers (crawlability and freshness) and held external trust at zero for the duration so each lever's contribution could be read cleanly.

The challenge

When an AI agent crawls a site, it does not read the page the way a human does. It parses headings, structured data, timestamps and internal links. If those signals are weak or inconsistent, the model cannot confidently extract facts from your page, and citations fall back to probability: sometimes you, often not.

This client looked fine to a human reader but was hard for an LLM agent to parse with confidence. Stylo and the AuraScope audit pointed at three specific fixes.

The solution

No off-site work. No earned-media push. No new content campaign. Three focused on-page changes, sequenced cleanly and deployed together.

  • Heading hierarchy rebuilt across the priority page set.
  • FAQPage schema rolled out site-wide, completing the four-schema set. Article, Organization and BreadcrumbList were already in place; FAQPage was the missing fourth.
  • Content refresh sweep brought multiple pages back under the 90-day freshness threshold.
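The FAQPage rollout described above can be sketched as JSON-LD generated server-side and embedded in the page head. A minimal sketch follows; the questions and answers are hypothetical placeholders, not the client's actual content.

```python
import json

# Hypothetical FAQ entries; the client's real questions would go here.
faqs = [
    ("What suburbs do you cover?",
     "We operate across multiple suburbs; see our coverage page."),
    ("How often are listings updated?",
     "Listings are refreshed as properties change."),
]

# Build the schema.org FAQPage structure: one Question per entry,
# each with an acceptedAnswer.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Emit as the payload of a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

The same pattern extends to the other three schema types in the set; the point is that the markup is machine-explicit, so an answer engine does not have to infer the Q&A structure from the rendered page.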

Within days: the audit picked up the lift

The structural changes deployed on April 17 2026. The first AuraScope re-audit to capture the new baseline ran on April 22 2026, five days later, against the same frozen prompt set used in the March 8 baseline. The pillar deltas:

  • Page Structure: +12 points. Valid heading hierarchy now present on 4 of 11 priority pages, up from 0. The remaining seven pages are in Stylo's phase-two queue and read as scope, not failure.
  • Schema Markup: +5 points. Complete four-schema set live and validating.
  • Content Freshness: +6 points. Mean content age dropped from 123 days to 57 days.
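As a rough illustration of how a mean-content-age figure like the one above is derived, the sketch below averages page ages against an audit date and flags pages past the 90-day freshness threshold. The dates are hypothetical, not the client's.

```python
from datetime import date

# Hypothetical last-modified dates for a small page set; a real audit
# would read these from sitemap or CMS timestamps.
last_modified = [date(2026, 3, 1), date(2026, 4, 10), date(2026, 1, 15)]
audit_date = date(2026, 4, 22)

# Age of each page in days at audit time.
ages = [(audit_date - modified).days for modified in last_modified]
mean_age = sum(ages) / len(ages)

# Pages past the 90-day freshness threshold need a refresh.
stale = [age for age in ages if age > 90]

print(f"mean age: {mean_age:.1f} days, stale pages: {len(stale)}")
```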

Overall structural-health score moved from 51 to 57. The audit detected the lift inside the same week the changes deployed.

And held: agent readability is durable infrastructure

This is the part the case study exists for. Five days to detect was the easy claim. The harder claim is what came after.

From April 22 through May 14, AuraScope ran every-other-day re-audits, twelve consecutive observations under the frozen prompt set. The pillar scores held. No regression on Page Structure. No regression on Schema. Content age continued to drift back upward as time passed (freshness needs ongoing maintenance), but the structural and schema lifts persisted with zero rework.

Crucially, AuraScope's control prompt set ran across the same twelve audits and stayed flat. That tells us the engines themselves did not drift during the durability window, so the held deltas reflect the brand's structural state, not an engine-side shift.

That is the thesis. Earned-media placements need ongoing pitching. Content cadence needs ongoing publishing. Crawlability and schema, once correct, stay correct. They are durable infrastructure, not a campaign. An agency that gets a client's structural signals right once does not have to rebuild that lift next quarter. The audit keeps watching every other day so any regression surfaces at the next scheduled run.

Mechanism

Page structure tells an answer engine what a page is about. Schema markup says it in machine-explicit form. Freshness signals tell the engine the content is current. When all three are clean, the engine's confidence in extracting facts from the page measurably rises, and citation consistency improves. The audit detects this within days because retrieval re-fetches the page on each query, not on a training cycle.
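AuraScope's internal checks are not public, but the heading-hierarchy signal above can be sketched with a simple rule: a page passes if it opens with exactly one h1 and never skips a level going deeper (h2 to h4 is a skip; h3 back up to h2 is fine).

```python
import re

def heading_levels(html: str) -> list[int]:
    """Extract heading levels (1-6) in document order from raw HTML."""
    return [int(m.group(1)) for m in re.finditer(r"<h([1-6])\b", html, re.IGNORECASE)]

def valid_hierarchy(levels: list[int]) -> bool:
    """Pass if the page opens with a single h1 and never skips a level downward."""
    if not levels or levels[0] != 1 or levels.count(1) != 1:
        return False
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # e.g. h2 followed by h4 is a skipped level
            return False
    return True

good = "<h1>Suburb guide</h1><h2>Prices</h2><h3>Trends</h3><h2>Schools</h2>"
bad = "<h1>Suburb guide</h1><h4>Prices</h4>"
print(valid_hierarchy(heading_levels(good)))  # True
print(valid_hierarchy(heading_levels(bad)))   # False
```

A production audit would parse the DOM properly rather than regex over raw HTML, but the pass/fail rule is the same shape.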

One caveat on the mechanism. LLM responses remain probabilistic by construction. "Citation consistency improves" is not "citations become deterministic." Stochastic-by-design is the floor; structural cleanliness raises the ceiling.

How we measured this

Read the canonical methodology page for the full protocol. Specifics for this engagement:

  • Baseline audit: March 8 2026.
  • Deploy date: April 17 2026.
  • First post-deploy re-audit: April 22 2026 (deploy + 5 days).
  • Durability observation window: April 22 to May 14 2026 (12 consecutive every-other-day audits).
  • Engines tracked: ChatGPT, Gemini and Perplexity.
  • Pillars compared: Page Structure, Schema Markup, Content Freshness, External Trust, Share of Answers.
  • Frozen prompt set held across the baseline and durability windows for the four crawl-derived pillars. Share-of-answers tracked but not reported as a delta because the prompt set evolved late in the window; flagged explicitly rather than implying comparability.
  • Control prompt set: ran in parallel across all 12 durability re-audits and held flat, so engine drift can be ruled out as an explanation for the sustained pillar deltas.

Caveats

  • Off-site signals stayed at zero throughout this window by design, so the on-page lever can be read cleanly, but that isolation also caps the ceiling. A similar on-page programme paired with an external-trust push would move the overall AI-visibility score further.
  • Page Structure ended at 4 of 11 priority pages compliant; the remaining seven are in Stylo's phase-two queue, so the +12 should be read as a phase-one delta, not site-wide completion.
  • Share of answers is not directly comparable across the two audits because the prompt set evolved between them; we kept it out of the delta table for that reason.
  • AI-search attribution is the open problem we are actively working on; the methodology page describes the current state of that model.

If you're evaluating AuraScope

Agent readability isn't one thing. It's the combination of crawlability, schema markup and freshness signals that lets an LLM agent extract facts from your page with confidence. What AuraScope contributed:

  • Pillar-level scoring across Page Structure, Schema Markup, Content Freshness, External Trust and Share of Answers, so the team could measure each lever in isolation.
  • Page-level audits: heading hierarchy, schema completeness, content age, internal linking, surfaced per URL.
  • Recommendation prioritisation: fixes ranked by audit impact so the team works the largest lever first.
  • Days-to-detection: the audit picks up structural changes within days of deploy, and continues observing every other day so any regression surfaces at the next scheduled run.
  • Control-group calibration: a frozen control prompt set runs alongside the engagement so engine drift can be separated from brand lift.

If you're an agency or in-house team running an AEO programme and want the fastest feedback loop on which signals are working, AuraScope is built for that workflow.

Features snapshot

  • Audit cadence: every other day across ChatGPT, Gemini and Perplexity.
  • Pillars scored: Page Structure, Schema Markup, Content Freshness, External Trust, Share of Answers.
  • Detection speed: structural changes picked up within days of deploy.
  • Durability check: every-other-day re-audits surface any regression at the next scheduled audit.
  • Engine-drift check: control prompt set runs in parallel to separate brand lift from engine shift.
  • Pricing: see /pricing.
  • Trial: book a demo on any page.
  • Methodology: how we measure AI visibility.

See how AuraScope rates your brand on AI engines

Every-other-day audits across ChatGPT, Gemini and Perplexity. A prioritised list of what to change, in what order.

View pricing