How PostClickSignal grades ad-to-page message match.
Every report on PostClickSignal scores a paid ad against its landing page on a 0–10 scale across four dimensions: headline match, offer continuity, visual + tone, and scent + intent. The weights vary by platform. This page documents the rubric, the pipeline that produces each score, and what we explicitly do not measure. The goal: every score should be defensible and reproducible.
Overview.
Message match is the alignment between what a paid ad promises and what the visitor finds when they click. Strong message match makes the visit feel inevitable. Weak message match makes it feel like a bait-and-switch. Most paid-acquisition waste sits in the gap between the two.
Our rubric breaks message match into four orthogonal dimensions. Each is scored 0–10 against the page's above-the-fold content, which is the only viewport most visitors see before deciding whether to scroll or bounce. The overall score is a weighted average. Weights vary by platform because the dominant signal differs: Google paid search rewards keyword echo, Meta rewards tonal continuity, and LinkedIn sits between with a structural premium on offer continuity.
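In code, the overall score is a plain weighted average. A minimal sketch, assuming illustrative dimension keys rather than our production schema:

```ts
// Overall score = weighted average of the four dimension scores.
// Key names are illustrative, not the real schema.
type Dimension = "headline" | "offer" | "visualTone" | "scentIntent";
type Scores = Record<Dimension, number>;  // each 0–10
type Weights = Record<Dimension, number>; // fractions summing to 1

function overallScore(scores: Scores, weights: Weights): number {
  return (Object.keys(scores) as Dimension[]).reduce(
    (sum, d) => sum + scores[d] * weights[d],
    0,
  );
}
```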
The four dimensions.
// dim · 01
Headline match
weight (G) 35%
Does the page's H1 echo the ad's headline or implied keyword? A visitor scanning above the fold should see the language they just clicked on.
9–10 · earns
H1 contains the exact ad headline or the keyword theme the ad targets.
0–2 · earns
H1 talks about a different product or category than the ad.
// dim · 02
Offer continuity
weight (G) 30%
Is the specific offer the ad promised discoverable above the fold? CTAs, pricing, free trials, downloads, demos. Whatever the ad said the visitor would get.
9–10 · earns
Ad's primary offer is the page's primary CTA, with minimal friction between click and offer.
0–2 · earns
Page's above-fold CTA is a different (often higher-friction) action than the ad promised.
// dim · 03
Visual + tone
weight (M) 35%
Do the page's visual identity and tone match the ad's creative? Color palette, typography, urgency, formality, and imagery should feel continuous.
9–10 · earns
Page hero looks and reads like a continuation of the ad creative.
0–2 · earns
Ad is playful, page is corporate. Or: ad urgent, page reads like reference documentation. Tonal whiplash.
// dim · 04
Scent + intent
weight (G) 20%
Does the page answer the implied search intent without forcing a hunt? A visitor should not need to scroll three screens to confirm they are in the right place.
9–10 · earns
Intent is confirmed in the first 600 vertical pixels.
0–2 · earns
Intent confirmation requires scrolling past 2+ unrelated sections.
Weights by platform.
Different platforms reward different dimensions. Our weights reflect what actually moves performance on each platform.
| Dimension | Google | Meta | LinkedIn |
|---|---|---|---|
| Headline match | 35% | 20% | 20% |
| Offer continuity | 30% | 25% | 30% |
| Visual + tone | 15% | 35% | 30% |
| Scent + intent | 20% | 20% | 20% |
Google paid search is keyword-driven, so the headline echo dominates. Meta is creative-led, so the visual and tonal continuity dominate. LinkedIn sits between, with a structural premium on offer continuity because B2B visitors expect a professional follow-through.
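Expressed as data, the table reads as follows; a sketch reusing the `Weights` shape from the earlier snippet (the percentages are our documented weights, the key names remain illustrative):

```ts
// Platform weight table. Each row sums to 1.0.
const PLATFORM_WEIGHTS: Record<"google" | "meta" | "linkedin", Weights> = {
  google:   { headline: 0.35, offer: 0.30, visualTone: 0.15, scentIntent: 0.20 },
  meta:     { headline: 0.20, offer: 0.25, visualTone: 0.35, scentIntent: 0.20 },
  linkedin: { headline: 0.20, offer: 0.30, visualTone: 0.30, scentIntent: 0.20 },
};
```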
Grading scale.
Letter grades are derived from the weighted overall score. The cut-offs are deliberately conservative: most paid landing pages we audit fall in the C and D bands.
A · 8.0+
Strong match across all dimensions.
B · 6.5 – 7.9
Solid match with one or two soft spots.
C · 5.0 – 6.4
Mixed. Likely missing the dominant dimension for the platform.
D · 3.5 – 4.9
Notably misaligned. Burns ad spend.
F · < 3.5
Severe mismatch. Ad and page tell different stories.
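The band mapping is mechanical; a sketch, using the letter labels from the bands above:

```ts
// Map a weighted overall score (0–10) to its letter band.
function gradeBand(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score >= 8.0) return "A";
  if (score >= 6.5) return "B";
  if (score >= 5.0) return "C";
  if (score >= 3.5) return "D";
  return "F";
}
```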
Scoring pipeline.
The same seven-step pipeline runs for every report, whether it comes from the seed corpus or a single-ad audit. Reproducibility comes from prompt-caching the rubric and pinning the model to a low temperature. Illustrative sketches of the render, scoring, gate, and publish steps follow the list.
1. Fetch. We pull the ad creative from the public ad library (Google Ads Transparency Center, Meta Ad Library) or accept it manually from the user.
2. Render. We open the landing page with Playwright at a 1280×800 viewport, capture the above-fold screenshot, and extract the visible text.
3. Normalize. We parse the ad headline, description, and any extracted keyword theme. We parse the page's H1, subheadline, primary CTA, and hero text.
4. Score. A Claude prompt with the rubric (cached as a system prompt) returns four dimension scores plus reasoning text. Temperature 0.2 for consistency.
5. Compose. The LLM also generates the editorial analysis, top three fixes, and the rewrite preview, in a separate structured call.
6. Gate. The quality gate rejects reports with missing required fields, all-zero or all-ten distributions, or generic editorial. Rejects go to a manual review queue.
7. Publish. Passing reports get an indexable URL, sitemap inclusion, and an Article JSON-LD payload.
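A minimal sketch of the render step (step 2) in plain Playwright. The Browserbase connection, wait strategy, and selectors here are assumptions, not our production extraction logic:

```ts
import { chromium } from "playwright";

// Render at 1280×800, capture the above-fold screenshot, extract hero text.
async function renderAboveFold(url: string) {
  const browser = await chromium.launch(); // production connects to Browserbase instead
  const page = await browser.newPage({ viewport: { width: 1280, height: 800 } });
  // Cold visit: no cookies, no referrer (see the FAQ on dynamic pages).
  await page.goto(url, { waitUntil: "networkidle" });
  const screenshot = await page.screenshot({
    clip: { x: 0, y: 0, width: 1280, height: 800 }, // above-fold only
  });
  const h1 = await page.locator("h1").first().textContent();
  await browser.close();
  return { screenshot, h1 };
}
```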
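The scoring call (step 4) in outline, using the Anthropic SDK's prompt caching. The rubric constant, model id, and prompt format are placeholders:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
const RUBRIC_V1_2 = "<full rubric text>"; // placeholder for the real rubric

async function scoreMessageMatch(adText: string, pageText: string) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // illustrative model id
    max_tokens: 1024,
    temperature: 0.2, // pinned low for run-to-run consistency
    system: [
      {
        type: "text",
        text: RUBRIC_V1_2,
        cache_control: { type: "ephemeral" }, // prompt-cache the rubric
      },
    ],
    messages: [
      {
        role: "user",
        content: `AD:\n${adText}\n\nPAGE (above fold):\n${pageText}`,
      },
    ],
  });
  return response.content; // four dimension scores plus reasoning text
}
```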
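The quality gate's structural checks (step 6) reduce to a few predicates. Field names are illustrative, and the generic-editorial check (the hard one) is not shown:

```ts
interface Report {
  scores: number[];  // the four dimension scores, 0–10
  editorial: string;
  fixes: string[];   // top three fixes
}

function passesGate(report: Report): boolean {
  if (report.scores.length !== 4) return false; // missing required fields
  if (!report.editorial || report.fixes.length < 3) return false;
  const allSame = (v: number) => report.scores.every((s) => s === v);
  if (allSame(0) || allSame(10)) return false; // degenerate distributions
  return true;
}
```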
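And the shape of the Article JSON-LD payload from step 7; the schema.org keys are standard, the values are placeholders:

```ts
const articleJsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "<report title>",
  datePublished: "<ISO 8601 date>",
  dateModified: "<ISO 8601 date>",
};
```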
What we do not measure.
Equally important: what is not in the rubric. Each of these is a real signal in its own right; none of them are message match.
- × Page speed (LCP, CLS, INP). Real, but outside the message-match question. Use PageSpeed Insights.
- × Conversion rate. We do not know the page's actual conversion rate; we score the alignment, not the outcome.
- × SEO content quality. The page might rank well organically and still score poorly here. Different question.
- × Below-the-fold content. If the answer requires scrolling, it counts as a scent failure, not as below-the-fold content quality.
- × Brand strength. Strong brands can get away with weaker message match; we do not adjust for that.
- × Bid strategy or Quality Score. Different metric, different scope.
Data sources.
Every report cites its inputs. We do not score advertising we cannot link back to a public ad library or a user-supplied creative.
- ↳ Google Ads Transparency Center · Google ad creatives and destination URLs.
- ↳ Meta Ad Library · Meta ad creatives and destination URLs.
- ↳ LinkedIn ad library · Creatives only; destination URLs are user-matched via our match UI because LinkedIn does not expose them.
- ↳ Playwright on Browserbase · Landing-page renders, screenshots, and above-fold extraction.
- ↳ Anthropic Claude · Scoring and editorial composition with a prompt-cached rubric.
Update cadence.
A score is a snapshot of one ad and one above-the-fold capture at one point in time. We keep the corpus current with three rules.
- ↳ Rubric version v1.2 (published 2026-05-12). When weights or definitions change, the version increments and existing reports are re-scored within 7 days.
- ↳ Reports re-audit every 30 days. If the above-fold content has changed since the last audit (detected via a hash diff, sketched below), we regenerate the report and bump lastReviewed.
- ↳ Reports whose ad or page 404s are marked archived and excluded from corpus stats.
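The hash diff behind the 30-day re-audit rule is a content hash of the normalized above-fold text; the normalization here is an assumption, not the production logic:

```ts
import { createHash } from "node:crypto";

// Hash the normalized above-fold text; compare against the stored hash.
function aboveFoldHash(text: string): string {
  const normalized = text.replace(/\s+/g, " ").trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

function needsReaudit(currentText: string, lastHash: string): boolean {
  return aboveFoldHash(currentText) !== lastHash;
}
```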
Frequently asked questions.
Is this peer reviewed?
No. The rubric is editorial, not academic. Weights are based on our own pattern observation across the audits we have published plus the available literature on landing-page conversion. We invite feedback; see contact on the why-this-exists page.
Why an LLM instead of a deterministic rule engine?
Because message-match judgments are linguistic, not boolean. A keyword match like "landing page builder" vs "landing-page-builder" should not score differently; that is the kind of judgment LLMs handle. We use temperature 0.2 and prompt-caching to keep scores reproducible across runs.
Can I dispute a score?
Yes. Every report has a "report an issue" link. We review disputes within 7 days and re-score if warranted. We do not remove scores just because the advertiser does not like them.
How do you handle dynamic landing pages?
We render the page as a no-cookie, no-referrer browser visit. If the page personalizes by referrer or URL parameter, we may capture a different above-fold than your real visitor sees. We are working on a view-as mode that respects ad-attribution parameters.