Foresyte·Methodology·Working version · 2026-04-30
The honest version

How Foresyte does the maths.

Every number on a Foresyte page — your score, your tier, the crowd range, a community estimate — comes out of one of the formulas below. We’d rather be a bit boring and a lot transparent than dress this up.

§1

How a pick is scored

The dollar mechanic

Each pick is a dollar prediction. When the property sells, your error is measured as a percentage of the actual sale price. Within 15% of actual earns points; beyond 15% earns zero. The closer you call it, the more points you score.

base_points = max(0, 10 × (1 − |predicted − actual| / actual / 0.15))
final_points = base_points × confidence_multiplier × no_guide_multiplier

Multipliers stack:

  • Confidence: Low 0.5× / Medium 1.0× / High 1.25×. High confidence multiplies your win and your error — picking High when you’re unsure is a fast way to lose points.
  • No guide: 1.5× when the agent published no price guide. The hardest reads pay best.

Theoretical max on a single pick is 18.75 points (perfect prediction · High confidence · no guide). A realistic upper-end Saturday total is in the 30s.
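
The two formulas above can be sketched in Python. This is an illustrative sketch, not product code; the function name is ours, and the multiplier values come from the bullets above.

```python
def score_pick(predicted: float, actual: float,
               confidence: str, no_guide: bool) -> float:
    """Score one pick per the Foresyte formulas (illustrative sketch only)."""
    error_ratio = abs(predicted - actual) / actual          # error as a % of actual
    base_points = max(0.0, 10 * (1 - error_ratio / 0.15))   # zero beyond 15% off
    conf_mult = {"low": 0.5, "medium": 1.0, "high": 1.25}[confidence]
    guide_mult = 1.5 if no_guide else 1.0
    return base_points * conf_mult * guide_mult

# A perfect call at High confidence with no guide hits the theoretical max:
print(score_pick(1_500_000, 1_500_000, "high", True))  # 18.75
```

Note how the zero floor works: a pick 20% off scores nothing regardless of multipliers.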

§2

What happens when an auction doesn't simply sell

Five-state resolution

Auctions don’t always settle on Saturday. Foresyte resolves every pick into one of five states:

● Scored — Sold at or after auction with a disclosed price. Compared, scored, done.
◐ Pending (pass-in) — Passed in on Saturday. Resolves against the private sale within 30 days. Voids if it doesn't sell in that window — bonus pick credited to next week.
◑ Pending (withheld) — Sold under the hammer; price withheld. Resolves against NSW Valuer-General settled-sale data, typically 6–12 weeks later. Voids at 90 days if the VG hasn't published.
○ Voided (withdrawn) — Vendor pulled the listing. Not your fault. Bonus pick credited to next week.
○ Voided (sold before) — Sold pre-auction; no public price moment. Bonus pick credited.

A bonus pick means you choose six of the next twenty instead of five. They compound across weeks, so a four-voided week credits four bonus picks the following Friday.
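One way to model the five states and the bonus-pick rule (a sketch; all names are ours, and pass-ins or withhelds that time out would land in a voided state before counting):

```python
from enum import Enum

class PickState(Enum):
    SCORED = "scored"                   # sold with a disclosed price
    PENDING_PASS_IN = "pass-in"         # awaiting private sale (30-day window)
    PENDING_WITHHELD = "withheld"       # awaiting NSW VG data (90-day cap)
    VOIDED_WITHDRAWN = "withdrawn"      # vendor pulled the listing
    VOIDED_SOLD_BEFORE = "sold before"  # sold pre-auction

VOIDED = {PickState.VOIDED_WITHDRAWN, PickState.VOIDED_SOLD_BEFORE}

def picks_next_week(final_states, base_picks=5):
    """Each voided pick credits one bonus pick the following Friday."""
    return base_picks + sum(1 for s in final_states if s in VOIDED)

# A four-voided week: choose nine of the next twenty instead of five.
print(picks_next_week([PickState.VOIDED_WITHDRAWN] * 4))  # 9
```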

§3

Calibration tiers

MAPE over your last three months

Your calibration score is the mean absolute percentage error (MAPE) across your resolved predictions over the trailing three months. ELI5: average how far off you were each time, in percent. Lower is better.

Platinum — <5% MAPE · 20+ resolved · 1.5× weight in crowd aggregates
Gold — 5–8% MAPE · 20+ resolved · 1.0× weight
Silver — 8–12% MAPE · 20+ resolved · 0.6× weight
Unrated — below 20 resolved · 0.3× weight

You stay Unrated until you have at least 20 resolved predictions — calibration is unreliable below that. Tiers re-anchor at three months once we see the real MAPE distribution; for now these thresholds are the founder’s informed guess at where “quite good” lives.
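
A minimal sketch of the tier mapping. We use an unweighted MAPE for clarity (§5's recency weighting would replace the plain mean), and since the tiers above don't say what a 20+-resolved player beyond 12% MAPE gets, we assume the Unrated weight.

```python
def mape(resolved):
    """Mean absolute percentage error over (predicted, actual) pairs."""
    return sum(abs(p - a) / a for p, a in resolved) / len(resolved)

def tier(resolved):
    """Trailing-3-month resolved picks -> (tier, crowd weight)."""
    if len(resolved) < 20:
        return "Unrated", 0.3
    m = mape(resolved)
    if m < 0.05:
        return "Platinum", 1.5
    if m < 0.08:
        return "Gold", 1.0
    if m < 0.12:
        return "Silver", 0.6
    return "Unrated", 0.3  # assumption: >12% MAPE keeps the Unrated weight

# Consistently 3% off across 25 resolved picks:
print(tier([(1_030_000, 1_000_000)] * 25))  # ('Platinum', 1.5)
```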

§4

Per-LGA expertise

Sydney is not one market

A player who has 20+ resolved predictions in a single LGA at under 6% MAPE earns domain expert status for that LGA. Their predictions in that LGA contribute at 1.5× to the suburb aggregate, on top of (or instead of) their overall tier weight.

The reason is honest: a Platinum-tier player whose 47 resolved predictions are all in the Inner West shouldn’t be weighted as an authority on Cronulla. The leaderboard badge stays based on your overall tier; the per-LGA tier sets your weight inside that suburb’s crowd aggregate.
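
The per-LGA rule, sketched. The prose above leaves open whether the 1.5× sits on top of or replaces the overall tier weight; taking the larger of the two is our assumption, not the spec's.

```python
def suburb_weight(overall_tier_weight: float,
                  lga_resolved: int, lga_mape: float) -> float:
    """Weight inside one LGA's crowd aggregate (illustrative sketch)."""
    if lga_resolved >= 20 and lga_mape < 0.06:   # domain expert threshold
        return max(overall_tier_weight, 1.5)     # assumption: take the max
    return overall_tier_weight

# A Silver-tier player with a strong local track record outweighs their badge:
print(suburb_weight(0.6, lga_resolved=30, lga_mape=0.05))  # 1.5
# The same player in an LGA they've never called stays at their tier weight:
print(suburb_weight(0.6, lga_resolved=0, lga_mape=0.0))    # 0.6
```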

§5

Recency weighting

Markets shift

Your MAPE is recency-weighted across the trailing 3-month window: last week’s resolved picks count more than picks from three months ago. The half-life is locked at 6 weeks at launch and will be revisited at month 3 once we see real volume.

Crowd aggregates apply the same recency weighting to predictions: a high-credibility tipster’s estimate from last Wednesday counts more than the same tipster’s estimate from January.
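
The half-life decay is a standard exponential; the 6-week figure is from above, the function name is ours.

```python
HALF_LIFE_WEEKS = 6  # locked at launch, revisited at month 3

def recency_weight(age_weeks: float) -> float:
    """Weight halves every HALF_LIFE_WEEKS of age."""
    return 0.5 ** (age_weeks / HALF_LIFE_WEEKS)

print(recency_weight(0))   # 1.0  — this week counts in full
print(recency_weight(6))   # 0.5  — six weeks old, half weight
print(recency_weight(12))  # 0.25 — three months back, a quarter
```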

§6

The crowd estimate

Weighted median, not unweighted mean

For every scheduled Sydney auction, the crowd estimate is computed as a single credibility-weighted aggregate. The number is the weighted median — not the mean — of all predictions submitted before lock, with each prediction weighted by:

  • The player’s overall calibration tier (§3)
  • Their per-LGA expertise on this suburb (§4)
  • The confidence they assigned the pick (Low / Medium / High)
  • Recency half-life decay (§5)
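
A weighted median is the smallest value whose cumulative weight reaches half the total weight. A sketch (tie and interpolation conventions vary; a production version would interpolate):

```python
def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2
    cum = 0.0
    for value, weight in pairs:
        cum += weight
        if cum >= half:
            return value

# Three predictions (in $m): two Unrated players at 0.3x, one Platinum at 1.5x.
# The unweighted median would be 1.50; the weighted median follows the Platinum pick.
print(weighted_median([1.40, 1.50, 1.80], [0.3, 0.3, 1.5]))  # 1.8
```

This is why "weighted median, not unweighted mean" matters: one calibrated voice moves the number more than two uncalibrated ones, and outliers can't drag it the way they drag a mean.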

Each crowd estimate carries a confidence pill — High, Moderate, or Low — derived from how many high-credibility contributors weighed in, how recent their predictions are, and how tight the IQR (interquartile range) is around the median.

IQR · ELI5

The width of the middle 50% of weighted predictions. A tight IQR means everyone roughly agrees; a wide IQR means the crowd is split. We’d rather publish “the crowd disagrees” honestly than fake consensus with a single number.
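
IQR width relative to the median, sketched with the standard library. Unweighted for simplicity; the product computes it over weighted predictions.

```python
from statistics import median, quantiles

def iqr_width_pct(values):
    """Width of the middle 50% as a fraction of the median."""
    q1, _, q3 = quantiles(values, n=4)  # the three quartiles
    return (q3 - q1) / median(values)

tight = [1.50, 1.52, 1.51, 1.49, 1.50, 1.48]  # the crowd roughly agrees
split = [1.20, 1.80, 1.45, 1.95, 1.30, 1.70]  # the crowd is split
print(iqr_width_pct(tight) < 0.08)  # True — clears the High bar
print(iqr_width_pct(split) < 0.08)  # False
```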

A High-confidence signal requires all three:

  • ≥100 high-credibility contributors
  • IQR width <8% of the weighted median
  • ≥1 Platinum-tier contributor with per-LGA expertise on the suburb

§7

Community estimates · withheld auctions

Crowd-sourced tips, weighted

About 8–15% of Sydney auctions sell with the price withheld. NSW Valuer-General catches up 6–12 weeks later. In the immediate window, players sometimes know — a neighbour mentioned theirs, a partner works in real estate, a mate was at the open. The Submit a tip CTA appears only on withheld auctions, and only after the auction has been called.

A tip is weighted by source category × the tipster’s credibility multiplier:

At auction — 1.0× · heard the price called
Personal connection (buyer or seller) — 0.8× · first-hand from a transaction party
Industry professional — 0.9× · verification requested; status is removed if abused
Agent or industry source — 0.6× · second-hand, telephone-effect risk
Neighbour or friend — 0.4× · multi-hop, rounding, upward bias
Other / unsure — 0.3× · unknown provenance

Tipster credibility starts at 0.3× for everyone at launch. Tips back-test against NSW VG when settled-sale data lands ~90 days later; accurate tips lift the multiplier toward 1.5×, off ones drop it toward 0.2×.
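
Putting the table and the credibility multiplier together (the category keys are ours):

```python
SOURCE_WEIGHT = {
    "at_auction": 1.0,
    "personal_connection": 0.8,
    "industry_professional": 0.9,
    "agent_or_industry": 0.6,
    "neighbour_or_friend": 0.4,
    "other": 0.3,
}

def tip_weight(source: str, tipster_credibility: float) -> float:
    """Effective tip weight = source category x tipster credibility (0.2x-1.5x)."""
    return SOURCE_WEIGHT[source] * tipster_credibility

# At launch everyone starts at 0.3x credibility:
print(tip_weight("at_auction", 0.3))  # 0.3
# A proven tipster who heard the price called:
print(tip_weight("at_auction", 1.5))  # 1.5
```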

A community estimate publishes only when all three thresholds are met:

  • ≥3 tips from distinct accounts
  • Credibility-weighted equivalent of ≥1.5 high-credibility tipsters
  • IQR width ≤12% of the credibility-weighted median

When all three are met, the published estimate is an A$100k-bucketed range — never a single number — with a confidence label:

High — 10+ tips · ≥2 high-credibility · IQR ≤6%
Moderate — 5–9 tips · ≥1 high-credibility · IQR ≤10%
Low — 3–4 tips, OR no high-credibility, OR IQR 8–12%
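
The publish gate and the label table, as one sketch. The precedence (check the three publish thresholds first, then label top-down) is our reading, and we treat "high-credibility" uniformly as the credibility-weighted equivalent count; the table itself mixes raw and weighted counts.

```python
def community_estimate_label(n_tips, weighted_high_cred, iqr_pct):
    """None -> one of the 'no estimate' messages; else a confidence label."""
    # Publish gate: all three thresholds must pass.
    if n_tips < 3 or weighted_high_cred < 1.5 or iqr_pct > 0.12:
        return None
    if n_tips >= 10 and weighted_high_cred >= 2 and iqr_pct <= 0.06:
        return "High"
    if n_tips >= 5 and weighted_high_cred >= 1 and iqr_pct <= 0.10:
        return "Moderate"
    return "Low"

print(community_estimate_label(2, 2.0, 0.05))   # None — too few tips
print(community_estimate_label(12, 2.5, 0.05))  # High
print(community_estimate_label(6, 1.8, 0.09))   # Moderate
```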

If a threshold fails, the surface says one of: “Limited tips received — no estimate yet”, “Tips received but inconsistent — investigating”, or “Awaiting NSW Valuer-General confirmation”. Honest is better than authoritative-sounding.

Thresholds will be revisited at month 6 against actual tip volume. If we’re consistently getting 10+ tips per withheld auction with tight IQRs, we’ll tighten toward 5 tips / ≤8% IQR.

§8

What Foresyte is not

The framing

Every published number is what the calibrated crowd thinks — never a verdict on what a property is worth. That distinction matters legally (Australian Consumer Law s 18 governs “misleading or deceptive” conduct) and matters culturally: we’re a prediction game, not an automated valuation model. Foresyte doesn’t replace a buyer’s agent, a building inspection, or a strata report.

It’s also unconditionally free to play. No entry fees, no cash prizes, no peer-to-peer wagering, no “shares” in outcomes. Foresyte sits outside Australian gambling regulation by construction — and the verbs we use here (predict, reckon, call it) reflect that.


Working version. Thresholds and weights documented here re-anchor against real data at month 3 and month 6. Material changes will be dated in product-decisions.md.

Spec source: Foresyte_Reckon_v2_Spec.md §2 (scoring), §3 (states), §4 (calibration), §5 (tips), §6 (crowd estimate).

Numbers anywhere on Foresyte are what the calibrated crowd thinks — not what any property is worth.

Spotted something this page should explain better, or got a question we’ve dodged? Tell us — methodology is a working document, not scripture.

Open this week’s picks →