How Foresyte does the maths
Every number on a Foresyte page — your score, your tier, the crowd range, a community estimate — comes out of one of the formulas below. We’d rather be a bit boring and a lot transparent than dress this up.
How a pick is scored
The dollar mechanic
Each pick is a dollar prediction. When the property sells, your prediction is compared to the sale price, with your error measured as a percentage of the actual. Within 15% of actual earns points; beyond 15% earns zero. The closer you call it, the more points you score.
base_points = max(0, 10 × (1 − |predicted − actual| / actual / 0.15))
final_points = base_points × confidence_multiplier × no_guide_multiplier
Multipliers stack:
- Confidence: Low 0.5× / Medium 1.0× / High 1.25×. High confidence multiplies your win and your error — picking High when you’re unsure is a fast way to lose points.
- No guide: 1.5× when the agent published no price guide. The hardest reads pay best.
The theoretical max on a single pick is 18.75 points (perfect prediction · High confidence · no guide). A realistic upper end for a full Saturday of picks is in the 30s.
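In code, the scoring formula above reads as follows (a sketch; the function and parameter names are illustrative, not Foresyte's internals):

```python
def score_pick(predicted, actual, confidence_multiplier=1.0, no_guide=False):
    """Score one pick: base_points = max(0, 10 * (1 - |pred - actual| / actual / 0.15))."""
    error_ratio = abs(predicted - actual) / actual        # error as a fraction of actual
    base_points = max(0.0, 10 * (1 - error_ratio / 0.15))  # zero beyond 15% error
    no_guide_multiplier = 1.5 if no_guide else 1.0
    return base_points * confidence_multiplier * no_guide_multiplier

# Perfect call at High confidence (1.25x) with no guide (1.5x): the 18.75 max.
best = score_pick(1_500_000, 1_500_000, confidence_multiplier=1.25, no_guide=True)
```

A 7.5% miss at Medium confidence lands at exactly half the base, 5 points, which is why the formula is linear inside the 15% band.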
What happens when an auction doesn't simply sell
Five-state resolution
Auctions don’t always settle on Saturday. Foresyte resolves every pick into one of five states.
A bonus pick means you choose six of the next twenty instead of five. Bonus picks compound across weeks, so a week with four voided picks credits four bonus picks the following Friday.
Calibration tiers
MAPE over your last three months
Your calibration score is the mean absolute percentage error (MAPE) across your resolved predictions over the trailing three months. ELI5: average how far off you were each time, in percent. Lower is better.
You stay Unrated until you have at least 20 resolved predictions — calibration is unreliable below that. Tiers re-anchor at three months, once we see the real MAPE distribution; for now these thresholds are the founder’s informed guess at where “quite good” lives.
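The calibration score is a plain MAPE with the 20-prediction floor. A minimal sketch (names are illustrative):

```python
def mape(predictions):
    """MAPE over (predicted, actual) pairs, as a percentage.
    Returns None (Unrated) below 20 resolved predictions."""
    if len(predictions) < 20:
        return None
    errors = [abs(p - a) / a for p, a in predictions]
    return 100 * sum(errors) / len(errors)
```

So a player who is consistently 5% off carries a MAPE of 5.0, regardless of whether they miss high or low.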
Per-LGA expertise
Sydney is not one market
A player who has 20+ resolved predictions in a single LGA at under 6% MAPE earns domain expert status for that LGA. Their predictions in that LGA contribute at 1.5× to the suburb aggregate, on top of (or instead of) their overall tier weight.
The reason is honest: a Platinum-tier player whose 47 resolved predictions are all in the Inner West shouldn’t be weighted as an authority on Cronulla. The leaderboard badge stays based on your overall tier; the per-LGA tier sets your weight inside that suburb’s crowd aggregate.
Recency weighting
Markets shift
Your MAPE is recency-weighted across the trailing 3-month window: last week’s resolved picks count more than picks from three months ago. The half-life is locked at 6 weeks at launch and will be revisited at month 3, once we see real volume.
Crowd aggregates apply the same recency weighting to predictions: a high-credibility tipster’s estimate from last Wednesday counts more than the same tipster’s estimate from January.
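A 6-week half-life is standard exponential decay: a pick's weight halves every 6 weeks of age. A sketch of the decay and of a recency-weighted MAPE (function names are illustrative):

```python
HALF_LIFE_WEEKS = 6  # locked at launch, per the section above

def recency_weight(age_weeks):
    """Weight halves every HALF_LIFE_WEEKS of age."""
    return 0.5 ** (age_weeks / HALF_LIFE_WEEKS)

def recency_weighted_mape(picks):
    """picks: (abs_pct_error, age_weeks) pairs. Weighted mean of the errors."""
    weights = [recency_weight(age) for _, age in picks]
    total = sum(w * err for (err, _), w in zip(picks, weights))
    return total / sum(weights)
```

A pick from this week counts at full weight, one from six weeks ago at half, one from twelve weeks ago at a quarter.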
The crowd estimate
Weighted median, not unweighted mean
For every scheduled Sydney auction, the crowd estimate is computed as a single credibility-weighted aggregate. The number is the weighted median — not the mean — of all predictions submitted before lock, with each prediction weighted by:
- The player’s overall calibration tier (§3)
- Their per-LGA expertise on this suburb (§4)
- The confidence they assigned the pick (Low / Medium / High)
- Recency half-life decay (§5)
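The page lists four weighting factors but doesn't specify how they combine into one weight per prediction, so the sketch below takes each prediction's final weight as given and shows the weighted median itself: the smallest value at which cumulative weight reaches half the total.

```python
def weighted_median(values, weights):
    """Smallest value at which cumulative weight reaches half the total weight."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            return value
    return pairs[-1][0]  # fallback; unreachable for positive weights
```

This is why the median is the right choice over the mean: one wild outlier prediction barely moves it, whereas a mean would be dragged toward the outlier.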
Each crowd estimate carries a confidence pill — High, Moderate, or Low — derived from how many high-credibility contributors weighed in, how recent their predictions are, and how tight the IQR (interquartile range) is around the median.
The IQR is the width of the middle 50% of weighted predictions. A tight IQR means everyone roughly agrees; a wide IQR means the crowd is split. We’d rather publish “the crowd disagrees” honestly than fake consensus with a single number.
A High-confidence signal requires all three:
- ≥100 high-credibility contributors
- IQR width <8% of the weighted median
- ≥1 Platinum-tier contributor with per-LGA expertise on the suburb
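The High gate is a strict conjunction of the three criteria above. The page doesn't spell out where Moderate ends and Low begins, so the split below is an illustrative assumption:

```python
def crowd_confidence(high_cred_contributors, iqr_width_pct, has_platinum_lga_expert):
    """High requires all three gates from the page. The Moderate/Low split
    below is an illustrative assumption, not a documented rule."""
    if (high_cred_contributors >= 100
            and iqr_width_pct < 8.0
            and has_platinum_lga_expert):
        return "High"
    # Assumption: meeting either of the first two gates earns Moderate.
    if high_cred_contributors >= 100 or iqr_width_pct < 8.0:
        return "Moderate"
    return "Low"
```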
Community estimates · withheld auctions
Crowd-sourced tips, weighted
About 8–15% of Sydney auctions sell with the price withheld. The NSW Valuer-General catches up 6–12 weeks later. In the immediate window, players sometimes know — a neighbour mentioned theirs, a partner works in real estate, a mate was at the open. The Submit a tip CTA appears only on withheld auctions, and only after the auction has been called.
A tip is weighted by source category × the tipster’s credibility multiplier:
Tipster credibility starts at 0.3× for everyone at launch. Tips back-test against NSW VG when settled-sale data lands ~90 days later; accurate tips lift the multiplier toward 1.5×, inaccurate ones drop it toward 0.2×.
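The page gives the 0.3× start and the 0.2×–1.5× bounds but not the update rule, so the nudge-toward-a-target sketch below is an assumption; the accuracy threshold and step size are invented for illustration:

```python
def update_credibility(multiplier, abs_pct_error,
                       accurate_threshold=10.0, step=0.1):
    """One back-tested tip nudges the multiplier toward 1.5x (accurate)
    or 0.2x (off). Threshold and step are illustrative assumptions; the
    page only specifies the 0.3x start and the 0.2x-1.5x bounds."""
    target = 1.5 if abs_pct_error <= accurate_threshold else 0.2
    multiplier += step * (target - multiplier)
    return min(1.5, max(0.2, multiplier))
```

Under this sketch, a single accurate back-test lifts a new tipster from 0.3× to 0.42×, and repeated misses decay them toward the 0.2× floor without ever crossing it.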
A community estimate publishes only when all three thresholds are met:
- ≥3 tips from distinct accounts
- Credibility-weighted equivalent of ≥1.5 high-credibility tipsters
- IQR width ≤12% of the credibility-weighted median
When all three are met, the published estimate is an A$100k-bucketed range — never a single number — with a confidence label:
If a threshold fails, the surface says one of: “Limited tips received — no estimate yet”, “Tips received but inconsistent — investigating”, or “Awaiting NSW Valuer-General confirmation”. Honest is better than authoritative-sounding.
Thresholds will be revisited at month 6 against actual tip volume. If we’re consistently getting 10+ tips per withheld auction with tight IQRs, we tighten toward a 5-tip / 8%-IQR gate.
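Putting the three publish thresholds and the A$100k bucketing together (a sketch; function names are illustrative, and anchoring the bucket by flooring the weighted median is an assumption about how the range is placed):

```python
def community_estimate(tips, iqr_width_pct, bucket=100_000):
    """tips: (price, credibility_weight) pairs from distinct accounts.
    Publishes a bucketed range only when all three thresholds hold;
    failure messages are the page's own wordings."""
    if len(tips) < 3 or sum(w for _, w in tips) < 1.5:
        return "Limited tips received — no estimate yet"
    if iqr_width_pct > 12.0:
        return "Tips received but inconsistent — investigating"
    # Credibility-weighted median of tip prices.
    pairs = sorted(tips)
    half = sum(w for _, w in pairs) / 2
    cumulative, median = 0.0, pairs[-1][0]
    for price, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            median = price
            break
    low = (median // bucket) * bucket  # assumed: snap down to an A$100k bucket
    return (low, low + bucket)         # never a single number
```

Three 0.6×-credibility tips clear the 1.5-equivalent bar together, which is the point of the weighted threshold: it takes either a few trusted tipsters or many unproven ones.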
What Foresyte is not
The framing
Every published number is what the calibrated crowd thinks — never a verdict on what a property is worth. That distinction matters legally (Australian Consumer Law s 18 governs “misleading or deceptive” conduct) and matters culturally: we’re a prediction game, not an automated valuation model. Foresyte doesn’t replace a buyer’s agent, a building inspection, or a strata report.
It’s also unconditionally free to play. No entry fees, no cash prizes, no peer-to-peer wagering, no “shares” in outcomes. Foresyte sits outside Australian gambling regulation by construction — and the verbs we use here (predict, reckon, call it) reflect that.
Spotted something this page should explain better, or got a question we’ve dodged? Tell us — methodology is a working document, not scripture.
Open this week’s picks →