Predictive Cross-Channel Attribution
Click-time attribution is correct but incomplete — you’re making decisions on data that won’t be final for weeks. Predictive Attribution closes that window.
01. Click-Time Attribution Is Correct but Incomplete
The Cross-Channel Attribution page explains why click-time reporting is the right foundation: revenue is attributed to the date of the click, not the date of the conversion. This eliminates the seasonal distortions that make conversion-time ROAS unreliable.
But click-time reporting has a structural gap. Conversions arrive with a delay. A click today might not convert for days or weeks. Until that conversion arrives, the click looks like it produced nothing. Recent cohorts always look understated because their conversions are still in transit.
The delay is not small
For a typical retailer, only 40% of conversions happen within 7 days of the click. Another 21% arrive between days 8 and 30. The remaining 39% take 30 days or longer. When you look at last week's campaigns, you are seeing less than half the picture.
This creates an impossible tradeoff. You can wait until the data matures — but by then the optimization window has closed. Campaigns have been paused, budgets have shifted, and the feedback is stale. Or you can act on incomplete data — but every decision is based on numbers that will look different in three weeks.
Neither option works. Waiting loses you optimization time. Acting early gives you the wrong answer. The only way out is prediction: project what the final numbers will look like based on the maturation pattern of older, fully-converted cohorts.
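The intuition behind cohort-based projection can be sketched in a few lines of Python. All numbers here are illustrative, and the production model works at the user level (section 02) rather than on cohort ratios, but the principle is the same: scale what you have observed by how complete a cohort of that age typically is.

```python
# Hypothetical maturation curve estimated from fully-converted cohorts:
# fraction of final conversions observed by day N after the click
# (values loosely follow the retailer example: ~40% by day 7, ~61% by day 30).
maturation_curve = {7: 0.40, 30: 0.61, 60: 1.00}

def project_final(observed: float, cohort_age_days: int) -> float:
    """Scale observed conversions by expected completeness at this cohort age."""
    eligible = [v for k, v in maturation_curve.items() if cohort_age_days <= k]
    completeness = min(eligible) if eligible else 1.0  # fully mature past the window
    return observed / completeness

# A week-old cohort with 40 observed conversions projects to ~100 final.
print(round(project_final(40, 7)))  # -> 100
```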
Click-time attribution is correct but incomplete. You are making decisions on data that will not be final for weeks. Predictive Attribution closes that window.
02. How Prediction Works
Predictive Attribution works at the individual user level. For every visitor who has not yet converted, a machine learning model estimates the probability they will convert within the maturation window. These probabilities are summed across all unconverted users to produce a projected conversion total for each campaign, channel, and time period.
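The summation step is simple expected-value arithmetic. A minimal sketch with hypothetical visitors and probabilities:

```python
from collections import defaultdict

# Hypothetical unconverted visitors with model-estimated conversion
# probabilities (campaign names and values are illustrative).
unconverted_users = [
    {"campaign": "prospecting_us", "p_convert": 0.02},
    {"campaign": "prospecting_us", "p_convert": 0.10},
    {"campaign": "retargeting_eu", "p_convert": 0.35},
    {"campaign": "retargeting_eu", "p_convert": 0.55},
]

# Summing per-user probabilities yields the expected number of
# still-to-arrive conversions for each campaign.
projected = defaultdict(float)
for user in unconverted_users:
    projected[user["campaign"]] += user["p_convert"]

print({k: round(v, 2) for k, v in projected.items()})
```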
Maturation window
The system automatically calculates how long conversions take. It analyzes historical data to find the maturation window — the number of days it takes for 95% of conversions to be reported. This window varies by conversion type: early-funnel events like demo bookings might mature in 7 days, while purchases might take 60 or 90 days.
The maturation window determines how far the model looks forward. For a conversion that matures in 30 days, the model projects for the full 30 days. For the most recent 4–5 weeks of data, this projection fills the gap between what has been observed and what will ultimately arrive.
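Finding the window reduces to a percentile of historical click-to-conversion lags. A sketch with illustrative data (the actual calculation may differ in detail):

```python
import math

# Hypothetical click-to-conversion lags in days (illustrative data).
lags_days = [1, 2, 2, 3, 5, 7, 9, 12, 15, 20, 25, 31, 40, 55, 58, 61, 70, 80, 85, 88]

def maturation_window(lags, coverage=0.95):
    """Smallest lag (in days) by which `coverage` of conversions have arrived."""
    ordered = sorted(lags)
    idx = math.ceil(coverage * len(ordered)) - 1
    return ordered[idx]

print(maturation_window(lags_days))  # -> 85: 19 of 20 conversions arrive within 85 days
```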
Maturation curve showing observed conversions vs projected final total for immature cohorts.
User-level scoring
The model scores each unconverted user based on their engagement signals: page views, return visits, active days, device type, geographic region, traffic source, and campaign characteristics. Users who visit once and never return fade quickly to near-zero probability. Users who come back, browse more pages, and engage more deeply carry a higher projected conversion probability.
This user-level approach is what makes the projection sensitive to traffic quality. If a campaign shifts to higher-intent audiences, the model sees the behavioral difference in real time — it does not rely on historical averages.
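As a toy illustration of engagement-based scoring, here is a logistic score with hand-set weights. The real model is trained rather than hand-weighted, and the features and coefficients below are purely illustrative:

```python
import math

# Hypothetical weights over engagement signals named in the text.
# In practice these come from a trained model, not hand-tuning.
WEIGHTS = {"page_views": 0.15, "return_visits": 0.6, "active_days": 0.4}
BIAS = -4.0

def score(user: dict) -> float:
    """Logistic score: estimated probability the user converts in the window."""
    z = BIAS + sum(WEIGHTS[f] * user.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

one_and_done = {"page_views": 1, "return_visits": 0, "active_days": 1}
engaged = {"page_views": 12, "return_visits": 3, "active_days": 4}
print(score(one_and_done) < 0.05 < score(engaged))  # -> True
```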
Calibration
Raw model scores are calibrated so that the sum of projected conversions matches expected totals. The calibrator is trained on recent mature data — periods old enough that actual conversions are known. This ensures the projection is not just directionally correct but numerically accurate.
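The simplest form of such a calibrator is a single scaling factor fitted on a mature period; production calibrators are typically more sophisticated (e.g. isotonic regression), and all numbers here are hypothetical:

```python
# Hypothetical ratio calibration: scale raw scores so their sum matches
# the known final conversion total of a recent mature period.
raw_scores_mature = [0.10, 0.30, 0.20, 0.40]   # model scores, mature period
actual_conversions_mature = 2                  # final total, known today

calibration = actual_conversions_mature / sum(raw_scores_mature)  # ~2.0

# Apply the fitted factor to scores for the recent, immature period.
raw_scores_recent = [0.05, 0.25, 0.15]
projected = sum(s * calibration for s in raw_scores_recent)
print(round(projected, 2))  # -> 0.9
```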
The model scores every unconverted user individually. The projection is not a statistical average — it reflects the actual behavior of the people on your site right now.
03. Adaptation Over Time
Conversion patterns are not static. The time between a click and a conversion shifts with seasons, promotions, and changes in audience mix. The projection must adapt — without manual intervention.
The model retrains on recent data
The model retrains monthly on the most recent mature data — periods where conversions have fully arrived. This means the model always reflects current conversion behavior. If users start converting faster during a sale, the next training cycle captures that shift. If a new campaign brings a different audience profile, the model learns from their behavior.
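Selecting the training window comes down to date arithmetic: exclude any cohort younger than the maturation window, since its labels are not yet final. The helper and dates below are illustrative, not the product's actual schedule:

```python
from datetime import date, timedelta

def training_window(today: date, maturation_days: int, train_days: int = 90):
    """Most recent span of fully-mature clicks to retrain on (a sketch)."""
    end = today - timedelta(days=maturation_days)   # newest fully-mature day
    start = end - timedelta(days=train_days)
    return start, end

# With a 30-day maturation window, training data ends 30 days before today.
start, end = training_window(date(2024, 6, 1), maturation_days=30)
print(start, end)
```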
Campaign-level sensitivity
Because the model scores individual users based on their engagement, it naturally adapts to changes in traffic quality at the campaign level. A prospecting campaign that brings new, unfamiliar visitors will show lower projected conversion probabilities than a retargeting campaign bringing back engaged users. This distinction happens automatically — no manual campaign classification required.
Seasonal maturation shift. Peak season compresses the curve, pre-season stretches it.
Why this matters for decisions
Without adaptation, projections trained on Q3 data would systematically mispredict Q4 campaigns. The model's regular retraining on recent data ensures that seasonal and audience shifts are absorbed, not ignored.
04. What You See in Reports
Predictive Attribution adds projected metrics alongside observed ones in every attribution report. You always see both — what has been tracked and what the model expects.
Conversions (Incl. Projected)
The primary metric. It shows observed conversions plus projected conversions that have not happened yet but are statistically likely to occur. For example, if a campaign shows 10 observed conversions and the model projects 3 more, the metric shows 13.
For fully matured cohorts — clicks old enough that conversions have stabilized — the projected portion is zero. The metric equals observed conversions. For recent cohorts, the projected portion fills the gap. The further back you look, the less projection matters.
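The arithmetic is straightforward. A two-cohort example with hypothetical numbers:

```python
# Hypothetical report rows: observed vs projected conversions by cohort age.
rows = [
    {"cohort": "last week",    "observed": 10, "projected": 3.0},
    {"cohort": "3 months ago", "observed": 25, "projected": 0.0},  # fully mature
]

# Conversions (Incl. Projected) = observed + projected still-to-arrive.
for row in rows:
    row["conversions_incl_projected"] = row["observed"] + row["projected"]

print([r["conversions_incl_projected"] for r in rows])  # -> [13.0, 25.0]
```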
Derived metrics
CPA (Incl. Projected) and Conversion Rate (Incl. Projected) are calculated from the projected conversion total. This gives you cost efficiency and conversion rate that reflect what the campaign will deliver once conversions finish arriving — not just what has been recorded so far.
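A worked example with hypothetical spend and click figures, continuing the 10-observed / 3-projected campaign above:

```python
# Hypothetical campaign figures.
spend, clicks = 650.0, 1_000
observed, projected = 10, 3.0
conversions_incl_projected = observed + projected        # 13.0

# Derived metrics use the projected total, not just recorded conversions.
cpa_incl_projected = spend / conversions_incl_projected  # cost per acquisition
cvr_incl_projected = conversions_incl_projected / clicks # conversion rate

print(cpa_incl_projected, cvr_incl_projected)  # -> 50.0 0.013
```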
Coming soon
Projected value metrics — including projected ROAS — are on the roadmap. The same projection logic will extend to revenue, enabling projected ROAS alongside projected conversions.
How projection fades over time
The projection is largest for the most recent period and shrinks as conversions arrive. A campaign from last week might show 60% observed and 40% projected. The same campaign two weeks later shows 85% observed and 15% projected. By the time the maturation window closes, the projection is gone — only hard data remains.
The projection fills the gap in recent data. As conversions arrive, projection is replaced by reality. The total stays stable.
05. Validation
Predictions are only useful if they are accurate. Before enabling projections for any project, the system runs backtests — and you can rerun them at any time.
How backtesting works
Pick a period far enough in the past that the maturation window has fully closed. Look at the data as it appeared at the end of that period: say, 70 observed conversions and a projected final total of 100. Then look at today's actual total for the same period. If the mature number is close to 100, the projection was accurate.
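That comparison reduces to a simple error calculation (all numbers hypothetical):

```python
# Hypothetical backtest: compare the projection made at the end of a past
# period with the actual total known after full maturation.
snapshot_observed = 70       # conversions visible at period end
snapshot_projected = 100.0   # observed + projected, as reported at that time
mature_actual = 104          # final total for the same period, known today

# Absolute percentage error of the projection against reality.
error_pct = abs(snapshot_projected - mature_actual) / mature_actual * 100
print(round(error_pct, 1))  # -> 3.8
```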
This validation runs at multiple levels: overall, by ad platform, and by campaign. The projection is not just correct on average — it is validated where you make decisions.
Backtesting accuracy. Predicted vs actual conversions after full maturation.
Projection accuracy by campaign type
Not all campaigns project equally well. Retargeting campaigns convert quickly — most conversions arrive within days, leaving little to project. Prospecting campaigns convert slowly — the model fills a larger gap, and accuracy depends on how stable the conversion pattern is.
The backtesting makes this transparent. You can see which campaigns benefit most from projection and which already have near-complete data without it.
You do not have to trust the model. You can measure it — and every mature cohort adds to the track record.
06. See It Work
Predictive Attribution runs inside the SegmentStream MCP server. Ask for projected conversions in natural language and the server returns the full picture: observed conversions, projected conversions, and projected efficiency metrics for every channel — including cohorts that are still maturing.
What the report shows
The MCP server adds projection to the standard cross-channel output. For each channel and campaign: observed conversions, projected conversions, and projected CPA and conversion rate. Recent periods show the largest gap between observed and projected — that gap is what Predictive Attribution fills.
When projection matters most
The biggest impact is on recent data for slow-converting campaigns. A prospecting campaign from last week might show 5 observed conversions — but the model projects 12 more will arrive. Without projection, you might cut the campaign. With it, you see the full picture and keep optimizing.
For fast-converting campaigns like retargeting, the projection adds little — most conversions are already in. The system makes this distinction automatically, campaign by campaign.