The Model Results page displays the output of your most recently trained Meridian model. Results are loaded automatically — as soon as the pipeline writes a new artifact to GCS, it will appear the next time the page is opened.
Interact scans the configured GCS folder on every page load and picks up the latest model_*.json file; no manual registration is required. If your pipeline has published multiple runs, a run selector appears in the top-right corner so you can browse historical results.

The folder scanned is set in Insights → Settings → MMM under “Results Folder”. The default is {label}/results for multi-market configs, or results for single-market configs.
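Conceptually, the scan is equivalent to listing the results folder and picking the most recently written model_*.json. The sketch below shows that selection logic only; the bucket name, prefix, and function name are illustrative, not Interact's internals.

```python
import fnmatch
from google.cloud import storage

def latest_model_artifact(bucket_name: str, prefix: str) -> str | None:
    """Return the gs:// URI of the newest model_*.json under `prefix`."""
    client = storage.Client()
    blobs = client.list_blobs(bucket_name, prefix=prefix)
    # Keep only artifacts whose filename matches the model_*.json pattern.
    candidates = [
        b for b in blobs
        if fnmatch.fnmatch(b.name.rsplit("/", 1)[-1], "model_*.json")
    ]
    if not candidates:
        return None
    # The most recently uploaded artifact wins, mirroring the
    # "latest run loads by default" behaviour described above.
    newest = max(candidates, key=lambda b: b.updated)
    return f"gs://{bucket_name}/{newest.name}"
```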
Expand the Model Health section to see a composite score and individual Meridian diagnostic checks. These tell you how reliably the model was estimated.
| Diagnostic | What it measures | Threshold |
| --- | --- | --- |
| Score | Overall quality score aggregating all checks | 0–100 (higher is better) |
| Convergence (R-hat max) | Convergence of the MCMC chains; the parameter with the highest R-hat is shown | < 1.05 to pass |
| Negative baseline | Posterior probability that the baseline contribution went below zero | ≈ 0 to pass |
| Bayesian PPP | Bayesian posterior predictive p-value | Not too extreme (e.g. 0.05–0.95) |
| Goodness of fit | R², MAPE, and wMAPE on overall, training, and hold-out data | Within acceptable ranges |
| ROI consistency | Optional check on ROI plausibility across channels | `null` when not computed |
| Prior/posterior shift | Optional check for unexpected prior influence on posteriors | `null` when not computed |
A failing health check doesn’t necessarily mean the results are wrong, but the results should be treated with caution. Common causes are insufficient warm-up steps, poorly specified priors, or a very short time series. Consult your modeller before acting on results from a failing run.
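If you want to reproduce the convergence number outside Interact, the check amounts to taking the maximum R-hat across all parameters and comparing it to 1.05. Below is a minimal sketch using ArviZ, assuming you have the posterior trace as an `InferenceData` object; the function name is illustrative, and the pipeline's own implementation may differ.

```python
import arviz as az

def rhat_check(idata: az.InferenceData, threshold: float = 1.05):
    """Return (worst parameter name, max R-hat, pass/fail)."""
    rhats = az.rhat(idata)  # one R-hat per parameter (and coordinate)
    worst_name, worst_value = None, 0.0
    for name, da in rhats.data_vars.items():
        value = float(da.max())
        if value > worst_value:
            worst_name, worst_value = name, value
    # Matches the table above: the parameter with the highest R-hat is
    # reported, and the run passes only if that value is below the threshold.
    return worst_name, worst_value, worst_value < threshold
```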
This section shows how well the model predicts outcomes on data it did not see during training (the hold-out test set).
| Metric | Description |
| --- | --- |
| R² (test) | Proportion of variance explained on the hold-out test set. Values above 0.8 are generally strong. |
| MAPE (test) | Mean Absolute Percentage Error on the test set. Lower is better. |
| wMAPE (test) | Weighted MAPE, which is less sensitive to periods with very small actual values. |
| R², MAPE, wMAPE (train) | The same metrics computed on the training set, shown for comparison. |
A large gap between training and test R² can indicate overfitting. A MAPE above ~25% on the test set suggests the model has limited predictive power on unseen data.
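The definitions behind these metrics are standard; the sketch below shows how they are conventionally computed (your pipeline's exact implementation may differ in edge-case handling):

```python
import numpy as np

def fit_metrics(actual: np.ndarray, predicted: np.ndarray) -> dict:
    """R², MAPE, and wMAPE as conventionally defined."""
    resid = actual - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return {
        "r2": 1.0 - ss_res / ss_tot,
        # MAPE weights every period equally, so near-zero actuals inflate it.
        "mape": float(np.mean(np.abs(resid / actual))),
        # wMAPE weights errors by actual volume, damping those spikes;
        # this is why it is less sensitive to very small actual values.
        "wmape": float(np.sum(np.abs(resid)) / np.sum(np.abs(actual))),
    }
```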
The table lists every media channel with its aggregate results over the modelling period:
| Column | Description |
| --- | --- |
| Spend | Total spend attributed to this channel |
| Incremental outcome | Total KPI units driven by this channel's spend |
| Contribution % | Share of the total KPI attributed to this channel |
| ROI | Return on investment (incremental outcome / spend) |
| mROI | Marginal ROI: the incremental return of the next unit of spend at the channel's current spend level |
| CPiK | Cost per incremental KPI unit (spend / incremental outcome) |
| Adstock α | Geometric decay rate of the carry-over effect; shown when available |
| Saturation EC50 | Spend level at which half of the maximum effect is reached; shown when available |
mROI is the most actionable metric for budget decisions. A channel with mROI > 1.0 can absorb more budget profitably; a channel with mROI < 1.0 returns less than it costs at the margin, typically because it is at or past saturation.
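To make the transform parameters concrete, the sketch below shows the standard geometric adstock and Hill saturation forms that α and EC50 parameterise. Treat it as a generic illustration of those functional forms, not as Meridian's exact parameterisation.

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, alpha: float) -> np.ndarray:
    """Carry-over: each period retains `alpha` of the previous period's effect."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + alpha * carry
        out[t] = carry
    return out

def hill_saturation(x: np.ndarray, ec50: float, slope: float = 1.0) -> np.ndarray:
    """Hill curve: the response reaches exactly half its maximum when x == ec50."""
    return x ** slope / (x ** slope + ec50 ** slope)
```

Note that when the incremental outcome is measured in KPI units, CPiK is simply the reciprocal of ROI: spend divided by incremental outcome.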
Expand the Response Curves section to see an S-curve (or diminishing-returns curve) for each channel. The curve shows the predicted incremental outcome at different spend levels, holding all other channels constant.

How to read the chart:

- The dot marks the channel's actual spend level from the modelling period.
- The shaded band (when available) shows the 90% credible interval around the curve.
- A steep slope at the current spend level means high marginal returns; a flat slope means the channel is near saturation.
Use response curves to compare channels by marginal efficiency and to identify where additional spend has the highest expected return.
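If you export a curve's points, the steep-versus-flat reading can be made quantitative with a finite-difference slope at the current spend level. The sketch below assumes the curve is available as two arrays; the names are illustrative.

```python
import numpy as np

def marginal_return(spend_grid: np.ndarray,
                    outcome_curve: np.ndarray,
                    current_spend: float) -> float:
    """Numerical slope of the response curve at the current spend level."""
    slopes = np.gradient(outcome_curve, spend_grid)
    idx = int(np.argmin(np.abs(spend_grid - current_spend)))
    # A slope above break-even mirrors mROI > 1.0: the next unit of
    # spend is expected to return more than it costs.
    return float(slopes[idx])
```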
If your pipeline has published multiple runs to the same GCS folder, a run selector dropdown appears in the top-right corner of the results page. Select any previous run to load its artifact. Runs are labelled by their timestamp (the date and time the pipeline exported the file); the most recent run is always loaded by default.