The Model Results page displays the output of your most recently trained Meridian model. Results are loaded automatically — as soon as the pipeline writes a new artifact to GCS, it will appear the next time the page is opened.

How results are discovered

Interact scans the configured GCS folder on every page load and loads the latest model_*.json file. No manual registration is required. If your pipeline has published multiple runs, a run selector appears in the top-right corner so you can browse historical results. The folder scanned is set in Insights → Settings → MMM under “Results Folder”. The default is {label}/results for multi-market configs, or results for single-market.
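
In pseudocode, the discovery step behaves like the sketch below. This is not Interact's actual implementation; the bucket name, folder, and helper name are placeholders, and it assumes the google-cloud-storage Python client.

```python
# Sketch of the artifact-discovery behaviour described above. Illustrative
# only: bucket/folder names and the helper are placeholders.
import fnmatch

from google.cloud import storage


def latest_model_artifact(bucket_name: str, results_folder: str) -> str | None:
    """Return the GCS path of the most recently updated model_*.json file."""
    client = storage.Client()
    blobs = client.list_blobs(bucket_name, prefix=f"{results_folder}/")
    candidates = [
        b for b in blobs
        if fnmatch.fnmatch(b.name.rsplit("/", 1)[-1], "model_*.json")
    ]
    if not candidates:
        return None
    newest = max(candidates, key=lambda b: b.updated)  # blob's last-update time
    return f"gs://{bucket_name}/{newest.name}"
```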

Page sections

Summary strip

The top of the page shows a quick summary of the run:
| Item | Description |
| --- | --- |
| KPI | The outcome variable the model was trained on (e.g. revenue_eur) |
| Date range | Start and end dates of the training dataset |
| Channels | Number of media channels included in the model |
| Observations | Total number of data points (weeks or days) |
| Model health | Health score (0–100) and pass/fail status from Meridian diagnostic checks |
| Predictive accuracy | Test-set R² at a glance |

Model health

Expand the Model Health section to see a composite score and individual Meridian diagnostic checks. These tell you how reliably the model was estimated.
| Diagnostic | What it measures | Threshold |
| --- | --- | --- |
| Score | Overall quality score aggregating all checks | 0–100 (higher is better) |
| Convergence (R-hat max) | Convergence of MCMC chains; the name of the parameter with the highest R-hat is shown | < 1.05 to pass |
| Negative baseline | Posterior probability that the baseline contribution went below zero | ≈ 0 to pass |
| Bayesian PPP | Bayesian posterior predictive p-value | Not too extreme (e.g. 0.05–0.95) |
| Goodness of fit | R², MAPE, and wMAPE on overall, training, and hold-out data | Within acceptable ranges |
| ROI consistency | Optional check on ROI plausibility across channels | null when not computed |
| Prior/posterior shift | Optional check for unexpected prior influence on posteriors | null when not computed |
A failing health check doesn't necessarily mean the results are wrong, but the results should be treated with caution. Common causes are insufficient warm-up steps, poorly specified priors, or a very short time series. Consult your modeller before acting on results from a failing run.
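
The pass/fail thresholds in the table can be read as simple rules. The sketch below is illustrative only: the argument names are hypothetical rather than the artifact's actual schema, and the 0.01 cutoff stands in for "≈ 0".

```python
# Illustrative pass/fail rules matching the thresholds in the table above.
# Variable names are hypothetical, not the artifact's actual schema.

def health_checks(rhat_max: float,
                  neg_baseline_prob: float,
                  bayesian_ppp: float) -> dict[str, bool]:
    return {
        # MCMC chains are considered converged when the worst R-hat < 1.05
        "convergence": rhat_max < 1.05,
        # Baseline should essentially never go negative (probability ≈ 0;
        # 0.01 is used here as an illustrative cutoff)
        "negative_baseline": neg_baseline_prob < 0.01,
        # Posterior predictive p-value should not be extreme
        "bayesian_ppp": 0.05 <= bayesian_ppp <= 0.95,
    }

checks = health_checks(rhat_max=1.02, neg_baseline_prob=0.0, bayesian_ppp=0.41)
print(checks)  # all True: this run would pass these three checks
```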

Predictive accuracy

This section shows how well the model predicts outcomes on data it did not see during training (the hold-out test set).
| Metric | Description |
| --- | --- |
| R² (test) | Proportion of variance explained on the test set. Values above 0.8 are generally strong. |
| MAPE (test) | Mean Absolute Percentage Error on the test set. Lower is better. |
| wMAPE (test) | Weighted MAPE; less sensitive to periods with very small actual values. |
| R² (train) | Same metrics computed on the training set, shown for comparison. |
A large gap between training and test R² can indicate overfitting. A MAPE above ~25% on the test set suggests the model has limited predictive power on unseen data.
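
The three metrics follow their standard definitions. A minimal NumPy sketch with invented numbers:

```python
# Reference formulas for the accuracy metrics above. The arrays are
# illustrative inputs, not real model output.
import numpy as np

actual = np.array([120.0, 95.0, 140.0, 110.0])
fitted = np.array([112.0, 101.0, 133.0, 118.0])

ss_res = np.sum((actual - fitted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1 - ss_res / ss_tot  # R²: share of variance explained

mape = np.mean(np.abs(actual - fitted) / np.abs(actual))          # mean of per-period % errors
wmape = np.sum(np.abs(actual - fitted)) / np.sum(np.abs(actual))  # weighted by actuals

print(f"R²={r2:.3f}  MAPE={mape:.1%}  wMAPE={wmape:.1%}")
```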

Actual vs. fitted chart

The time-series chart shows three lines across the full modelling period:
  • Actual — the observed KPI values
  • Fitted — the model’s predictions
  • Baseline — the portion of the KPI explained by non-media factors (seasonality, organic, etc.)
Use this chart to visually assess fit quality. Large, systematic gaps between actual and fitted suggest a structural misfit in the model.

Channel contributions table

The table lists every media channel with its aggregate results over the modelling period:
| Column | Description |
| --- | --- |
| Spend | Total spend attributed to this channel |
| Incremental outcome | Total KPI units driven by this channel's spend |
| Contribution % | Share of total KPI attributed to this channel |
| ROI | Return on investment (incremental outcome / spend) |
| mROI | Marginal ROI; the incremental return at the channel's current spend level |
| CPiK | Cost per incremental KPI unit |
| Adstock α | Geometric decay rate (carry-over effect); shown when available |
| Saturation EC50 | Spend level at which 50% of maximum saturation is reached; shown when available |
mROI is the most actionable metric for budget decisions. A channel with mROI > 1.0 can profitably absorb more budget; a channel with mROI < 1.0 is at or past the point where additional spend pays for itself.
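
A small worked example of how these metrics relate (numbers invented for illustration):

```python
# Worked example of the metric relationships in the table above.
# All numbers are invented.
spend = 50_000.0                # total spend on the channel
incremental_outcome = 80_000.0  # KPI units driven by that spend

roi = incremental_outcome / spend   # 1.6: each unit of spend returned 1.6 KPI units
cpik = spend / incremental_outcome  # 0.625: cost per incremental KPI unit

# mROI is not a simple ratio: it is the slope of the channel's response
# curve at the current spend level, i.e. the return on the *next* unit of
# spend. A channel can have ROI > 1 while mROI < 1 once it is saturated.
```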

Response curves

Expand the Response Curves section to see an S-curve (or diminishing-returns curve) for each channel. The curve shows the predicted incremental outcome at different spend levels, holding all other channels constant. How to read the chart:
  • The dot marks the channel’s actual spend level from the modelling period
  • The shaded band (when available) shows the 90% credible interval around the curve
  • A steep slope at the current spend level means high marginal returns; a flat slope means the channel is near saturation
Use response curves to compare channels by marginal efficiency and to identify where additional spend has the highest expected return.
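
The curve shape is typically a Hill-style saturation function. The sketch below illustrates why a steep slope means high marginal returns; the functional form and parameter values are illustrative and not necessarily Meridian's exact parameterisation.

```python
# Illustrative Hill-style response curve. Parameter values are invented;
# this only demonstrates how the marginal slope falls as spend approaches
# saturation.

def hill_response(spend, max_effect=100_000.0, ec50=60_000.0, slope=1.5):
    """Predicted incremental outcome at a given spend level."""
    return max_effect * spend**slope / (spend**slope + ec50**slope)

current_spend = 50_000.0
eps = 1.0  # small step for a numerical derivative
marginal = (hill_response(current_spend + eps)
            - hill_response(current_spend - eps)) / (2 * eps)
print(f"Marginal return at spend {current_spend:,.0f}: {marginal:.3f} KPI units per unit of spend")

# At spend == ec50 the curve reaches 50% of max_effect, matching the
# "Saturation EC50" definition in the contributions table.
```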

Switching between runs

If your pipeline has published multiple runs to the same GCS folder, a run selector dropdown appears in the top-right corner of the results page. Select any previous run to load its artifact. Runs are labelled by their timestamp (the date and time the pipeline exported the file). The most recent run is always loaded by default.

Settings reference

| Setting | Where to configure | What it affects |
| --- | --- | --- |
| Results Folder | Insights → Settings → MMM | Which GCS folder is scanned for artifacts |
| Label | Insights → Settings → MMM | Which data book's results folder to use in multi-market configs |

MMM setup

Configure the GCS folder and permissions for model result artifacts.