Prerequisites
- A Google BigQuery connector already set up in Settings → Connectors
- Access to the client’s Google Cloud project (to create a bucket and assign IAM roles)
- A trained Meridian model ready to export results
Step 1 — Create the GCS bucket
Create one Cloud Storage bucket in the client’s GCP project. This bucket is where the pipeline writes model result artifacts.

| Setting | Value |
|---|---|
| Bucket name | turntwo-mmm |
| Location | Same region as BigQuery (e.g. europe-west4) |
| Storage class | Standard |
| Public access | Uniform — not public |
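The settings above can also be scripted. A minimal sketch, assuming the google-cloud-storage client library and Application Default Credentials; the project ID is a placeholder:

```python
def bucket_settings():
    # Settings from the table above.
    return {
        "name": "turntwo-mmm",
        "location": "europe-west4",  # same region as BigQuery
        "storage_class": "STANDARD",
        "uniform_access": True,      # uniform bucket-level access, not public
    }

def create_results_bucket(project_id):
    # Deferred import: the google-cloud-storage package is only needed at call time.
    from google.cloud import storage

    cfg = bucket_settings()
    client = storage.Client(project=project_id)
    bucket = client.bucket(cfg["name"])
    bucket.storage_class = cfg["storage_class"]
    bucket.iam_configuration.uniform_bucket_level_access_enabled = cfg["uniform_access"]
    return client.create_bucket(bucket, location=cfg["location"])
```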
Step 2 — Grant the service account access
The BigQuery service account stored in Interact is also used to access GCS — no new credentials needed. Grant the service account roles/storage.objectAdmin on the turntwo-mmm bucket:
Find the service account email
In Interact, go to Settings → Connectors, open the BigQuery connector, and copy the service account email shown in the credentials. Alternatively, open the JSON credentials file — the client_email field contains the address.
Open bucket permissions in Google Cloud
Go to console.cloud.google.com, navigate to Cloud Storage → Buckets, click on turntwo-mmm, and open the Permissions tab.
Step 3 — Configure MMM in Interact
Open Insights → Settings → MMM and configure one Data Book per market or model.
Data Book fields
| Field | Required | Description |
|---|---|---|
| Label / Country | No | Identifier shown in the UI dropdown (e.g. NL, Germany). Leave blank for single-market configs. |
| Project | Yes | Google Cloud project ID that contains the BigQuery table |
| Dataset | Yes | BigQuery dataset name |
| Table | Yes | Main MMM input table (one row per period, channels as columns) |
| Final Table | No | Optional adjusted or post-processed version of the input table |
| Date Column | Yes | Column name containing the date per row (e.g. date) |
| Excluded Columns | No | Comma-separated columns to hide from the Data Book view (e.g. country codes) |
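For concreteness, here is a hypothetical Data Book configuration matching the fields above. Every value (project, dataset, table names) is a placeholder, not a real resource:

```python
# Hypothetical Data Book entry for a Dutch market model; all values
# are placeholders for illustration only.
nl_data_book = {
    "label": "NL",                       # shown in the UI dropdown
    "project": "acme-mmm-prod",          # Google Cloud project ID
    "dataset": "mmm",                    # BigQuery dataset name
    "table": "mmm_input_nl",             # one row per period, channels as columns
    "final_table": None,                 # optional post-processed table
    "date_column": "date",               # column holding the date per row
    "excluded_columns": "country_code",  # comma-separated columns to hide
}
```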
Model Results fields
| Field | Required | Description |
|---|---|---|
| Results Folder | No | GCS folder path within turntwo-mmm where model_*.json artifacts are stored. Defaults to {label}/results when a label is set, or results for single-book configs. |
The service account must have roles/storage.objectAdmin on the turntwo-mmm bucket in the GCP project used by this connector — this note is also shown in the Model Results section of the settings form.
Step 4 — Run the pipeline and publish results
Run your Meridian Python pipeline. After training completes, the pipeline should export the results JSON to gs://turntwo-mmm/{results_folder}/model_{timestamp}.json, where:

- {results_folder} matches what you configured in Step 3 (or the default {label}/results)
- {timestamp} is a string in YYYYMMDD_HHMMSS format, e.g. 20260511_112201
Folder structure examples
| Config | Default GCS path |
|---|---|
| Label = NL | gs://turntwo-mmm/NL/results/model_20260511_112201.json |
| Label = DE | gs://turntwo-mmm/DE/results/model_20260511_112201.json |
| No label (single market) | gs://turntwo-mmm/results/model_20260511_112201.json |
| Custom folder override | gs://turntwo-mmm/clients/acme/nl/model_20260511_112201.json |
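The defaulting rules in the table above can be sketched as a small helper (the function name is hypothetical, not part of the product):

```python
def results_object_path(timestamp, label="", results_folder=None):
    # Default folder: "{label}/results" when a label is set, else "results".
    if results_folder is None:
        results_folder = f"{label}/results" if label else "results"
    # timestamp must be in YYYYMMDD_HHMMSS format, e.g. "20260511_112201".
    return f"gs://turntwo-mmm/{results_folder}/model_{timestamp}.json"
```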
Results artifact format
The pipeline must produce a valid JSON file. See the sections below for the full schema your pipeline should follow.
meta — run metadata
| Field | Type | Description |
|---|---|---|
| model_timestamp | string | YYYYMMDD_HHMMSS — must match the filename |
| data_book_label | string | Market label; empty string "" for single-book configs |
| start_date | string | ISO date of the first observation |
| end_date | string | ISO date of the last observation |
| kpi_type | "revenue" \| "non_revenue" | Type of KPI being modelled |
| kpi_label | string | Human-readable KPI name shown in the UI |
| n_observations | integer | Total number of rows in the training dataset |
| pipeline_version | string | Semver version of the pipeline (e.g. "1.2.0") |
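A sketch of how a pipeline might assemble the meta block; the KPI values and version string are placeholders:

```python
from datetime import datetime, timezone

def build_meta(start_date, end_date, n_observations, label=""):
    # The timestamp must match the one used in the model_*.json filename.
    ts = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    return {
        "model_timestamp": ts,
        "data_book_label": label,         # "" for single-book configs
        "start_date": start_date,         # ISO date of the first observation
        "end_date": end_date,             # ISO date of the last observation
        "kpi_type": "revenue",            # or "non_revenue"
        "kpi_label": "Revenue",           # placeholder KPI name
        "n_observations": n_observations,
        "pipeline_version": "1.2.0",      # placeholder semver
    }
```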
health — Meridian diagnostic checks
| Field | Type | Description |
|---|---|---|
| score | float | Composite health score (0–100) |
| overall_status | string | "PASS" or "FAIL" |
| convergence.max_rhat | float | Highest R-hat value across all parameters |
| convergence.max_parameter | string | Name of the parameter with the highest R-hat |
| convergence.status | string | "PASS" when max_rhat < 1.05 |
| negative_baseline.probability | float | Posterior probability of the baseline going negative |
| negative_baseline.status | string | "PASS" when probability ≈ 0 |
| bayesian_ppp.value | float | Bayesian posterior predictive p-value (0–1) |
| bayesian_ppp.status | string | "PASS" when value is not extreme |
| goodness_of_fit.r2 | float | Overall R² |
| goodness_of_fit.mape | float | Overall MAPE (%) |
| goodness_of_fit.wmape | float | Overall wMAPE (%) |
| goodness_of_fit.r2_train | float | Training R² |
| goodness_of_fit.mape_train | float | Training MAPE (%) |
| goodness_of_fit.wmape_train | float | Training wMAPE (%) |
| goodness_of_fit.r2_test | float | Hold-out R² |
| goodness_of_fit.mape_test | float | Hold-out MAPE (%) |
| goodness_of_fit.wmape_test | float | Hold-out wMAPE (%) |
| goodness_of_fit.status | string | "PASS" when fit metrics are acceptable |
| roi_consistency.status | string \| null | Optional check — null when not computed |
| roi_consistency.channels | object | Per-channel detail if the check was run |
| prior_posterior_shift.status | string \| null | Optional check — null when not computed |
| prior_posterior_shift.no_shift_channels | string[] | Channels with no detected prior/posterior shift |
Optional checks (roi_consistency, prior_posterior_shift) must still be present with status: null and empty collections when not computed.
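The convergence rule above (PASS when the highest R-hat is below 1.05) can be sketched as a small check; the function name is hypothetical:

```python
def convergence_check(rhat_by_param):
    # rhat_by_param maps parameter names to their R-hat values.
    name, value = max(rhat_by_param.items(), key=lambda kv: kv[1])
    return {
        "max_rhat": value,
        "max_parameter": name,
        "status": "PASS" if value < 1.05 else "FAIL",
    }
```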
predictive_accuracy — hold-out metrics
| Field | Type | Description |
|---|---|---|
r2_train | float | R² on training set |
r2_test | float | R² on hold-out test set |
mape_train | float | Mean Absolute Percentage Error on training set (%) |
mape_test | float | MAPE on test set (%) |
wmape_train | float | Weighted MAPE on training set (%) |
wmape_test | float | Weighted MAPE on test set (%) |
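Under the usual definitions, which this schema appears to follow (treat that as an assumption), MAPE and wMAPE in percent are:

```python
def mape(actual, predicted):
    # Mean Absolute Percentage Error, in percent; assumes no zero actuals.
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def wmape(actual, predicted):
    # Weighted MAPE: total absolute error normalised by total actuals.
    return 100.0 * sum(abs(a - p) for a, p in zip(actual, predicted)) / sum(abs(a) for a in actual)
```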
fit_series — time-series fit data
All arrays must be the same length (n_observations).

| Field | Type | Description |
|---|---|---|
dates | string[] | ISO date strings, one per observation |
actual | float[] | Observed KPI values |
expected | float[] | Model-fitted KPI values |
baseline | float[] | Baseline (non-media) KPI contribution per period |
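A minimal consistency check for this block (hypothetical helper, field names as in the schema above):

```python
def validate_fit_series(fit_series, n_observations):
    # Returns a list of problems; an empty list means the block is consistent.
    problems = []
    for key in ("dates", "actual", "expected", "baseline"):
        values = fit_series.get(key)
        if values is None:
            problems.append(f"missing field: {key}")
        elif len(values) != n_observations:
            problems.append(f"{key} has {len(values)} entries, expected {n_observations}")
    return problems
```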
channels — per-channel results
The channels object contains one entry per channel (keys matching channels_order).
Required fields:

| Field | Type | Description |
|---|---|---|
| spend | float \| null | Total spend over the modelling period; null when spend data is not available in the artifact |
| incremental_outcome | float | Total incremental KPI attributed to this channel |
| pct_contribution | float | Share of total KPI (0–1) |
| roi | float | Return on investment (outcome / spend) |
| mroi | float | Marginal ROI at the observed spend level |
| cpik | float | Cost per incremental KPI unit |
| response_curve.spend_grid | float[] | Strictly increasing spend values |
| response_curve.outcome_grid | float[] | Outcome at each spend level (same length) |

Optional fields:

| Field | Type | Description |
|---|---|---|
| adstock_alpha | float \| null | Geometric decay rate (0–1) |
| adstock_max_lag | integer \| null | Memory window in periods |
| saturation_ec50 | float \| null | Hill saturation EC50 parameter |
| saturation_slope | float \| null | Hill slope parameter |
| credible_interval.lower | float[] | 5th-percentile response curve |
| credible_interval.upper | float[] | 95th-percentile response curve |

Validation rules:
- spend_grid must be strictly increasing
- spend_grid.length === outcome_grid.length
- If credible_interval is present, lower and upper must have the same length as spend_grid
- Recommend 10–20 grid points for a smooth curve in the UI
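The validation rules above can be checked before upload; a sketch with a hypothetical helper name, using field names from the schema:

```python
def validate_response_curve(channel):
    # Returns a list of rule violations; an empty list means the curve is valid.
    problems = []
    grid = channel["response_curve"]["spend_grid"]
    outcome = channel["response_curve"]["outcome_grid"]
    if any(b <= a for a, b in zip(grid, grid[1:])):
        problems.append("spend_grid must be strictly increasing")
    if len(grid) != len(outcome):
        problems.append("spend_grid and outcome_grid must have the same length")
    ci = channel.get("credible_interval")
    if ci is not None:
        for bound in ("lower", "upper"):
            if len(ci.get(bound, [])) != len(grid):
                problems.append(f"credible_interval.{bound} must match spend_grid length")
    return problems
```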
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| Model Results page shows “No model results available” | No model_*.json files found in the configured folder | Check the pipeline ran and the GCS path matches the configured Results Folder |
| Page shows a storage permission error | SA missing roles/storage.objectAdmin | Add the role on the turntwo-mmm bucket in the client’s GCP project |
| “Failed to decrypt service account credentials” | KMS issue | Check GOOGLE_PROJECT_ID and GOOGLE_CREDENTIALS environment variables on the server |
| Validation error: “spend_grid must be strictly increasing” | Pipeline export bug | Fix the response curve export in the Python pipeline |
| “No BigQuery connector configured” | Connector not set up or deleted | Re-add a BigQuery connector in Settings → Connectors |
| Data Book shows no columns | Wrong dataset/table reference | Use Validate Table in the MMM settings to confirm the table is accessible |