
Prerequisites

  • A Google BigQuery connector already set up in Settings → Connectors
  • Access to the client’s Google Cloud project (to create a bucket and assign IAM roles)
  • A trained Meridian model ready to export results

Step 1 — Create the GCS bucket

Create one Cloud Storage bucket in the client’s GCP project. This bucket is where the pipeline writes model result artifacts.
| Setting | Value |
| --- | --- |
| Bucket name | turntwo-mmm |
| Location | Same region as BigQuery (e.g. europe-west4) |
| Storage class | Standard |
| Public access | Uniform — not public |
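
If you prefer to script the bucket creation rather than click through the console, a minimal sketch using the google-cloud-storage Python client is shown below. The project ID your-client-project is a placeholder; the bucket name, location, storage class, and uniform access setting mirror the table above.

```python
# Sketch only: create the results bucket with google-cloud-storage.
# "your-client-project" is a placeholder for the client's GCP project ID.
from google.cloud import storage

client = storage.Client(project="your-client-project")

bucket = client.bucket("turntwo-mmm")
bucket.storage_class = "STANDARD"
# Uniform bucket-level access keeps the bucket IAM-only (not public).
bucket.iam_configuration.uniform_bucket_level_access_enabled = True

# Use the same region as your BigQuery dataset, e.g. europe-west4.
client.create_bucket(bucket, location="europe-west4")
```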

Step 2 — Grant the service account access

The BigQuery service account stored in Interact is also used to access GCS — no new credentials needed. Grant the service account the following role on the turntwo-mmm bucket:
roles/storage.objectAdmin
1. Find the service account email

In Interact, go to Settings → Connectors, open the BigQuery connector, and copy the service account email shown in the credentials. Alternatively, open the JSON credentials file — the client_email field contains the address.
2. Open bucket permissions in Google Cloud

Go to console.cloud.google.com, navigate to Cloud Storage → Buckets, click on turntwo-mmm, and open the Permissions tab.
3. Add the role

Click Grant access, paste the service account email, select the role Storage Object Admin (roles/storage.objectAdmin), and save.
If this role is missing, the Model Results page will fail to list or load any artifacts: the page will read “No runs shown”, or a storage permission error will appear in the browser console.
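
The grant in step 3 can also be applied programmatically. Below is a sketch assuming the google-cloud-storage client; SERVICE_ACCOUNT_EMAIL stands for the client_email copied in step 1, and the value shown is a placeholder.

```python
# Sketch only: grant roles/storage.objectAdmin on the bucket to the
# connector's service account. The email below is a placeholder for the
# client_email copied from the BigQuery connector credentials.
from google.cloud import storage

SERVICE_ACCOUNT_EMAIL = "interact-bq@your-client-project.iam.gserviceaccount.com"

client = storage.Client(project="your-client-project")
bucket = client.bucket("turntwo-mmm")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectAdmin",
        "members": {f"serviceAccount:{SERVICE_ACCOUNT_EMAIL}"},
    }
)
bucket.set_iam_policy(policy)
```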

Step 3 — Configure MMM in Interact

Open Insights → Settings → MMM and configure one Data Book per market or model.

Data Book fields

| Field | Required | Description |
| --- | --- | --- |
| Label / Country | No | Identifier shown in the UI dropdown (e.g. NL, Germany). Leave blank for single-market configs. |
| Project | Yes | Google Cloud project ID that contains the BigQuery table |
| Dataset | Yes | BigQuery dataset name |
| Table | Yes | Main MMM input table (one row per period, channels as columns) |
| Final Table | No | Optional adjusted or post-processed version of the input table |
| Date Column | Yes | Column name containing the date per row (e.g. date) |
| Excluded Columns | No | Comma-separated columns to hide from the Data Book view (e.g. country codes) |

Model Results fields

| Field | Required | Description |
| --- | --- | --- |
| Results Folder | No | GCS folder path within turntwo-mmm where model_*.json artifacts are stored. Defaults to {label}/results when a label is set, or results for single-book configs. |
Use Validate Table after entering your BigQuery details to confirm Interact can connect and read the table before saving.
The service account must have roles/storage.objectAdmin on the turntwo-mmm bucket in the GCP project used by this connector — this note is also shown in the Model Results section of the settings form.

Step 4 — Run the pipeline and publish results

Run your Meridian Python pipeline. After training completes, the pipeline should export the results JSON to:
gs://turntwo-mmm/{results_folder}/model_{timestamp}.json
Where:
  • {results_folder} matches what you configured in Step 3 (or the default {label}/results)
  • {timestamp} is a string in YYYYMMDD_HHMMSS format, e.g. 20260511_112201
That’s all. The next time anyone opens the Model Results page, Interact will automatically discover the file and display the results. No registration step is needed.
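
For reference, the upload at the end of the pipeline can look like the sketch below. It assumes build_results() is a placeholder for whatever builds your artifact dict (schema in the next section), and that results_folder matches the Results Folder configured in Step 3; "NL/results" and the project ID are example values.

```python
# Sketch only: write the results JSON to the expected GCS path.
import json
from datetime import datetime, timezone

from google.cloud import storage

results_folder = "NL/results"  # must match the Results Folder from Step 3
timestamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")

results = build_results()               # placeholder: your pipeline's artifact dict
results["model_timestamp"] = timestamp  # must match the filename

client = storage.Client(project="your-client-project")
blob = client.bucket("turntwo-mmm").blob(f"{results_folder}/model_{timestamp}.json")
blob.upload_from_string(json.dumps(results), content_type="application/json")
print(f"Uploaded gs://turntwo-mmm/{results_folder}/model_{timestamp}.json")
```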

Folder structure examples

| Config | Default GCS path |
| --- | --- |
| Label = NL | gs://turntwo-mmm/NL/results/model_20260511_112201.json |
| Label = DE | gs://turntwo-mmm/DE/results/model_20260511_112201.json |
| No label (single market) | gs://turntwo-mmm/results/model_20260511_112201.json |
| Custom folder override | gs://turntwo-mmm/clients/acme/nl/model_20260511_112201.json |

Results artifact format

The pipeline must produce a valid JSON file. The tables below describe the full schema your pipeline should follow.

Model metadata fields:

| Field | Type | Description |
| --- | --- | --- |
| model_timestamp | string | YYYYMMDD_HHMMSS — must match the filename |
| data_book_label | string | Market label; empty string "" for single-book configs |
| start_date | string | ISO date of the first observation |
| end_date | string | ISO date of the last observation |
| kpi_type | "revenue" \| "non_revenue" | Type of KPI being modelled |
| kpi_label | string | Human-readable KPI name shown in the UI |
| n_observations | integer | Total number of rows in the training dataset |
| pipeline_version | string | Semver version of the pipeline (e.g. "1.2.0") |

All fields are required.
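
For illustration only, these fields might be populated as follows (all values are made up; the dict is serialised to JSON by the pipeline):

```python
# Illustrative values only, not real pipeline output.
metadata_example = {
    "model_timestamp": "20260511_112201",  # matches model_20260511_112201.json
    "data_book_label": "NL",               # "" for single-book configs
    "start_date": "2023-01-02",
    "end_date": "2026-04-27",
    "kpi_type": "revenue",
    "kpi_label": "Net revenue",
    "n_observations": 174,
    "pipeline_version": "1.2.0",
}
```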
Model health check fields:

| Field | Type | Description |
| --- | --- | --- |
| score | float | Composite health score (0–100) |
| overall_status | string | "PASS" or "FAIL" |
| convergence.max_rhat | float | Highest R-hat value across all parameters |
| convergence.max_parameter | string | Name of the parameter with the highest R-hat |
| convergence.status | string | "PASS" when max_rhat < 1.05 |
| negative_baseline.probability | float | Posterior probability of the baseline going negative |
| negative_baseline.status | string | "PASS" when probability ≈ 0 |
| bayesian_ppp.value | float | Bayesian posterior predictive p-value (0–1) |
| bayesian_ppp.status | string | "PASS" when the value is not extreme |
| goodness_of_fit.r2 | float | Overall R² |
| goodness_of_fit.mape | float | Overall MAPE (%) |
| goodness_of_fit.wmape | float | Overall wMAPE (%) |
| goodness_of_fit.r2_train | float | Training R² |
| goodness_of_fit.mape_train | float | Training MAPE (%) |
| goodness_of_fit.wmape_train | float | Training wMAPE (%) |
| goodness_of_fit.r2_test | float | Hold-out R² |
| goodness_of_fit.mape_test | float | Hold-out MAPE (%) |
| goodness_of_fit.wmape_test | float | Hold-out wMAPE (%) |
| goodness_of_fit.status | string | "PASS" when fit metrics are acceptable |
| roi_consistency.status | string \| null | Optional check — null when not computed |
| roi_consistency.channels | object | Per-channel detail if the check was run |
| prior_posterior_shift.status | string \| null | Optional check — null when not computed |
| prior_posterior_shift.no_shift_channels | string[] | Channels with no detected prior/posterior shift |
All fields are required. Optional checks (roi_consistency, prior_posterior_shift) must still be present with status: null and empty collections when not computed.
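
The dotted field names above imply the nested shape sketched below. All values are illustrative (max_parameter shows a made-up parameter name), and the placeholders for uncomputed optional checks are included; None serialises to null in the JSON artifact.

```python
# Sketch of the health-check object implied by the dotted field names above.
# Values are illustrative only.
health_checks_example = {
    "score": 87.5,
    "overall_status": "PASS",
    "convergence": {"max_rhat": 1.01, "max_parameter": "beta_tv", "status": "PASS"},
    "negative_baseline": {"probability": 0.0, "status": "PASS"},
    "bayesian_ppp": {"value": 0.42, "status": "PASS"},
    "goodness_of_fit": {
        "r2": 0.91, "mape": 7.8, "wmape": 6.9,
        "r2_train": 0.93, "mape_train": 7.1, "wmape_train": 6.2,
        "r2_test": 0.88, "mape_test": 9.4, "wmape_test": 8.5,
        "status": "PASS",
    },
    # Optional checks must still be present when not computed:
    "roi_consistency": {"status": None, "channels": {}},
    "prior_posterior_shift": {"status": None, "no_shift_channels": []},
}
```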
Train/test fit metric fields:

| Field | Type | Description |
| --- | --- | --- |
| r2_train | float | R² on training set |
| r2_test | float | R² on hold-out test set |
| mape_train | float | Mean Absolute Percentage Error on training set (%) |
| mape_test | float | MAPE on test set (%) |
| wmape_train | float | Weighted MAPE on training set (%) |
| wmape_test | float | Weighted MAPE on test set (%) |
All fields are required.
Time series fields (all arrays must be the same length, n_observations):

| Field | Type | Description |
| --- | --- | --- |
| dates | string[] | ISO date strings, one per observation |
| actual | float[] | Observed KPI values |
| expected | float[] | Model-fitted KPI values |
| baseline | float[] | Baseline (non-media) KPI contribution per period |
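
For example (arrays truncated to three observations; a real artifact has n_observations entries in each, and all values here are made up):

```python
# Illustrative time series block; every array has the same length.
time_series_example = {
    "dates":    ["2023-01-02", "2023-01-09", "2023-01-16"],
    "actual":   [105_000.0, 98_400.0, 112_300.0],
    "expected": [103_200.0, 99_100.0, 110_800.0],
    "baseline": [61_000.0, 60_500.0, 62_300.0],
}
```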
The channels object contains one entry per channel (keys matching channels_order). Each entry has the following required fields:
| Field | Type | Description |
| --- | --- | --- |
| spend | float \| null | Total spend over the modelling period; null when spend data is not available in the artifact |
| incremental_outcome | float | Total incremental KPI attributed to this channel |
| pct_contribution | float | Share of total KPI (0–1) |
| roi | float | Return on investment (outcome / spend) |
| mroi | float | Marginal ROI at the observed spend level |
| cpik | float | Cost per incremental KPI unit |
| response_curve.spend_grid | float[] | Strictly increasing spend values |
| response_curve.outcome_grid | float[] | Outcome at each spend level (same length) |
Optional fields:
| Field | Type | Description |
| --- | --- | --- |
| adstock_alpha | float \| null | Geometric decay rate (0–1) |
| adstock_max_lag | integer \| null | Memory window in periods |
| saturation_ec50 | float \| null | Hill saturation EC50 parameter |
| saturation_slope | float \| null | Hill slope parameter |
| credible_interval.lower | float[] | 5th-percentile response curve |
| credible_interval.upper | float[] | 95th-percentile response curve |
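
Putting the required and optional fields together, a single entry in channels might look like the sketch below. The channel key "tv", all values, and the short five-point grids are illustrative only.

```python
# Illustrative entry for one channel inside the `channels` object.
channels_example = {
    "tv": {
        "spend": 1_250_000.0,             # or None when spend data is unavailable
        "incremental_outcome": 3_400_000.0,
        "pct_contribution": 0.12,
        "roi": 2.72,                      # incremental_outcome / spend
        "mroi": 1.4,
        "cpik": 0.37,
        "response_curve": {
            "spend_grid":   [0.0, 250_000.0, 500_000.0, 750_000.0, 1_000_000.0],
            "outcome_grid": [0.0, 900_000.0, 1_700_000.0, 2_300_000.0, 2_800_000.0],
        },
        # Optional fields:
        "adstock_alpha": 0.6,
        "adstock_max_lag": 8,
        "saturation_ec50": 420_000.0,
        "saturation_slope": 1.8,
        "credible_interval": {
            "lower": [0.0, 700_000.0, 1_400_000.0, 1_900_000.0, 2_300_000.0],
            "upper": [0.0, 1_100_000.0, 2_000_000.0, 2_700_000.0, 3_300_000.0],
        },
    }
}
```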
Validation rules:
  • spend_grid must be strictly increasing
  • spend_grid.length === outcome_grid.length
  • If credible_interval is present, lower and upper must have the same length as spend_grid
  • 10–20 grid points are recommended for a smooth curve in the UI
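
A small pre-upload check in the pipeline can catch violations of these rules before the artifact reaches GCS. The function below is only a sketch; its name and where you call it are up to you. Run it over every entry in the channels object before uploading.

```python
# Sketch only: validate one channel entry against the rules above.
def validate_response_curve(channel: dict) -> None:
    curve = channel["response_curve"]
    spend, outcome = curve["spend_grid"], curve["outcome_grid"]

    if any(b <= a for a, b in zip(spend, spend[1:])):
        raise ValueError("spend_grid must be strictly increasing")
    if len(spend) != len(outcome):
        raise ValueError("spend_grid and outcome_grid must have the same length")

    interval = channel.get("credible_interval")
    if interval is not None:
        if len(interval["lower"]) != len(spend) or len(interval["upper"]) != len(spend):
            raise ValueError("credible_interval arrays must match spend_grid length")

    if not 10 <= len(spend) <= 20:
        print("note: 10-20 grid points are recommended for a smooth curve in the UI")
```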

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Model Results page shows “No model results available” | No model_*.json files found in the configured folder | Check that the pipeline ran and that the GCS path matches the configured Results Folder |
| Page shows a storage permission error | Service account missing roles/storage.objectAdmin | Add the role on the turntwo-mmm bucket in the client’s GCP project |
| “Failed to decrypt service account credentials” | KMS issue | Check the GOOGLE_PROJECT_ID and GOOGLE_CREDENTIALS environment variables on the server |
| Validation error: “spend_grid must be strictly increasing” | Pipeline export bug | Fix the response curve export in the Python pipeline |
| “No BigQuery connector configured” | Connector not set up or deleted | Re-add a BigQuery connector in Settings → Connectors |
| Data Book shows no columns | Wrong dataset/table reference | Use Validate Table in the MMM settings to confirm the table is accessible |