Why prompt management matters

An agent’s quality depends heavily on how well its prompt is written. As you learn more about your data and how the agent behaves, you’ll want to iterate on the prompt — adding context, correcting misunderstandings, or refining instructions. Interact tracks every prompt change as a versioned history, so you can safely iterate without losing previous working configurations.

Prompt versioning

Every time you save an agent, Interact creates a new prompt version. Versions are numbered sequentially and timestamped.
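Conceptually, the history behaves like an append-only list: each save adds a new entry and never mutates earlier ones. A minimal sketch of that idea (the class and field names below are illustrative, not Interact's actual data model):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PromptVersion:
    number: int        # sequential, starting at 1
    content: str
    saved_at: datetime

class PromptHistory:
    """Append-only history: each save creates a new, numbered version."""

    def __init__(self) -> None:
        self.versions: list[PromptVersion] = []

    def save(self, content: str) -> PromptVersion:
        version = PromptVersion(
            number=len(self.versions) + 1,
            content=content,
            saved_at=datetime.now(timezone.utc),
        )
        self.versions.append(version)
        return version
```

Because versions are frozen and only ever appended, every earlier configuration stays recoverable.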

Viewing version history

  1. Open an agent and click Edit agent
  2. Go to the Prompt history tab
  3. You’ll see a list of all saved versions with the date and a preview of what changed

Restoring a previous version

If an update makes the agent perform worse, you can roll back:
  1. In Prompt history, find the version you want to restore
  2. Click Restore — this loads that version’s prompt into the editor
  3. Click Save — a new version is created with the restored content
Restoring creates a new version; it doesn’t overwrite history. You can always see the full history of changes.
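The restore flow above can be sketched as a pure function over the version list (a simplified model, not Interact's implementation):

```python
def restore(history: list[str], version_number: int) -> list[str]:
    """Re-saving an old version appends it as a new one; nothing is overwritten."""
    restored_content = history[version_number - 1]  # versions are 1-indexed
    return history + [restored_content]
```

Restoring version 1 of a three-version history yields four versions, with the original three untouched.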

Learnings

Learnings are a way to teach your agent things without editing the prompt directly. When the agent makes a mistake or you spot something important — a column description, a business rule, a naming convention — you can save it as a learning. Learnings are injected into the agent’s context automatically at the start of each conversation, gradually improving accuracy over time without requiring manual prompt rewrites.
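One way to picture that injection (a hypothetical sketch — the actual prompt format Interact uses may differ):

```python
def build_context(prompt: str, learnings: list[str]) -> str:
    """Prepend saved learnings to the agent's prompt at conversation start."""
    if not learnings:
        return prompt
    bullets = "\n".join(f"- {item}" for item in learnings)
    return f"{prompt}\n\nThings the agent has learned:\n{bullets}"
```

Each new learning simply extends the list, so accuracy improves without the base prompt ever being edited.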

Saving a learning from chat

When the agent returns a response, hover over it to reveal the Save as learning button. Click it to open the learning editor. Write the learning in plain language, describing the correction or fact you want the agent to remember:
  • “The conversions column includes both view-through and click-through conversions. For ROAS calculations, use click_conversions only.”
  • “When the user asks about ‘last week’, use Monday–Sunday, not the rolling 7 days.”
Click Save. The learning is now stored and will be included in every future conversation with this agent.

Managing learnings

View and manage all learnings for an agent from the Learnings tab in the agent editor. You can:
  • Edit a learning to refine the wording
  • Delete a learning that is no longer accurate or relevant

Organisation-level vs. agent-level learnings

  • Agent learning — applies only to this agent; manage it from the Learnings tab in the agent editor
  • Organisation context — applies to all agents in the workspace; manage it from Settings → Organisation
For facts that apply everywhere — standard metric definitions, company naming conventions, market codes — use the organisation context instead of duplicating them across agents.
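The scoping rule amounts to a simple union: every agent sees the organisation context plus its own learnings. A sketch under that assumption (the example facts below are hypothetical):

```python
org_context = [
    "Reporting weeks run Monday-Sunday.",
    "ROAS means revenue divided by ad spend.",
]

agent_learnings = {
    "paid-search": ["For ROAS calculations, use click_conversions only."],
    "reporting": [],  # no agent-specific learnings yet
}

def context_for(agent_name: str) -> list[str]:
    # Organisation context applies everywhere; agent learnings to one agent.
    return org_context + agent_learnings.get(agent_name, [])
```

Keeping shared facts in the organisation context means a correction is made once and reaches every agent.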

Best practices

The orchestrator prompt should say in one sentence what the agent is for. Everything else follows from that. Vague prompts (“help with marketing data”) lead to inconsistent behaviour.
Don’t try to document every table upfront. Start chatting, and when the agent queries the wrong table or misinterprets a column, add a learning to correct it. Build context incrementally through real usage.
Dates are one of the most common sources of agent errors. If your reporting week starts on a Monday, say so. If “last month” means the full calendar month (not rolling 30 days), say so explicitly in the prompt.
Learnings are best for correcting specific facts: column meanings, ID formats, edge cases. Prompt text is best for behavioural rules: always show results by campaign, always present ROAS to 2 decimal places, always flag anomalies.