Why prompt management matters
An agent’s quality depends heavily on how well its prompt is written. As you learn more about your data and how the agent behaves, you’ll want to iterate on the prompt — adding context, correcting misunderstandings, or refining instructions. Interact tracks every prompt change as a versioned history, so you can iterate safely without losing previous working configurations.

Prompt versioning
Every time you save an agent, Interact creates a new prompt version. Versions are numbered sequentially and timestamped.

Viewing version history
- Open an agent and click Edit agent
- Go to the Prompt history tab
- You’ll see a list of all saved versions with the date and a preview of what changed
Restoring a previous version
If an update makes the agent perform worse, you can roll back:

- In Prompt history, find the version you want to restore
- Click Restore — this loads that version’s prompt into the editor
- Click Save — a new version is created with the restored content
Restoring creates a new version; it doesn’t overwrite history. You can always see the full record of changes.
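The restore flow behaves like an append-only list of versions. The sketch below is purely illustrative — the class and method names are hypothetical, not part of Interact — but it captures the guarantee that restoring copies an old prompt forward rather than rewriting the past:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptHistory:
    """Append-only prompt version history: restoring never deletes anything."""
    versions: list = field(default_factory=list)  # (number, timestamp, prompt)

    def save(self, prompt: str) -> int:
        number = len(self.versions) + 1  # versions are numbered sequentially
        self.versions.append((number, datetime.now(timezone.utc), prompt))
        return number

    def restore(self, number: int) -> int:
        # Restoring copies an old prompt forward as a brand-new version,
        # so the full history is always preserved.
        _, _, old_prompt = self.versions[number - 1]
        return self.save(old_prompt)

history = PromptHistory()
history.save("v1 prompt")
history.save("v2 prompt (worse)")
history.restore(1)            # creates version 3 with v1's content
print(len(history.versions))  # 3 — nothing was overwritten
```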
Learnings
Learnings are a way to teach your agent things without editing the prompt directly. When the agent makes a mistake or you spot something important — a column description, a business rule, a naming convention — you can save it as a learning. Learnings are injected into the agent’s context automatically at the start of each conversation, gradually improving accuracy over time without requiring manual prompt rewrites.

Saving a learning from chat
When the agent returns a response, hover over it to reveal the Save as learning button. Click it to open the learning editor. Write the learning in plain language — describe the correction or fact you want the agent to remember:

“The `conversions` column includes both view-through and click-through conversions. For ROAS calculations, use `click_conversions` only.”
“When the user asks about ‘last week’, use Monday–Sunday, not the rolling 7 days.”

Click Save. The learning is now stored and will be included in every future conversation with this agent.
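Conceptually, learnings act like extra context prepended to the agent’s prompt before each conversation. A minimal sketch of that idea — the function and heading below are illustrative assumptions, not Interact’s actual implementation:

```python
def build_context(system_prompt: str, learnings: list[str]) -> str:
    """Combine the agent's prompt with saved learnings before a conversation starts."""
    if not learnings:
        return system_prompt
    notes = "\n".join(f"- {item}" for item in learnings)
    return f"{system_prompt}\n\nThings to remember:\n{notes}"

context = build_context(
    "You are a marketing analytics agent.",
    ["For ROAS calculations, use click_conversions only."],
)
print(context)
```

Because the learnings travel with the agent rather than the prompt text, you can correct facts without creating a new prompt version.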
Managing learnings
View and manage all learnings for an agent from the Learnings tab in the agent editor. You can:

- Edit a learning to refine the wording
- Delete a learning that is no longer accurate or relevant
Organisation-level vs. agent-level learnings
| Type | Scope | Where to manage |
|---|---|---|
| Agent learning | Only this agent | Learnings tab in the agent editor |
| Organisation context | All agents in the workspace | Settings → Organisation |
Best practices
Start with a clear agent purpose
The orchestrator prompt should say in one sentence what the agent is for. Everything else follows from that. Vague prompts (“help with marketing data”) lead to inconsistent behaviour.
Add table and column descriptions iteratively
Don’t try to document every table upfront. Start chatting, and when the agent queries the wrong table or misinterprets a column, add a learning to correct it. Build context incrementally through real usage.
Be explicit about date logic
Dates are one of the most common sources of agent errors. If your reporting week starts on a Monday, say so. If “last month” means the full calendar month (not rolling 30 days), say so explicitly in the prompt.
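To see why explicit date rules matter, compare a Monday–Sunday “last week” with a rolling 7-day window — both are reasonable readings of the same phrase, and they produce different ranges:

```python
from datetime import date, timedelta

def last_week_calendar(today: date) -> tuple[date, date]:
    """'Last week' as the most recent complete Monday–Sunday week."""
    start_of_this_week = today - timedelta(days=today.weekday())  # this Monday
    start = start_of_this_week - timedelta(days=7)
    return start, start + timedelta(days=6)

def last_week_rolling(today: date) -> tuple[date, date]:
    """'Last week' as the rolling 7 days ending yesterday."""
    return today - timedelta(days=7), today - timedelta(days=1)

today = date(2024, 5, 15)  # a Wednesday
print(last_week_calendar(today))  # May 6 – May 12
print(last_week_rolling(today))   # May 8 – May 14
```

Spelling out which interpretation you want, either in the prompt or as a learning, removes this entire class of ambiguity.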
Use learnings for corrections, prompts for rules
Learnings are best for correcting specific facts: column meanings, ID formats, edge cases.
Prompt text is best for behavioural rules: always show results by campaign, always present ROAS to 2 decimal places, always flag anomalies.