AICoven Tutorial: Covens, Keys, Roles, Orchestration & Memory
Learn how covens, provider keys (BYOK), roles, threads, routing, and memory work together so you and your team can orchestrate multiple AI agents on real projects.
1. What is a Coven?
A Coven is a shared workspace for a project or team:
- You, your teammates, and multiple AI roles (agents) all work in the same space.
- Each coven has:
  - Threads – conversations or tasks.
  - Roles (agents) – personas wired to specific models and tools.
  - Memory – shared project knowledge, with approvals and scopes.
Good naming pattern:
[Team] – [Project] (for example, Growth – Q2 Launch Plan, Infra – Incident Runbooks).
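If you want a mental model for what lives inside a coven, here is a minimal conceptual sketch. The shape and field names are illustrative only, not AICoven's actual data model:

```typescript
// Hypothetical shape of a coven workspace -- illustrative only,
// not AICoven's actual schema.
interface Coven {
  name: string;          // e.g. "Growth – Q2 Launch Plan"
  description?: string;  // optional note so teammates know the intent
  threads: Thread[];     // conversations or tasks
  roles: Role[];         // agent personas wired to models and tools
  memory: MemoryEntry[]; // shared project knowledge, with approvals
}

interface Thread { title: string; primaryRoleId: string }
interface Role { id: string; name: string; emoji: string; model: string }
interface MemoryEntry { scope: "agent" | "user" | "coven"; text: string; approved: boolean }
```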
2. Add Your Provider Keys (BYOK)
Before covens can really shine, you connect your own AI providers.
2.1 Why keys?
AICoven doesn't resell models. You bring your own:
- OpenAI (GPT-4, GPT-4o, etc.)
- Anthropic (Claude)
- Google Gemini
- Others (Mistral, etc.)
This keeps billing, data retention, and model choice under your control.
2.2 How to add a key
- Go to Settings → Provider Keys (or the FTUE “Connect Your AI Providers” step).
- Tap Add Provider Key.
- Choose a provider (for example, OpenAI).
- Fill in:
  - Display Name – how you'll recognize this key (for example, OpenAI – Prod, Claude – Personal).
  - API Key – paste your key from the provider's dashboard.
- Save. AICoven will:
  - Store the key encrypted at rest.
  - Run a quick health check and pull supported models.
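Under the hood, a health check like this usually boils down to one authenticated call that lists the models your key can access. A minimal sketch, assuming a plain HTTPS call to OpenAI's public model-listing endpoint (the check AICoven actually runs may differ):

```typescript
// Minimal sketch of a provider key health check against OpenAI's
// /v1/models endpoint. Illustrative only; not AICoven's real check.
async function checkOpenAIKey(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Key check failed: ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id); // the model IDs this key can use
}
```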
2.3 Important security note
- Your keys are encrypted and only decrypted in memory when needed.
- If you lose the encryption keys, your data may not be recoverable. That's why the FTUE emphasizes key safety rather than “end-to-end” claims.
- Treat provider keys like passwords to your AI accounts.
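To make "encrypted at rest, decrypted in memory" concrete, here is a minimal Node.js sketch using AES-256-GCM. It is not AICoven's actual implementation; it only illustrates why losing the master encryption key makes stored data unrecoverable:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative at-rest encryption of a provider key. masterKey must be
// 32 bytes (AES-256). Not AICoven's real code.
function encryptKey(plaintext: string, masterKey: Buffer) {
  const iv = randomBytes(12); // fresh nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptKey(stored: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, masterKey: Buffer) {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, stored.iv);
  decipher.setAuthTag(stored.tag);
  return Buffer.concat([decipher.update(stored.ciphertext), decipher.final()]).toString("utf8");
}

// If masterKey is ever lost, decryptKey can never succeed -- the stored
// provider keys (and anything protected by them) cannot be recovered.
```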
2.4 Tips
- Use different keys (or separate accounts/projects) for:
  - Personal experiments
  - Team / production work
- Give each key a clear display name that follows a pattern like [Team] – [Use] or [Env] – [Project].
3. Creating Your First Coven
Once at least one provider key is healthy:
- From the Covens tab (or sidebar), tap Create Coven.
- Name it something clear, for example:
  - Marketing – Website Refresh
  - Research – RAG Prototype
- Optional: add a short description so teammates know the intent.
When a coven is created, AICoven can also create a default assistant role so you can chat immediately.
4. Adding & Managing Roles (Agents)
Roles are the core of orchestration: each role is an AI agent with:
- A name and emoji (identity)
- A model and provider key
- A system prompt (persona and instructions)
- Optional tools and budgets
4.1 Creating a role
- Inside a coven, open Roles (Agent Roles) for that coven.
- Tap Add Role.
- Configure:
  - Name & Emoji – for example:
    - 🧠 Strategist
    - 🧾 Scribe
    - 👩‍💻 Coder
  - Model & Provider – pick from your healthy provider accounts.
  - Prompt / Description – what it does and how (for example, “You are a product strategist focused on go-to-market plans …”).
  - Temperature / Max tokens – control creativity and verbosity.
  - Tools (if enabled) – GitHub, Google Docs, Sheets, Calendar, etc.
  - Budget / usage hints – if exposed, keep expensive models reserved for critical roles.
- Save. The role becomes available in the thread role picker, in @mentions and Roundtable flows, and in memory scoping (agent-specific memory).
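As a mental model, a saved role is just a bundle of identity, model wiring, prompt, and limits. A hypothetical configuration sketch (field names and values are illustrative, not AICoven's actual settings schema):

```typescript
// Hypothetical role configuration -- field names are illustrative only.
const strategist = {
  name: "Strategist",
  emoji: "🧠",
  providerKey: "Claude – Research",  // one of your healthy BYOK keys
  model: "claude-3.5-sonnet",
  systemPrompt:
    "You are a product strategist focused on go-to-market plans. " +
    "Be concise, and always list assumptions and risks.",
  temperature: 0.7,                  // creativity
  maxTokens: 1500,                   // verbosity cap
  tools: ["gdocs"],                  // optional integrations
  budget: { monthlyUSD: 50 },        // keep expensive models for critical work
};
```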
4.2 Role design tips
- Start with three core roles per coven:
  - Planner / Strategist – plans, roadmaps, specs.
  - Executor / Coder / Analyst – does concrete work.
  - Scribe / Editor – cleans up, documents, and organizes.
- Use cheaper models for exploratory chat, brainstorming, and draft notes.
- Use premium models for final copy, coding tasks, complex reasoning, and critical analysis.
5. Threads, Conversations & Orchestration
5.1 Threads
A thread is a conversation inside a coven:
- Often maps to a task, spec, or sub-project.
- Can have a primary role (who owns the conversation).
- Shows history, memory interactions, and model usage.
To create one:
- Inside a coven, tap New Thread.
- Pick a role (or use the default assistant).
- Optionally set auto-route / default behaviors.
5.2 Primary role vs @mentions
- Primary role: the default agent responding in that thread.
- @mentions: bring in other agents ad-hoc without changing the primary.
Examples:
- @Coder to implement a change while Strategist remains primary.
- @Scribe to summarize a long discussion.
5.3 Agents talking to each other
The system is designed so agents can:
- See the same thread context (subject to permissions).
- Respond to each other's outputs when you @mention multiple roles or use Roundtable or Forward to Role.
Typical patterns:
- Roundtable: ask 2–3 roles the same question, compare answers side-by-side, and optionally reconcile into a single plan.
- Forward to Role / Handoff: “@Planner, finalize this plan and hand it off to @Coder.” The platform creates a linked sub-thread so Coder continues with context.
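A rough sketch of the Roundtable pattern: ask several roles the same question, collect the answers side by side, then optionally have one role reconcile them. The `askRole` function here is a stand-in assumption for whatever happens when a role is @mentioned in a thread, not a real AICoven API:

```typescript
// Hypothetical Roundtable helper. `askRole` stands in for the platform's
// "@mention a role in this thread" behaviour.
type AskRole = (roleName: string, prompt: string) => Promise<string>;

async function roundtable(askRole: AskRole, question: string, roleNames: string[]) {
  // 1. Ask each role the same question in parallel.
  const answers = await Promise.all(
    roleNames.map(async (name) => ({ role: name, answer: await askRole(name, question) }))
  );

  // 2. Compare side by side (in the UI these show up as parallel replies).
  for (const { role, answer } of answers) {
    console.log(`--- ${role} ---\n${answer}\n`);
  }

  // 3. Optionally reconcile into a single plan via one role (e.g. the Scribe).
  const digest = answers.map((a) => `${a.role}: ${a.answer}`).join("\n\n");
  return askRole("Scribe", `Reconcile these answers into one plan:\n\n${digest}`);
}
```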
6. Model Availability & Routing
Behind the scenes, each role knows which provider account it uses, which models are available, and any optional fallback routes.
6.1 Basic routing
- The system checks the role's preferred model (for example, claude-3.5-sonnet).
- It also knows which provider account / API key to use.
- If the call fails or hits a budget cap, it can fall back to a secondary model (for example, gpt-4.1-mini) and surface this as a breadcrumb in the UI (for example, Claude → GPT).
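You can picture the fallback behaviour as a try-in-order loop over the role's preferred model and its backups, recording which model actually answered. A minimal sketch under that assumption (`callModel` and the route shape are placeholders, not AICoven's API):

```typescript
// Hypothetical fallback routing. `callModel` stands in for a real provider call.
type CallModel = (provider: string, model: string, prompt: string) => Promise<string>;

interface Route { provider: string; model: string }

async function routeWithFallback(callModel: CallModel, routes: Route[], prompt: string) {
  // routes e.g. [{ provider: "anthropic", model: "claude-3.5-sonnet" },
  //              { provider: "openai",    model: "gpt-4.1-mini" }]
  const breadcrumbs: string[] = [];
  for (const { provider, model } of routes) {
    try {
      const reply = await callModel(provider, model, prompt);
      breadcrumbs.push(model);
      // breadcrumb shown in the UI, e.g. "claude-3.5-sonnet (failed) → gpt-4.1-mini"
      return { reply, route: breadcrumbs.join(" → ") };
    } catch {
      breadcrumbs.push(`${model} (failed)`);
    }
  }
  throw new Error(`All routes failed: ${breadcrumbs.join(" → ")}`);
}
```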
6.2 Tips for routing
- Give critical roles at least one backup model on another provider (for example, Planner: Claude → GPT-4o, Coder: GPT-4.1 → Claude).
- Use separate provider accounts for experimental vs production usage.
- Watch the model badges in chat to understand which models actually executed.
7. Memory: What Agents Remember
AICoven uses a memory fabric with clear scopes:
- Agent memory – private to a role (its preferences, working style).
- User memory – personal notes about you.
- Coven memory – project-shared knowledge (requirements, docs, decisions).
7.1 How memory is written
- During chat, agents propose memory events (for example, “We should remember X”).
- These go into memory_events as proposed.
- You (or a policy) approve them from memory chips in the thread or from the Memory Explorer UI.
- Approved entries become searchable memory chunks.
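The proposal-then-approval flow can be pictured as records moving through states. A hypothetical sketch of one memory event (field names are illustrative; the tutorial only guarantees that agent proposals land in memory_events with a proposed status):

```typescript
// Hypothetical memory event lifecycle: proposed -> approved (or rejected).
interface MemoryEvent {
  id: string;
  covenId: string;
  scope: "agent" | "user" | "coven";
  text: string;                          // e.g. "Launch budget is capped at $20k"
  tags?: string[];                       // e.g. ["#decisions"]
  status: "proposed" | "approved" | "rejected";
}

function approve(event: MemoryEvent): MemoryEvent {
  // Approval is the step that turns a proposal into a searchable memory chunk.
  return { ...event, status: "approved" };
}
```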
7.2 How memory is read
- When you send a message, the memory service looks at the thread, coven, user, and agent scopes and searches for top-k relevant chunks (vector + keyword + tags).
- The system builds a context sandwich: system prompt (role config), retrieved memory snippets, recent thread messages, and your new message.
- This gives each reply grounded context without you pasting huge prompts every time.
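Here is a sketch of that "context sandwich" assembly, assuming a retrieval helper that returns the top-k relevant chunks across the allowed scopes. All names are illustrative, not AICoven's internals:

```typescript
// Hypothetical prompt assembly: system prompt, retrieved memory, recent
// messages, then the new user message. Retrieval details (vector + keyword
// + tags) are hidden behind `searchMemory`.
type SearchMemory = (query: string, scopes: string[], k: number) => Promise<string[]>;

async function buildContextSandwich(
  searchMemory: SearchMemory,
  role: { systemPrompt: string },
  recentMessages: { author: string; text: string }[],
  newMessage: string
) {
  const memory = await searchMemory(newMessage, ["thread", "coven", "user", "agent"], 5);
  return [
    { role: "system", content: role.systemPrompt },
    { role: "system", content: `Relevant project memory:\n- ${memory.join("\n- ")}` },
    ...recentMessages.map((m) => ({ role: "user", content: `${m.author}: ${m.text}` })),
    { role: "user", content: newMessage },
  ];
}
```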
7.3 Memory tips
- Use tags (if available) like #requirements, #decisions, #risk.
- Regularly review proposed writes in Memory Explorer – approve the truly reusable pieces and reject ephemeral or logistical stuff.
- Keep agent prompts short; let memory carry project details.
8. Putting It All Together: Example Flow
Here's a concrete “happy path” using all the pieces:
- Connect providers
  - Add an OpenAI key (OpenAI – Team) and a Claude key (Claude – Research).
- Create a coven
  - Growth – Q2 Launch.
- Add roles
  - 🧠 Strategist → Claude Sonnet, tools: gdocs.
  - 👩‍💻 Coder → GPT-4.1, tools: GitHub.
  - 🧾 Scribe → cheaper model, tools: gdocs, gdrive.
- Start a thread
  - Thread: Launch Plan – Website Revamp.
  - Primary role: 🧠 Strategist.
- Orchestrate
  - Ask Strategist for a launch plan.
  - Roundtable with @Strategist + @Coder for risk analysis.
  - Forward the final plan to @Scribe to create docs.
- Use memory
  - Approve key decisions and requirements into coven memory.
  - Next time you start a thread in that coven, agents automatically recall the constraints (budget, target audience, tone) and route work to the right models with context.
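If it helps to see the whole happy path as data, here is the same setup expressed as one hypothetical configuration object. AICoven itself is configured through the UI, not a file; this is just a compact recap, and the Scribe's model is only an example of "a cheaper model":

```typescript
// The example flow above, expressed as illustrative data only.
const q2Launch = {
  providerKeys: ["OpenAI – Team", "Claude – Research"],
  coven: "Growth – Q2 Launch",
  roles: [
    { emoji: "🧠", name: "Strategist", model: "claude-3.5-sonnet", key: "Claude – Research", tools: ["gdocs"] },
    { emoji: "👩‍💻", name: "Coder", model: "gpt-4.1", key: "OpenAI – Team", tools: ["github"] },
    { emoji: "🧾", name: "Scribe", model: "gpt-4.1-mini", key: "OpenAI – Team", tools: ["gdocs", "gdrive"] },
  ],
  thread: { title: "Launch Plan – Website Revamp", primaryRole: "Strategist" },
  memoryToApprove: ["budget", "target audience", "tone"],
};
```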