How it works

AICoven orchestrates multiple models with shared memory. The core ideas: Context Sandwich, routing with fallback, and encrypted provider keys.

Coven tutorial

A practical guide to setting up covens, roles, routing, and memory.

1) What is a Coven?

A Coven is a shared workspace for a project or team. You, your collaborators, and multiple AI roles all work in the same space.

  • Each coven contains threads (chats or tasks).
  • Each coven has roles (agents) with models and tools.
  • Shared memory stores project knowledge for all roles.
Screenshots: Covens list, Coven threads, Agent roles
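For intuition, the hierarchy can be pictured roughly like this (type names are illustrative, not AICoven's actual schema):

  // Illustrative types only; the real data model may differ.
  interface Coven {
    id: string;
    name: string;              // e.g. "Acme / Website redesign"
    threads: Thread[];         // chats or tasks
    roles: Role[];             // agents with models and tools
    memory: MemoryEntry[];     // shared project knowledge, visible to all roles
  }

  interface Thread { id: string; title: string; primaryRoleId: string }
  interface Role { id: string; name: string }
  interface MemoryEntry { id: string; text: string; scope: "agent" | "personal" | "coven" }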

2) Connect your AI providers (BYOK)

AICoven doesn’t resell models. Bring your own keys for OpenAI, Anthropic, Google Gemini, and more.

  • Manage keys in Settings → Provider Keys.
  • Name keys clearly for teams and projects.
  • Keys are encrypted at rest and decrypted in memory only.
Provider keys screen

3) Connected apps

Connect GitHub, Google Drive, and more so agents can work where your files already live — always with explicit permission.

  • Connections can be revoked at any time.
  • Tokens are stored encrypted.
  • Access is scoped to approved repos and folders.
Connected apps screen

4) Budgets & usage

Track token usage and control spend across providers.

  • Set monthly budgets to stay predictable.
  • Review recent activity for model usage and costs.
  • Separate experiments from production.
Budgets and usage screen

5) Create roles (agents)

Roles are AI agents with names, prompts, models, and tools.

  • Give each role a clear name and emoji.
  • Choose a provider key + model for each role.
  • Attach tools like GitHub or Docs when relevant.
Screenshots: Add agent role, Role template selection
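For intuition, a role can be pictured as a small record like this (field names are hypothetical, not AICoven's actual schema):

  // Hypothetical shape of a role definition; real fields may differ.
  interface RoleConfig {
    name: string;                            // clear, task-oriented name
    emoji: string;                           // quick visual identity in threads
    systemPrompt: string;                    // instructions that define the role
    providerKeyId: string;                   // which BYOK key the role uses
    model: string;                           // preferred model for this role
    tools: ("github" | "docs" | "drive")[];  // attached integrations
  }

  const reviewer: RoleConfig = {
    name: "Backend Reviewer",
    emoji: "🛠️",
    systemPrompt: "Review pull requests for correctness and style.",
    providerKeyId: "key_team_anthropic",     // example key name
    model: "claude-3-5-sonnet",              // example model id
    tools: ["github"],
  };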

6) Threads & orchestration

Threads are conversations inside a coven. Each thread has a primary role, and you can pull in others with @mentions.

  • Set a primary role for default responses.
  • Type @RoleName to invite a specialist.
  • Use @mentions for a single specialist response.
Screenshots: Mention an agent, Mentioned agents replying
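One way to picture the routing (a simplified sketch, not the actual implementation): roles mentioned with @ reply; otherwise the thread's primary role does.

  // Simplified sketch of @mention routing; names are illustrative.
  interface Role { id: string; name: string }

  function rolesToRespond(message: string, roles: Role[], primaryRoleId: string): Role[] {
    // A message like "@Reviewer please take a look" invites that specialist.
    const mentioned = roles.filter((r) =>
      message.toLowerCase().includes("@" + r.name.toLowerCase())
    );
    if (mentioned.length > 0) return mentioned;          // mentioned specialists reply
    return roles.filter((r) => r.id === primaryRoleId);  // otherwise the primary role replies
  }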

7) Models & routing

Each role has a preferred model. If that model fails or hits a rate limit, the router falls back to other healthy models you’ve enabled.

  • Pick models per role by task type.
  • Fallbacks keep workflows resilient.
  • Set a default model per role to control quality.
Select default model
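Conceptually, each role carries a preferred model plus an ordered list of fallbacks, along these lines (a hypothetical configuration with example model names):

  // Hypothetical per-role routing preferences; the real config format may differ.
  interface ModelRouting {
    preferred: string;     // tried first, chosen for the role's task type
    fallbacks: string[];   // tried in order if the preferred model fails or hits a rate limit
  }

  const routingByRole: Record<string, ModelRouting> = {
    "Backend Reviewer": { preferred: "claude-3-5-sonnet", fallbacks: ["gpt-4o", "gemini-1.5-pro"] },
    "Copywriter":       { preferred: "gpt-4o", fallbacks: ["claude-3-5-sonnet"] },
  };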

8) Memory & context

Agents propose memory writes. You approve them, and they become reusable context across threads.

  • Scopes: agent, personal, or coven‑wide.
  • Search and pin important memories.
  • Delete stale memories to keep agents sharp.
Screenshots: Memory explorer, Memory and agent roles

9) Practical tips

  • Name covens by team and project.
  • Use different provider keys for personal vs team work.
  • Keep roles narrowly scoped for better quality.
  • Review memory proposals regularly.

How it works, in more detail

1) Context Sandwich

Requests are constructed as a layered bundle:

  1. System prompt with role configuration
  2. Retrieved memories scoped to the current thread/workspace
  3. Recent thread messages (windowed)
  4. Current user message
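As a mental model, the assembly could look like this sketch (types and function names are illustrative, not AICoven internals):

  // Illustrative assembly of the Context Sandwich; not the actual implementation.
  interface ChatMessage { role: "system" | "user" | "assistant"; content: string }

  function buildContextSandwich(
    systemPrompt: string,           // layer 1: role configuration
    memories: string[],             // layer 2: memories scoped to this thread/workspace
    recentMessages: ChatMessage[],  // layer 3: thread history
    userMessage: string,            // layer 4: the current message
    windowSize = 20,
  ): ChatMessage[] {
    return [
      { role: "system", content: systemPrompt },
      { role: "system", content: "Relevant memory:\n" + memories.join("\n") },
      ...recentMessages.slice(-windowSize),   // keep only the most recent window
      { role: "user", content: userMessage },
    ];
  }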

2) Orchestration with fallback

Each request can target a preferred model but includes provider‑aware fallback paths. If a provider is unavailable or you’ve hit a quota, the router transparently retries on a compatible model you’ve enabled.
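A rough sketch of that fallback loop, with the provider client injected and all names illustrative:

  // Rough sketch of routing with fallback; not the actual router.
  interface ChatMessage { role: "system" | "user" | "assistant"; content: string }
  type CallModel = (model: string, messages: ChatMessage[]) => Promise<string>;

  async function routeWithFallback(
    models: string[],        // preferred model first, then enabled fallbacks
    messages: ChatMessage[],
    callModel: CallModel,    // provider-specific client, injected
  ): Promise<string> {
    let lastError: unknown;
    for (const model of models) {
      try {
        return await callModel(model, messages);  // first healthy model wins
      } catch (err) {
        lastError = err;                          // outage or quota hit: try the next model
      }
    }
    throw lastError;                              // every enabled model failed
  }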

Memory approvals

Proposed memories are queued for your review. Approved items become searchable context for future work; rejected items are not retained.
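Conceptually it is a small state machine; a sketch with illustrative field names:

  // Illustrative memory-approval lifecycle; not the actual data model.
  type MemoryStatus = "proposed" | "approved" | "rejected";

  interface MemoryProposal {
    id: string;
    text: string;                           // what the agent wants to remember
    scope: "agent" | "personal" | "coven";  // where it applies once approved
    status: MemoryStatus;
  }

  function review(p: MemoryProposal, approve: boolean): MemoryProposal | null {
    if (!approve) return null;              // rejected items are not retained
    return { ...p, status: "approved" };    // approved items become searchable context
  }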

Encryption for provider keys

Provider API keys (OpenAI, Anthropic, Google, etc.) are stored using envelope encryption on the backend. When you add a key, the API generates a fresh data encryption key (DEK), encrypts your key with AES-256-GCM, then encrypts that DEK in turn with a long-lived master key kept in the server environment. Only the encrypted blob and a key ID are stored in the database; the raw key is never written to disk or logs.
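A minimal sketch of that flow using Node's built-in crypto module; key management is simplified here and the code is illustrative, not AICoven's actual implementation:

  // Sketch of envelope encryption for a provider key; simplified, not production code.
  import { randomBytes, createCipheriv } from "node:crypto";

  // masterKey: assumed to be a 32-byte key loaded from the server environment.
  function encryptProviderKey(providerKey: string, masterKey: Buffer) {
    // 1) Generate a fresh data encryption key (DEK) for this provider key.
    const dek = randomBytes(32);

    // 2) Encrypt the provider key with the DEK (AES-256-GCM).
    const keyIv = randomBytes(12);
    const keyCipher = createCipheriv("aes-256-gcm", dek, keyIv);
    const encryptedKey = Buffer.concat([keyCipher.update(providerKey, "utf8"), keyCipher.final()]);
    const keyTag = keyCipher.getAuthTag();

    // 3) Wrap the DEK with the long-lived master key.
    const dekIv = randomBytes(12);
    const dekCipher = createCipheriv("aes-256-gcm", masterKey, dekIv);
    const wrappedDek = Buffer.concat([dekCipher.update(dek), dekCipher.final()]);
    const dekTag = dekCipher.getAuthTag();

    // Only the encrypted blobs (plus a key ID) are persisted; the raw key never is.
    return { encryptedKey, keyIv, keyTag, wrappedDek, dekIv, dekTag };
  }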

End-to-end chat protection

On Mac, iPad, and iPhone, the Swift client generates a unique 256-bit key per Apple account and stores it in the iCloud Keychain. That key never leaves your devices in plaintext. Before a chat request is sent, the app derives a Base64 representation of this key and sends it once per request in the X-Coven-Key header.

The API uses that key to optionally encrypt message content with AES-GCM before persisting it, and stores only ciphertext plus a hash-based fingerprint of the key for integrity checks. When the Swift app loads messages, it transparently decrypts any is_encrypted content on device. If a key is missing or invalid, the system safely falls back rather than writing new encrypted data with the wrong key.
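A simplified sketch of the server side of this scheme, with field names following the description above (the real implementation will differ in detail):

  // Simplified sketch: encrypt message content with the client-supplied key
  // from the X-Coven-Key header. Illustrative only.
  import { createCipheriv, createHash, randomBytes } from "node:crypto";

  function storeMessageContent(content: string, base64Key?: string) {
    const key = base64Key ? Buffer.from(base64Key, "base64") : null;
    if (!key || key.length !== 32) {
      // Missing or invalid key: fall back to plaintext storage rather than
      // encrypting with the wrong key.
      return { is_encrypted: false, content };
    }

    const iv = randomBytes(12);
    const cipher = createCipheriv("aes-256-gcm", key, iv);
    const ciphertext = Buffer.concat([cipher.update(content, "utf8"), cipher.final()]);

    return {
      is_encrypted: true,
      ciphertext,
      iv,
      tag: cipher.getAuthTag(),
      // Hash-based fingerprint of the key, kept for integrity checks;
      // the key itself is never persisted.
      key_fingerprint: createHash("sha256").update(key).digest("hex"),
    };
  }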