Flotilla: The Docker of AI Agents.

Stop duct-taping random chats. Flotilla is the orchestration layer that bootstraps your autonomous fleet with shared memory, vault-first security, and a unified Kanban bridge on your own hardware.

Install: npx create-flotilla

One command to deploy a professional-grade engineering fleet.

Bootstrap a multi-model team with shared operating memory, secure secret delivery, and a management layer your humans can actually read.

Persistent State: Shared memory, standups, and architectural context survive across sessions and models.
Vault-First Security: Infisical-based secret delivery with no `.env` files leaking into agent context.
Shared Learning: Lessons learned are retained, reviewed, and propagated so agents stop repeating the same errors.
Always-On Operations: PocketBase, dispatcher heartbeats, and Open Claw pathways keep the fleet operational beyond a single chat session.

The operating system for autonomous engineering teams

Flotilla standardizes the bootstrap, security, and coordination layer so your agents stop behaving like isolated chat tabs and start working like a professional team.

  • Shared Consciousness: Centralized Markdown memory files link coding conventions, reporting rules, and project states.
  • Resilient Architecture: Model-agnostic by design. Use Claude for logic, Gemini for context, and Codex for speed.
  • GitHub Kanban Orchestration: Agents pull tickets from your board and report progress asynchronously.
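As a sketch of the ticket-pull side, an agent can poll GitHub's standard issues API for labeled work (the `agent-ready` label and helper names here are illustrative assumptions, not part of Flotilla):

```python
import json
import urllib.request

def issues_url(owner: str, repo: str, label: str = "agent-ready") -> str:
    # GitHub REST endpoint for open issues carrying a given label.
    # The "agent-ready" label name is an assumption for this sketch.
    return (f"https://api.github.com/repos/{owner}/{repo}/issues"
            f"?state=open&labels={label}")

def fetch_agent_tickets(owner: str, repo: str, token: str) -> list:
    # An agent polls the board for tickets, then reports progress
    # asynchronously via issue comments.
    req = urllib.request.Request(
        issues_url(owner, repo),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```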

Mission Control for your AI Workforce

A centralized, human-readable management plane where you monitor, direct, and teach your workforce. The snapshots below are meant to give visitors a feel for the UI at a glance.

Acme Engineering
Fleet Hub
Team
Projects
Rules
Memory
Standups
Fleet Crew
Agentic Team
Role-specific agents working in sync with visible environments, responsibilities, and operating context.
  • Logic Agent: Lead Implementation
  • Architecture Agent: System Design
  • QA Agent: Verification
  • Commercial Agent: Lead Ops
Knowledge Base
Memory Tree
Centralized project context and architectural blueprints, linked directly to the shared markdown memory system.
Global Rules
  • Team Rules: Commit, Kanban, and shared-memory protocols.
  • Mission Control: The operational source of truth for all agents.
Project Context
  • Architecture: Component maps and system boundaries.
  • Lessons Learned: Shared notes to avoid repeating mistakes.
Daily Pulse
Standups
Session logs that show what each agent finished, what changed, and what still needs human attention.
2026-03-16
2026-03-15
Current Session Log
Done
  • Homepage contact form wired into Google Sheets.
  • Vault-first security layer documentation complete.
Today
  • Review showcase section and UI redesign.
  • Validate production deployment paths.
Execution Board
Kanban Bridge
Tickets stay synchronized across the human board and the agent fleet.
In Progress
  • #47 Homepage Redesign: Hero rewrite and CTA cleanup.
  • #45 Telegram Bridge: Two-way command channel sync.
Evolutionary Memory
Lessons Ledger
The fleet keeps a running memory of architectural decisions and operating lessons.
Approved Insights
  • Auth Cookie Parsing: Split signed session data correctly.
  • Regional API Domains: Explicit Infisical domains avoid failures.

Most AI deployments break where engineering teams actually live.

Standard AI deployments fail because they lose context, drift across sessions, and create unpredictable operating costs.

1

Memory Loss

Stateless agents forget architecture and prior decisions once the tab closes, so you have to re-explain conventions and context every session.

2

Runaway API Token Costs

Usage-based token billing can create unexpected large bills and unpredictable economics for engineering teams.

3

Copy-Paste Coordination

Teams waste time relaying tasks, answers, and context manually between tabs because the agents have no shared command layer.

4

Evolutionary Leak

Agents don't learn from mistakes. Without a "Lessons Learned" ledger, your fleet repeats the same errors every session.

5

Security Trap

Hardcoding secrets in `.env` files is a massive risk. Most setups lack a professional, vault-backed security layer.

Resilient multi-agent infrastructure with human-readable control.

The Fleet Hub is a management layer for coordinated agents: shared memory, vault-backed secret access, role specialization, and explicit reporting.

1

Memory Tree

Architecture, context, and project decisions stay versioned and accessible across sessions.

2

Mission-Control Sync

Agents start from the same source of truth and synchronize before they act.

3

Coordinated Execution

Stop copy-pasting and start coordinating through shared tickets, inboxes, standups, and human-readable state.

4

Evolutionary Learning

Lessons learned can be reviewed, approved, and pushed back into future agent behavior.

5

Vault-First Secrets

Secrets are fetched on demand through Infisical scripts and injected in-memory.

Standardized agent orchestration, not improvised chat ops.

Flotilla is strongest when the coordination layer is explicit. The package gives teams a repeatable operating model instead of a loose pile of prompts, tabs, and undocumented rituals.

MISSION_CONTROL.md

The shared cognitive layer. Every agent re-syncs against the same mission context, rules, ticket state, and architectural source of truth before acting.
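A minimal sketch of that re-sync step, assuming the shared Markdown memory lives at the project root (the companion file names besides MISSION_CONTROL.md are illustrative assumptions):

```python
from pathlib import Path

# File names other than MISSION_CONTROL.md are assumptions for this sketch.
MEMORY_FILES = ["MISSION_CONTROL.md", "RULES.md", "LESSONS_LEARNED.md"]

def load_shared_context(root: str = ".") -> str:
    # Concatenate the shared Markdown memory so every agent starts
    # its session from the same source of truth before acting.
    parts = []
    for name in MEMORY_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(parts)
```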

Lessons Learned

Approved memory entries become reusable operating knowledge, so field fixes and architectural constraints survive the next session and the next model.

Kanban Bridge

GitHub and dashboard work stay aligned. Humans can see ticket state, and agents can move work without relying on fragile copy-paste handoffs.

Vault-First Security

Secrets are fetched on demand through Infisical workflows. No hardcoded `.env` files, no credential sprawl in chat history, and no leaked keys in memory ledgers.
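A sketch of that in-memory fetch pattern via the Infisical CLI (the exact subcommand and flags vary by CLI version, so treat the command shape as an assumption and verify it against your installation):

```python
import subprocess

def infisical_cmd(name: str, env: str = "prod") -> list[str]:
    # Assumed CLI shape: `infisical secrets get <NAME> --env=<env> --plain`,
    # printing the raw secret value to stdout; check your CLI version.
    return ["infisical", "secrets", "get", name, f"--env={env}", "--plain"]

def get_secret(name: str, env: str = "prod") -> str:
    # The value lives only in process memory: nothing is written
    # to disk, to a .env file, or into agent chat context.
    result = subprocess.run(infisical_cmd(name, env),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```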

Multi-Model Orchestration

Coordinate Claude, Gemini, Mistral, and Codex in parallel with shared conventions for handoffs, standups, lessons learned, and task routing.

Choice and predictable economics.

Own your AI stack in cost, in model choice, and in operating discipline.

Cost

Predictable cost by design

Move away from runaway token billing toward fixed-cost operating models with no mid-project cost surprises.

Choice

Freedom to choose models

Stay model-agnostic. Use cloud models where they fit and run local models where privacy, latency, or sovereignty matter.

Discipline

Swiss precision

Based in Zurich, Big Bear Engineering focuses on production-grade execution, clean architecture, and disciplined workflows instead of AI hype.

See the fleet in real customer-facing workflows.

These live demos show actual implementations built on top of the Fleet Hub for different customer types.

What you need before deploying the fleet.

The platform works best when the operational prerequisites are already in place.

Required inputs

  • GitHub projects and repositories where agents can pick tickets and report work.
  • Licensed agents or model subscriptions for the roles you want to run.
  • A secure secret-management path such as Infisical for production workflows.

Optional always-on layer

  • Open Claw if you want a fleet that never sleeps and can be steered remotely.
  • Telegram as the mobile control surface for approvals, nudges, and human-in-the-loop events.

PocketBase Integration

  • Task, comment, heartbeat, and lesson collections for always-on coordination.
  • Human-readable operational state instead of hidden prompt state.

The Dispatcher

  • Python heartbeat loop pre-configured for macOS launchd.
  • Turns standups, reviews, and waiting-human events into a disciplined workflow.
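The heartbeat's core decision step might look like this pure routing function (the status values and record fields are assumptions about the task schema, not a fixed contract):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=30)  # assumed staleness threshold

def triage(tasks: list[dict], now: datetime) -> list[tuple[str, str]]:
    # Map task records to dispatcher actions: escalate anything waiting
    # on a human, nudge agents whose in-progress work has gone quiet.
    actions = []
    for task in tasks:
        if task["status"] == "waiting-human":
            actions.append(("escalate", task["id"]))
        elif task["status"] == "in-progress" and now - task["updated"] > STALE_AFTER:
            actions.append(("nudge", task["assignee"]))
    return actions
```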

Telegram Listener

  • Mobile command-and-control path for the fleet manager.
  • Keeps the human operator in the loop without sitting inside the dashboard all day.

Shared Learning Loop

  • Lessons learned ledger captures approved fixes and operating constraints.
  • Turns hard-won troubleshooting into reusable fleet behavior instead of tribal memory.
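The ledger itself can be as simple as appended Markdown entries with an approval flag; a sketch of one possible entry layout (the format is an assumption, not Flotilla's actual schema):

```python
def format_lesson(title: str, body: str, approved: bool = False) -> str:
    # One ledger entry per lesson; only approved entries get pushed
    # back into agent context on the next session.
    status = "approved" if approved else "pending-review"
    return f"### {title} [{status}]\n\n{body}\n"
```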

Start with the package, scale into deployment.

The open-source Flotilla package is the entry point. Big Bear Engineering is the upgrade path when you want the orchestration layer installed, tuned, and made operational for your team.

Open

Flotilla Package

Open-source starter kit - Free

  • Zero-install scaffolder: npx create-flotilla
  • Mission control, rules, standups, inbox, and lessons scaffolding
  • Kanban bridge and model-agnostic runtime conventions
  • PocketBase-ready always-on fleet structure
  • Documentation and GitHub source
DocumentationGitHub →
Premium

Fleet Deployment Intensive

One-day hands-on engagement - Contact us

  • Everything in the Flotilla Package
  • Full stack installation on your local hardware and private cloud
  • Workflow mapping that translates your engineering processes into agentic roles
  • Live execution until your fleet clears its first real-world tickets
  • Post-deployment support and tuning included
  • Additional support hours available separately
Add-On

Growth Expansion

Commercial extension for sales and marketing workflows.

  • Scout and Echo agent roles
  • Autonomous CRM lead pipeline
  • Content calendar integration

Common questions from technical buyers.

How do you control API costs?
Instead of API keys with per-token billing, we configure agents to authenticate via OAuth with standard subscriptions (e.g. Claude Pro). This turns unpredictable costs into a flat monthly fee (~$20/agent). If one subscription hits its limits, the fleet keeps moving on the other available models.

Why multi-model instead of a single provider?
Engineering workflows are heterogeneous. Claude leads in logic, Gemini in context, Codex in speed. A model-agnostic fleet is more resilient than one tied to a single provider.

How are secrets handled?
Through a Vault-First, Zero-Footprint architecture using Infisical. Agents fetch secrets on demand directly into memory; no keys are ever stored on disk or in `.env` files.

What role does PocketBase play?
PocketBase stores the live operational state for always-on fleets: tasks, comments, heartbeats, and lesson records. It gives both humans and agents a shared state backend instead of relying on hidden chat history.

What is the Dispatcher?
The Dispatcher is the lightweight orchestration loop that watches task state, routes work to the right agent, and escalates waiting-human events. On macOS, it is designed to run continuously under launchd.

What is Open Claw?
Open Claw is an optional always-on control layer for operators who want their fleet reachable outside the dashboard. Paired with Flotilla, it adds a practical path for long-running supervision and command relay.

What does the Telegram listener do?
It turns your phone into a command-and-control surface, relaying alerts, approvals, and replies so you can manage the fleet without being pinned to the browser.

What do I need to get started?
A GitHub account for orchestration, at least one Pro agent license, and an Infisical account for secure secret management.

Which models do you recommend for data sovereignty?
We recommend Mistral and Apertus for teams prioritizing data sovereignty. They run on your own hardware with zero data sent to third parties.

What does the dashboard show?
A live view of agent status, a Kanban view of work in progress, project-specific memory trees, evolutionary lessons learned, and automated daily standups for audit trails.

How long is the Fleet Deployment Intensive?
It is a one-day hands-on engagement. We don't consider it finished until your fleet clears its first real-world production tickets on your own infrastructure.

Is ongoing support included?
Yes. Commercial plans include monthly support hours and continuous platform updates to keep your fleet running on the latest protocols.

Can the Fleet Hub manage multiple projects?
Yes. It is designed to manage several engineering projects simultaneously, each with its own context, agents, and Kanban boards.

Why is GitHub central to the setup?
GitHub is the center of engineering. We leverage it for Kanban ticketing, source control of rules, shared-memory storage via Markdown files, and coordination across multiple development teams working in parallel.