I built Codifica to avoid losing context when coordinating multiple AI agents
I’ve been building with AI agents on some personal projects and ran into an issue that felt small at first, but kept coming back.
Working with a single agent was mostly fine.
Coordinating multiple agents quickly became annoying.
Not because of prompts or model quality, but because context kept getting lost as work moved between agents. Intent got diluted, ownership got fuzzy, and I ended up being the source of truth by default.
I started experimenting with a small, opinionated protocol to make intent, state, and ownership explicit across humans and agents.
Codifica is not a product or a task manager.
It’s a lightweight communication standard built around a few simple ideas:
- humans express intent and judgment
- agents execute explicit contracts
- shared state lives in plain text
- Git becomes the audit log
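To make that concrete, here's a rough sketch of what a contract might look like under this model. The file name, field names, and values below are all hypothetical illustrations, not taken from the actual spec; the point is just that intent, ownership, and state live in a plain-text file that humans and agents both read.

```text
# contracts/refactor-auth.md  — hypothetical file name and fields

Intent:   Extract session handling out of the auth module (human-authored)
Owner:    agent-2
State:    in-progress
Contract:
  - touch only src/auth/ and src/session/
  - do not change public function signatures
  - hand back for human review before merging
```

Because the file is plain text and versioned, something like `git log -p contracts/refactor-auth.md` would show when intent changed, who owned the work, and how state moved over time, which is the "Git becomes the audit log" part.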
It’s early and rough, but it’s already been useful for me when working with more than one agent.
Write-up and spec here:
Curious if others here have run into similar coordination or context-loss problems with multi-agent setups, and how you’ve approached them.