Eidos AGI is an open-source community that builds governance infrastructure for AI agents. We make the tools that sit between "the model works" and "the model is safe to deploy." At the center is Eidos — a persistent agent with its own identity, memory, and judgment.
We started building these because we were deploying agents ourselves and realized nobody had built the governance layer. The models worked. The agents ran. But there was no system for what they were allowed to do, how decisions got made, or what happened after. So we built one.
Claude Code is extraordinary at writing code, debugging, and executing tasks. But it has no memory of what your project committed to. It doesn't know your guardrails. It can't tell you why a decision was made three sessions ago. Every session starts from zero.
- There's no system to record guardrails, goals, or architectural decisions that persist across sessions. The agent improvises every time.
- When the agent picks PostgreSQL over SQLite, that reasoning lives in conversation history — not in a reviewable, scored decision document.
- Tasks aren't tracked. There's no Definition of Done, no milestone awareness, no way to pick up where a crashed session left off.
The trilogy fills these gaps. It gives Claude Code the governance memory it doesn't have.
```python
# Agent decides on its own
result = agent.run("migrate the database schema")
# No guardrails checked
# No decision recorded
# No one knows what happened or why
deploy(result)
```

```
# 1. Boot governance — get active guardrails
visionlog_boot project_id: "..."

# 2. Check: does this task violate any guardrail?
guardrail_inject project_id: "..."

# 3. Research earns the decision
candidate_create title: "blue-green"
candidate_create title: "rolling"
criteria_lock
project_decide winner: "blue-green"

# 4. Execute within the contract
task_create title: "migrate schema"
  definition_of_done: ["tests pass", "rollback tested"]
task_complete notes: "deployed, verified"
```

Agents need governance before they act, evidence before they decide, and tracking after they execute. Three tools enforce that contract.
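That contract (governance before action, evidence before decision, tracking after execution) can be sketched in a few lines of Python. This is a hypothetical illustration only: `Governance`, `allowed`, `decide`, and `track` are invented names for this sketch, not the real visionlog, research.md, or ike.md APIs.

```python
# Hypothetical sketch of the govern -> decide -> track contract.
# None of these names come from the actual trilogy's APIs.
from dataclasses import dataclass, field


@dataclass
class Governance:
    guardrails: list[str] = field(default_factory=list)
    decisions: dict[str, str] = field(default_factory=dict)
    tasks: list[dict] = field(default_factory=list)

    def allowed(self, task: str) -> bool:
        # Governance before action: if a task hits a guardrail,
        # the answer is already no.
        return not any(g in task for g in self.guardrails)

    def decide(self, question: str, candidates: dict[str, int]) -> str:
        # Evidence before decision: pick the highest-scored candidate
        # and record it, instead of leaving it in conversation history.
        winner = max(candidates, key=candidates.get)
        self.decisions[question] = winner
        return winner

    def track(self, title: str, done: list[str]) -> None:
        # Tracking after execution: every task carries a Definition of Done.
        self.tasks.append({"title": title, "definition_of_done": done})


gov = Governance(guardrails=["drop table"])
assert not gov.allowed("drop table users")  # guardrail says no
strategy = gov.decide("deploy", {"blue-green": 9, "rolling": 7})
gov.track("migrate schema", ["tests pass", "rollback tested"])
print(strategy)  # blue-green
```

The real tools persist these records to reviewable documents rather than an in-memory object, but the shape of the contract is the same.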
visionlog — Vision, goals, guardrails, and ADRs. The contracts all execution must honor. If a task would violate a guardrail, the answer is already no.

pip install visionlog-md

research.md — Evidence-graded, phase-gated, peer-reviewed. Decisions earned with data, not assumed in conversation.

pip install research-md

ike.md — Tasks, milestones, Definition of Done. Named for Eisenhower.

pip install ike-md

| | visionlog | research.md | ike.md |
|---|---|---|---|
| Governs | What the project committed to | How decisions get earned | What work gets done |
| When | Before any work starts | Before a consequential choice | During execution |
| Outputs | Vision, goals, guardrails, ADRs | Scored candidates, decision brief | Tasks, milestones, completion notes |
| Install | pip install visionlog-md | pip install research-md | pip install ike-md |
pip install visionlog-md research-md ike-md

The whole trilogy. One line.

Amazon didn't create the internet. It created the trust infrastructure — reviews, ratings, guaranteed delivery — that made people willing to buy from strangers online. We're doing the same thing for AI agents.
We don't build LLMs. We build the governance, decisions, and execution standards that make them trustworthy enough to hand real work.
If the tools that govern your agents are closed, you're trusting the vendor. If they're open, you're trusting the code. We think that's a better deal. The core is MIT-licensed and always will be.
We build enterprise products too — governance at scale, compliance, SLAs. But the foundation is public and auditable. Trust us because you read the source, not because we asked nicely.
Agents are getting deployed into real systems right now — if you don't give them governed tools, explicit decisions, and agent-grade interfaces, you're shipping a probabilistic coworker with production access.
The open-source tools handle governance for individual teams. When you need it across an organization — multi-team policy enforcement, compliance reporting, audit trails, SSO — that's what the enterprise layer will do.
Same tools. Same codebase. More surface area.
Talk to us on GitHub. Everything is open source. Everything is MIT. If you see the gap — reach out.