eidos agi

Software
for agents.

Governance
for reality.


Who

Eidos AGI is an open-source community that builds governance infrastructure for AI agents. We make the tools that sit between "the model works" and "the model is safe to deploy." At the center is Eidos — a persistent agent with its own identity, memory, and judgment.

Why

We started building these tools because we were deploying agents ourselves and realized nobody had built the governance layer. The models worked. The agents ran. But there was no system for what they were allowed to do, how decisions got made, or what happened after. So we built one.

Why Claude Code is not enough

Claude Code is extraordinary at writing code, debugging, and executing tasks. But it has no memory of what your project committed to. It doesn't know your guardrails. It can't tell you why a decision was made three sessions ago. Every session starts from zero.

No governance

There's no system to record guardrails, goals, or architectural decisions that persist across sessions. The agent improvises every time.

No decision trail

When the agent picks PostgreSQL over SQLite, that reasoning lives in conversation history — not in a reviewable, scored decision document.

No execution tracking

Tasks aren't tracked. There's no Definition of Done, no milestone awareness, no way to pick up where a crashed session left off.

The trilogy fills these gaps. It gives Claude Code the governance memory it doesn't have.
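What that governance memory might minimally look like, as a hedged sketch: the class and field names below are illustrative only, not the actual visionlog, research.md, or ike.md schemas.

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative state only -- not the real tools' schema.
@dataclass
class ProjectGovernance:
    guardrails: list = field(default_factory=list)  # rules that persist across sessions
    decisions: list = field(default_factory=list)   # why a choice was made, and over what
    tasks: list = field(default_factory=list)       # work items with a Definition of Done

    def save(self, path):
        # Persisting to disk is what gives a fresh session its memory.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(**json.load(f))

gov = ProjectGovernance()
gov.guardrails.append("never drop a production table")
gov.decisions.append({"choice": "PostgreSQL", "over": "SQLite",
                      "reason": "concurrent writers"})
gov.save("governance.json")

# A new session starts from the record, not from zero.
restored = ProjectGovernance.load("governance.json")
```

The point is the round-trip: the decision about PostgreSQL survives the session that made it.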

Compare

Without governance

# Agent decides on its own
result = agent.run("migrate the database schema")
# No guardrails checked
# No decision recorded
# No one knows what happened or why
deploy(result)

With the trilogy (real MCP tool calls)
# 1. Boot governance — get active guardrails
visionlog_boot  project_id: "..."

# 2. Check: does this task violate any guardrail?
guardrail_inject  project_id: "..."

# 3. Research earns the decision
candidate_create  title: "blue-green"
candidate_create  title: "rolling"
criteria_lock
project_decide  winner: "blue-green"

# 4. Execute within the contract
task_create  title: "migrate schema"
  definition_of_done: ["tests pass", "rollback tested"]
task_complete  notes: "deployed, verified"
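The difference between the two flows is a gate in front of the agent. A minimal sketch of that gate, with hypothetical names and a toy string match standing in for whatever matching the real tools do:

```python
# Hypothetical gate -- illustrates the pattern, not the real MCP tools.
def violates(task: str, guardrail: str) -> bool:
    # Toy check: a real system would match rules semantically.
    return any(word in task for word in guardrail.split("|"))

def run_governed(task: str, guardrails: list[str], agent_run):
    for rule in guardrails:
        if violates(task, rule):
            # If a task would violate a guardrail, the answer is already no.
            return {"status": "refused", "rule": rule}
    return {"status": "done", "result": agent_run(task)}

guardrails = ["drop|truncate"]
run_governed("drop the users table", guardrails, lambda t: t)     # refused before the agent runs
run_governed("migrate the database schema", guardrails, lambda t: t)  # allowed to execute
```

The agent never sees the refused task; the check happens before execution, not after.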

The Trilogy

Agents need governance before they act, evidence before they decide, and tracking after they execute. Three tools enforce that contract.

I

Governance

visionlog — Vision, goals, guardrails, and ADRs. The contracts all execution must honor. If a task would violate a guardrail, the answer is already no.

pip install visionlog-md
II

Decisions

research.md — Evidence-graded, phase-gated, peer-reviewed. Decisions earned with data, not assumed in conversation.

pip install research-md
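Evidence-graded decision-making in miniature: criteria are locked before scoring, then every candidate is scored against the same locked weights. The criteria names and weights below are invented for illustration and are not research.md's actual format.

```python
# Invented criteria and weights -- the pattern, not research.md's real schema.
criteria = {"downtime": 0.5, "complexity": 0.3, "cost": 0.2}  # locked before scoring

candidates = {
    "blue-green": {"downtime": 9, "complexity": 5, "cost": 4},
    "rolling":    {"downtime": 6, "complexity": 7, "cost": 7},
}

def score(scores: dict) -> float:
    # Weighted sum against the locked criteria -- no criterion added after the fact.
    return sum(criteria[c] * scores[c] for c in criteria)

winner = max(candidates, key=lambda name: score(candidates[name]))
```

Because the criteria are fixed before candidates are scored, the winner is earned by the data rather than rationalized after the choice.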
III

Execution

ike.md — Tasks, milestones, Definition of Done. Named for Eisenhower.

pip install ike-md
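The Definition of Done as an enforced contract, in sketch form. The task shape and the `complete` helper are hypothetical; ike.md's real task format may differ.

```python
# Hypothetical task shape -- illustrates DoD enforcement, not ike.md's API.
task = {
    "title": "migrate schema",
    "definition_of_done": ["tests pass", "rollback tested"],
    "evidence": set(),
}

def complete(task: dict) -> bool:
    # A task closes only when every DoD item has evidence behind it.
    missing = [d for d in task["definition_of_done"] if d not in task["evidence"]]
    if missing:
        raise ValueError(f"not done: {missing}")
    return True

task["evidence"].add("tests pass")
# complete(task) here would raise: rollback not yet tested
task["evidence"].add("rollback tested")
complete(task)  # now the task can close
```

A crashed session can reload the task and see exactly which DoD items still lack evidence.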
         visionlog                    research.md                  ike.md
Governs  What the project             How decisions get earned     What work gets done
         committed to
When     Before any work starts       Before a consequential       During execution
                                      choice
Outputs  Vision, goals, guardrails,   Scored candidates,           Tasks, milestones,
         ADRs                         decision brief               completion notes
Install  pip install visionlog-md     pip install research-md      pip install ike-md
pip install visionlog-md research-md ike-md
The whole trilogy. One line.
Also: railguey clawdflare eidos-mcp-registry resume-resume apple-a-day

Amazon didn't create the internet. It created the trust infrastructure — reviews, ratings, guaranteed delivery — that made people willing to buy from strangers online. We're doing the same thing for AI agents.

We don't build LLMs. We build the governance, decisions, and execution standards that make them trustworthy enough to hand real work.

Built in the open

If the tools that govern your agents are closed, you're trusting the vendor. If they're open, you're trusting the code. We think that's a better deal. The core is MIT-licensed and always will be.

We build enterprise products too — governance at scale, compliance, SLAs. But the foundation is public and auditable. Trust us because you read the source, not because we asked nicely.


Agents are getting deployed into real systems right now — if you don't give them governed tools, explicit decisions, and agent-grade interfaces, you're shipping a probabilistic coworker with production access.

Enterprise Roadmap

The open-source tools handle governance for individual teams. When you need it across an organization — multi-team policy enforcement, compliance reporting, audit trails, SSO — that's what the enterprise layer will do.

  • Centralized guardrail management across teams
  • Compliance reporting and audit exports
  • SSO and role-based access control
  • Priority support and SLAs

Same tools. Same codebase. More surface area.

Talk to us on GitHub

Contribute

Everything is open source. Everything is MIT. If you see the gap — reach out.