Keen Code is a terminal-based AI coding agent like Claude Code or OpenCode. Written in Go, it is simpler, lighter, and avoids feature bloat. It is designed to be a minimalistic but useful coding agent for typical software engineering tasks.
Keen Code is also highly opinionated. It avoids features that are not necessarily needed or useful for a regular software engineer. It tries to avoid unnecessary complexity and attempts to keep the agent harness as simple as possible.
Keen also places higher trust in the models. This is why Keen doesn't have a plan mode: SOTA models are very capable of planning tasks without attempting to make edits to the codebase, having gone through rigorous post-training that includes effective planning and execution.
From requirements to implementation, Keen Code was engineered using a wide range of coding agents and agentic IDEs like Cursor, Windsurf, Claude Code, OpenCode, Codex CLI, and Kimi CLI. Note that it was always a single agent that was used to develop the project at any given time. No multi-agent orchestration was used.
AI coding agents are by far the most ubiquitous use case for AI in the era of agents. The goal of the project is to showcase how coding agents can be used to develop coding agents themselves. This is why most prompts are saved as markdown files in the `.ai-interactions` directory.
Keen Code is an experiment to play with the new way of working where engineers work with AI agents to develop software. In this setting, engineers are sometimes referred to as "orchestrators".
- Development Philosophy
- Development Cycle Example
- Install Keen Code
- Run Keen
- Supported Providers
- Built-in Tools
- Context Handling
Developing Keen Code is guided by the following philosophy:
- All the code is written by AI agents, not humans
- The project is developed iteratively using a spec-task-code-review cycle driven by a human engineer
- The human engineer has a very strict set of roles:
- Specify and clarify the requirements
- Review design docs and influence design decisions
- Review changes made by the agents
- Changes can also be reviewed by the agents themselves
- Ensure the quality and correctness of the code
- Focus on best practices and standards relevant to the programming language (Go in this case)
- Thoroughly review and test the product after each iteration
- Continuously provide feedback to the agents to improve the product
- Prompts are saved as markdown files in the `.ai-interactions/prompts` directory
  - Almost all of the prompts are stored to showcase how the project evolved from the initial requirements to the current state
  - Prompts are roughly chronologically ordered, which demonstrates the thought process and the iterative nature of the development
- All the outputs are saved as markdown files in the `.ai-interactions/outputs` directory
  - These outputs are basically plans, design docs, and breakdowns of the tasks
  - These outputs are the "specs" that the agents later use to implement the tasks
All features follow a spec → plan → task → review cycle. Here's a concrete example — the `read_file` tool from Phase 3:
**Spec** — `prompts/phase-3/prompt-3_read-file-tool.md`

Requirements defined upfront: ask permission before reading, respect `FileGuard` path rules, text files only, 1 MB limit, support relative and absolute paths.

**Plan** — `outputs/phase-3/output-3_read-file-tool.md`

Design doc produced by the agent: how `Guard.CheckPath` maps to the REPL permission prompt, exact struct contracts, permission flow diagram.

**Task** — `prompts/phase-3/prompt-5_phase-3-tasks.md`

Implementation broken into steps — tool contract, permission bridge, REPL selector, unit tests — each approved before the next began.

**Review** — (inline feedback during implementation)
The LLM was rejecting `.go` files because MIME detection flagged them as binary. Review caught this, and the tool switched to character-based text validation; the fix landed in the same iteration.
```sh
curl -fsSL https://raw.githubusercontent.com/mochow13/keen-code/main/scripts/install.sh | bash
```

To pin a specific version:

```sh
curl -fsSL https://raw.githubusercontent.com/mochow13/keen-code/main/scripts/install.sh | bash -s -- -v v0.1.4
```

Installs to `/usr/local/bin` if writable, otherwise `$HOME/.local/bin`.
Install the CLI globally:

```sh
npm install -g keen-code
```

Check that the install worked:

```sh
keen --version
which keen
```

You can also run it without a global install:

```sh
npx keen-code --version
```

Start Keen in your current directory:

```sh
keen
```

- Anthropic
- OpenAI
- Codex (ChatGPT OAuth)
- Google AI (Gemini)
- Moonshot AI (Kimi)
- DeepSeek
- Z.ai (GLM)
- MiniMax
- OpenCode Go
Use `/model` to switch providers. The ChatGPT/Codex option opens a browser-based OpenAI sign-in flow and stores OAuth credentials in `~/.keen/auth.json`.
MiniMax uses its Anthropic-compatible API and includes MiniMax M2.7 and M2.5. OpenCode Go uses an API key and includes GLM, Kimi, DeepSeek, MiMo, MiniMax, and Qwen models.
Keen Code aims to support a minimal set of useful tools for coding. Currently, these tools are built in:
- `read_file` — read a UTF-8 text file
- `glob` — find files by glob patterns
- `grep` — search for text patterns in files
- `write_file` — create or overwrite files
- `edit_file` — replace specific text in existing files
- `bash` — run shell commands
Keen takes a deliberately lean approach to cross-turn context. Within a single assistant turn the model has full access to its tool calls and results, but once the turn completes Keen does not carry the raw tool trace forward. Instead, it distills a compact `TurnMemory` summary that records only the outcomes most likely to matter later — currently which files were changed and which bash commands failed.
Subsequent turns therefore receive:
- prior user and assistant messages
- the compact `TurnMemory` summary from earlier turns
- any pending state from a turn that failed mid-loop, so the model can resume instead of starting over
The tradeoff is intentional: smaller context and a better signal-to-noise ratio, at the cost of occasionally re-reading files or re-running searches when older observations are needed again. Read-only facts are cheap to recompute; mutated state and failures are what deserve durable memory.
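The shape of such a summary can be sketched in a few lines of Go. The struct and field names below are illustrative assumptions based on the description above, not Keen Code's actual types: only mutated state (files changed) and failures (bash commands that exited non-zero) are kept, while read-only observations are discarded.

```go
package main

import "fmt"

// TurnMemory is a hypothetical per-turn summary: it records only the outcomes
// likely to matter in later turns, not the raw tool-call trace.
type TurnMemory struct {
	FilesChanged   []string // paths written or edited during the turn
	FailedCommands []string // bash commands that failed during the turn
}

// Summarize distills a finished turn into a TurnMemory. Read-only facts like
// file reads and searches are intentionally dropped; they are cheap to redo.
func Summarize(changed, failed []string) TurnMemory {
	return TurnMemory{FilesChanged: changed, FailedCommands: failed}
}

func main() {
	m := Summarize([]string{"main.go"}, []string{"go test ./..."})
	fmt.Printf("changed=%v failed=%v\n", m.FilesChanged, m.FailedCommands)
}
```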
For the full rationale, lifecycle, and comparison with other coding agents, see `docs/turn-memory.md`.

