KBS: Nesting and Ideation
Today's changes follow on from the previous post's "next steps": nesting and ideation.
Nesting
I worked with the agent to find code with deep nesting, decide how to flatten it, and generalize the approach into a suggested nesting principle. We then applied that principle across the codebase, and I think we were successful: the diff in the refactor commit is noticeably flatter.
4. Nesting Depth
Keep expressions nested at most two or three levels deep. Deeply nested code is hard to follow because the reader must hold every enclosing context in their head at once. When nesting grows, treat it as a signal that the code should be restructured.
Approaches for reducing nesting:
- Use monadic result operators (let*, let+). ...
- Extract resource-management wrappers. ...
- Consolidate error mapping into named functions. ...
- Factor repeated control-flow shapes into helpers. ...
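The first approach is the workhorse in result-heavy code. Here is a minimal sketch of the before-and-after shape, using hypothetical stages (`parse`, `validate`, `store`) that are not from the actual codebase:

```ocaml
(* Bind a result into the next stage; this is what let* desugars to. *)
let ( let* ) = Result.bind

type record = { id : int }

(* Hypothetical pipeline stages; each may fail with a string error. *)
let parse s =
  match int_of_string_opt s with
  | Some id -> Ok { id }
  | None -> Error "parse"

let validate r = if r.id > 0 then Ok r else Error "validate"

let store r = Ok r.id

(* Nested version: three levels deep, every enclosing match must be
   held in the reader's head at once. *)
let process_nested s =
  match parse s with
  | Error e -> Error e
  | Ok r ->
    (match validate r with
     | Error e -> Error e
     | Ok r ->
       (match store r with
        | Error e -> Error e
        | Ok id -> Ok id))

(* Flattened with let*: the happy path reads top to bottom, and the
   first Error short-circuits the rest. *)
let process s =
  let* r = parse s in
  let* r = validate r in
  store r
```

Both versions compute the same result; the flat one stays at a single indentation level no matter how many stages are added.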
Ideation
To start, I had the agent analyze what we'd already built, as well as the mini roadmap in the README, to produce an initial draft. I worked through the sections of the doc, adding detail and resolving a number of open questions it left for me. The resulting document captures a lot of detail in the form of use cases, along with a good summary of the project:
Knowledge Bases (bs) is a CLI tool for creating and managing structured knowledge bases suited for coding agents. It maintains a dual-format store — a SQLite database for fast runtime queries, and a JSONL text file for git-friendly diffing and merging — at the root of a git repository, so that the knowledge base travels with the code and can be distributed, versioned, and merged through ordinary git workflows.
The core problem: coding agents working on a codebase need a place to externalize their work — TODOs they've identified, research they've conducted, decisions they've made — outside of their limited context window. bs provides that externalized memory as a lightweight, structured store that lives alongside the code.
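The git-friendliness of the JSONL half comes from keeping exactly one record per line, so diffs and merges operate on whole records. A minimal sketch, assuming a hypothetical entry shape (id, kind, body) that the post does not specify:

```ocaml
(* Hypothetical knowledge-base entry; the real schema is not
   described in the post. *)
type entry = { id : int; kind : string; body : string }

(* Serialize one entry as a single JSON object on its own line.
   %S uses OCaml string-literal escaping, which coincides with JSON
   escaping for plain ASCII content. *)
let to_jsonl_line e =
  Printf.sprintf {|{"id": %d, "kind": %S, "body": %S}|} e.id e.kind e.body

(* A store snapshot is then just entries joined by newlines. *)
let to_jsonl entries =
  String.concat "\n" (List.map to_jsonl_line entries)
```

With this layout, two branches that each append an entry conflict only on adjacent lines, which ordinary git merge machinery handles well.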
The document still has a couple of open questions because I couldn't be bothered to answer them yet. Whether the JSONL file is a snapshot or a journal, and when to flush the SQLite db to the JSONL file, are the kinds of decisions I'd rather defer until I can see where the code is at. So I am doing the same thing here and leaving them as open questions.
Overall, I think what we've produced should provide a good test of whether agents can effectively peel off self-contained features from a larger document that is mostly ideas and requirements.