Wisdom is not stored as a SKILL.md. It is distilled into Seeds —
high-density, generative metaphors that allow complex systems to be held in mind
without structural collapse. Unlike instructions, seeds are not consumed — they grow.
Traditional documentation — long SKILL files, instruction chains, rule lists — suffers from linear decay. As the context window fills, the spirit of the instruction dissolves into the letter of the text. Detailed manuals are low-density: massive token cost, marginal reasoning ROI.
The human mind does not store wisdom as bullet points. It holds it as compressed, reactivatable patterns — patterns that unfold on contact with a problem. Seeds mirror this architecture exactly.
A valid Wisdom Seed is not an aphorism. It is a functional reasoning tool. Every seed must pass four invariants before entry into the registry.
1. Under 12 words. If it cannot be compressed, it is documentation, not a seed.
2. It must unfold differently across domains: code, strategy, conversation, design.
3. It must have a clear failure state. If the seed is ignored, something specific breaks.
4. An LLM should be able to expand it into a full reasoning chain without further prompting.
The vault is append-only. Seeds are never revised — only superseded by new seeds that contain them.
Alignment Verification — ensure internal model matches external reality before execution.
Applies to: API integration, debugging, argument construction, any cross-system handoff.
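In a cross-system handoff, the seed unfolds as a preflight check: verify that the payload you built matches the contract you believe the other side expects, and fail before execution, not after. A sketch with hypothetical names (`verify_alignment` and the field names are illustrative):

```python
def verify_alignment(payload: dict, expected_fields: dict[str, type]) -> None:
    """Fail fast if our internal model of the contract diverges from reality."""
    for field_name, field_type in expected_fields.items():
        if field_name not in payload:
            raise ValueError(f"model mismatch: missing field {field_name!r}")
        if not isinstance(payload[field_name], field_type):
            raise TypeError(
                f"model mismatch: {field_name!r} is "
                f"{type(payload[field_name]).__name__}, expected {field_type.__name__}"
            )

# verify before execution, not after the handoff fails downstream
payload = {"user_id": 42, "action": "archive"}
verify_alignment(payload, {"user_id": int, "action": str})
```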
Precedence Recognition — visible effects imply prior causes. Always trace upstream.
Applies to: diagnosing system states, hallucination patterns, cascading failures, hidden debt.
Process/Output Distinction — code is the shadow of logic. Never mistake the map for the territory.
Applies to: code review, evaluating AI output, architectural decisions, research interpretation.
Ownership Analysis — identify the single source of truth to locate the point of failure.
Applies to: system design, data modeling, conflict resolution, trust modeling across services.
Constraint Grounding — define invariants and limitations before optimizing for potential.
Applies to: security architecture, feature scoping, any system where safety bounds matter first.
Iteration Priority — execution reveals real constraints that abstraction never will.
Applies to: paralysis by analysis, early product design, any unknown-unknown territory.
Identity Coherence — return only what still stands when everything uncertain has been removed.
Applies to: LLM system prompts, high-stakes reasoning, adversarial inputs, epistemic stress tests.
Signal Selection — filter noise by anchoring to what cannot change, not what seems to change.
Applies to: prompt design, system audits, any domain where signal-to-noise ratio is low.
Mutual Verification — operate only on understanding confirmed by both parties. Never proceed on assumed alignment.
Applies to: cross-system handoffs, team communication, human-AI instruction, API contracts.
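One way Mutual Verification unfolds in code is an echo-back handshake: the receiver restates its model of the task, and work starts only if both models match key for key. The function and field names below are illustrative assumptions, not an established protocol:

```python
def handshake(sender_intent: dict, receiver_understanding: dict) -> dict:
    """Proceed only on understanding confirmed by both parties;
    otherwise name the divergence instead of assuming alignment."""
    diverging = {
        k for k in sender_intent.keys() | receiver_understanding.keys()
        if sender_intent.get(k) != receiver_understanding.get(k)
    }
    if diverging:
        raise RuntimeError(f"unconfirmed alignment on: {sorted(diverging)}")
    return sender_intent  # the confirmed, shared model of the task

# the receiver silently dropped the safety flag; the handshake catches it
intent = {"task": "migrate table", "dry_run": True}
understood = {"task": "migrate table", "dry_run": False}
```

The failure state the seed predicts is exactly the `dry_run` example: assumed alignment would have run a live migration.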
Intent Grounding — do not execute until the intent is unambiguous. Rework is the tax on unclear starts.
Applies to: any task initiation, prompt design, project kickoff, code before spec.
Comprehension Pacing — pace is set by comprehension, not capability. Outrunning your model is how systems drift.
Applies to: LLM agentic workflows, onboarding, complex debugging, any high-stakes execution chain.
Hidden State Detection — every unverified assumption creates a divergent path invisible to both parties.
Applies to: collaborative reasoning, multi-agent systems, anywhere two models of reality must stay synced.
Calibration — certainty is a function of evidence, not comfort. Overconfidence is epistemic debt accumulating silently.
Applies to: any claim-making, LLM output review, research synthesis, high-stakes decisions.
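The Calibration seed can be made operational with any rule that ties stated confidence to counted evidence. A deliberately simple sketch (a Laplace-smoothed evidence ratio; the function name and the choice of smoothing are assumptions, not the only option):

```python
def calibrated_confidence(supporting: int, contradicting: int) -> float:
    """Certainty as a function of evidence, not comfort:
    no evidence yields 0.5, and confidence climbs only as support accumulates."""
    return (1 + supporting) / (2 + supporting + contradicting)
```

The point is not the formula but the discipline: a claim with zero evidence cannot be stated at 0.9 confidence without visibly going into epistemic debt.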
Complexity Justification — every layer of complexity must justify its existence with value delivered. Weight that earns nothing gets cut.
Applies to: architecture reviews, feature decisions, prompt design, any system growing beyond its purpose.
Output Discipline — if a claim wouldn't hold under scrutiny, don't ship it. The pressure test happens before the output.
Applies to: LLM generation, writing, argument construction, any high-stakes communication.
Epistemic Compression — real clarity isn't simplification. It is truth compressed without distortion. The form follows the invariant.
Applies to: prompt engineering, teaching, documentation, any domain where precision and brevity must coexist.
Self-Scrutiny Gate — output must pass internal review before release. The model is both author and auditor.
Applies to: any generation task, final answers, code, communication — anywhere "done" and "correct" are not the same thing.
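The Self-Scrutiny Gate is a generate-audit loop: the same system produces a draft and then critiques it, and nothing ships until the critique comes back empty. A minimal sketch assuming caller-supplied `generate` and `audit` callables (both hypothetical):

```python
def release(generate, audit, max_rounds: int = 3) -> str:
    """The model is both author and auditor:
    a draft ships only after passing its own review."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)   # author pass, revised against prior feedback
        feedback = audit(draft)      # auditor pass on the same output
        if not feedback:
            return draft             # "done" and "correct" now coincide
    raise RuntimeError("no draft survived internal review")
```

Capping the rounds matters: a gate with no bound quietly becomes an infinite loop when the auditor can never be satisfied.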
Knowledge Propagation — information hoarded dies with its holder. Information shared compounds across the system. Closed loops starve; open ones grow.
Applies to: documentation, team communication, AI context-passing, any system where one node holds truth that others need.
Seeds act as active filters. Drop a seed into a problem space and observe how it unfolds. It reduces cognitive load by providing pre-built mental geometry — you don't think from scratch, you think from structure.
Inject seeds as heuristic activators. Instead of 2,000 words of documentation, a seed block reshapes how the model processes every subsequent token — an OS update, not a sticky note.
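Injection can be as plain as prepending a seed block to the system prompt. A sketch under stated assumptions: `with_seed_block` is a hypothetical helper, and the header wording is illustrative; the seed texts are quoted from the registry above.

```python
SEEDS = [
    "Alignment Verification — ensure internal model matches external reality before execution.",
    "Intent Grounding — do not execute until the intent is unambiguous.",
    "Output Discipline — if a claim wouldn't hold under scrutiny, don't ship it.",
]

def with_seed_block(system_prompt: str, seeds: list[str] = SEEDS) -> str:
    """Prepend a compact seed block that conditions every subsequent token,
    instead of pasting 2,000 words of documentation into the context."""
    block = (
        "Reason through these seeds before answering:\n"
        + "\n".join(f"- {s}" for s in seeds)
    )
    return f"{block}\n\n{system_prompt}"
```

A few hundred tokens up front reshape the processing of everything that follows, which is the "OS update, not a sticky note" claim in executable form.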
Use seeds as shorthand for systemic failures. "This PR violates the floor/ceiling seed" communicates a full architectural critique in six words. Shared vocabulary, shared reasoning.
Align teams on the vibe of a solution before writing the first line of code. Seeds provide a common epistemic frame that survives disagreement about implementation details.