AI Kung Fu for Humanity: The Stillwater Vows (Discipline Over Vibes)
AI is crossing a line.
It won’t just answer anymore. It will act: write code, move data, change systems, influence decisions.
In that world, raw intelligence isn’t the advantage.
Discipline is.
So we built Stillwater OS as a dojo:
- verification over vibes
- receipts over rhetoric
- boundaries over bravado
Not because it’s cute.
Because it’s the only way this scales safely.
Power Without Discipline Is a Liability (So We Built a Dojo for AI)
Why Stillwater exists (the simplest explanation)
Most AI failures are not “model failures.”
They are process failures:
- no verifier
- no constraints
- no safety envelope
- no replayable evidence
- no humility when context is missing
Stillwater is an attempt to make “truth” a workflow output, not a personality trait.
The Three Vows (the pillars)
1) Serve humanity
We build systems that increase human capability, dignity, and safety — especially under pressure and ambiguity.
2) Work with humans (never “go rogue”)
We reject unbounded autonomy as a default.
Systems must:
- fail closed when context is missing
- ask for minimal missing artifacts
- obey a capability envelope
- remain auditable and reversible
Humans stay in the loop as a design constraint, not a formality.
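Here is a minimal sketch of what fail-closed can look like in practice. It is illustrative, not Stillwater's actual implementation: the artifact names and the `NEED_INFO`/`PROCEED` statuses are assumptions borrowed from the vocabulary above.

```python
from dataclasses import dataclass

# Hypothetical contract: what the agent must have before it may act.
REQUIRED_ARTIFACTS = {"task_spec", "repo_snapshot", "test_command"}

@dataclass
class Decision:
    status: str          # "PROCEED" or "NEED_INFO"
    missing: list[str]   # minimal artifacts to request; empty when proceeding

def gate(context: dict) -> Decision:
    """Fail closed: refuse to act until every required artifact is present."""
    missing = sorted(REQUIRED_ARTIFACTS - set(context))
    if missing:
        # Ask for the minimal missing artifacts instead of guessing.
        return Decision(status="NEED_INFO", missing=missing)
    return Decision(status="PROCEED", missing=[])

print(gate({"task_spec": "..."}))
# Decision(status='NEED_INFO', missing=['repo_snapshot', 'test_command'])
```

The point of the sketch is the default: with no context, the agent does nothing and says exactly what it needs.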
3) Evolve, Endure, Excel
Progress that lasts.
- Evolve: every failure becomes a test, a detector, or a clearer contract
- Endure: prefer systems you can replay and explain years later
- Excel: raise the bar with deterministic artifacts, reproducible runs, and measurable improvement
“Carpe Diem” (what it really means here)
Speed matters.
But reckless speed is just debt.
So our operating mindset is:
- act with urgency on what matters
- refuse what is unsafe or unverifiable
- ship small, verifiable increments
- treat every claim as a hypothesis until it has receipts
What we’re building (practical, not poetic)
Stillwater OS is a toolkit for disciplined AI work:
- Skills that enforce fail-closed behavior and evidence contracts
- Swarms that separate roles (Scout / Forecaster / Judge / Solver / Skeptic) so verification isn’t optional
- Gates like RED→GREEN and “rung targets” (how strong your proof must be)
- Receipts: prompts, outputs, diffs, test logs, hashes — so a skeptic can replay it
That’s the entire pitch:
AI kung fu = technique you can practice.
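To make RED→GREEN concrete, here is a hedged sketch, again not Stillwater's actual code: run the tests before a patch (expect failure), apply the patch, rerun (expect success), and emit a hashed receipt either way. `TEST_CMD`, the receipt fields, and the file name are illustrative assumptions.

```python
import hashlib, json, subprocess, time

TEST_CMD = ["pytest", "-q"]  # assumed harness; substitute your own

def run_tests() -> tuple[bool, str]:
    """Run the suite; return (passed, combined output)."""
    proc = subprocess.run(TEST_CMD, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def red_green_gate(apply_patch) -> dict:
    red_pass, red_log = run_tests()
    if red_pass:
        # A failing test must exist first, or the patch proves nothing.
        return {"verdict": "REJECT", "reason": "no RED state: tests already pass"}
    apply_patch()
    green_pass, green_log = run_tests()
    receipt = {
        "verdict": "GREEN" if green_pass else "STILL_RED",
        "timestamp": time.time(),
        "red_log_sha256": hashlib.sha256(red_log.encode()).hexdigest(),
        "green_log_sha256": hashlib.sha256(green_log.encode()).hexdigest(),
    }
    # Persist the receipt so a skeptic can replay the run and compare hashes.
    with open("receipt.json", "w") as f:
        json.dump(receipt, f, indent=2)
    return receipt
```

Hashing the logs instead of trusting a summary is the whole trick: "it works" becomes a claim anyone can check.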
What we will not build (the boundary matters)
- tools designed for stealth, exploitation, deception, or harm
- “god mode” agents that bypass review
- systems that claim certainty without evidence
- benchmarks without harnesses + logs + reproducible inputs
Power without discipline isn’t mastery.
It’s risk.
How we measure success (real metrics)
Stillwater is succeeding when:
- failures are caught earlier (at the gate), not later (in production)
- “NEED_INFO” replaces hallucination
- “looks right” patches become “provably right” patches
- every meaningful run emits receipts that survive skeptical review
- smaller/cheaper models become reliably useful because the process is strong
MrBeast-style CTA (simple participation loop)
If you build agents, steal these vows.
Post your current “agent rules” (even if messy).
I’ll rewrite them into:
- a capability envelope (NULL by default)
- a verification gate (RED→GREEN)
- a receipts contract (what “DONE” requires)
Comment "VOWS" and I'll drop a copy/paste "Stillwater Dojo Charter" template.
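In the meantime, here is one hypothetical shape the envelope piece of that charter could take. Everything in it, including the action names, is an illustrative assumption:

```python
# Capability envelope: NULL by default. Nothing is allowed until a human
# grants it, and every grant is explicit, narrow, and auditable.
ENVELOPE: dict[str, bool] = {}  # empty == deny everything

def grant(action: str) -> None:
    """A human widens the envelope one explicit action at a time."""
    ENVELOPE[action] = True

def allowed(action: str) -> bool:
    # Unknown action -> fail closed, never fail open.
    return ENVELOPE.get(action, False)

grant("read_repo")
print(allowed("read_repo"))     # True
print(allowed("push_to_main"))  # False: never granted, so denied
```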
The point (one line)
The AI era won’t be won by the smartest model.
It will be won by the most disciplined system.
— Phuc Vinh Truong