I'VE FOCUSED ON LEARNING HOW TO USE AI
TO EXTEND MY CREATIVITY.
Pick a prompt. Hit Send. See what Claude returns when it already knows your projects, your decisions, and how you work.
[Client A]'s last sprint is ready to bill, €X,XXX from your day-rate formula, saved as [Client A]-2026-04.docx. [Client B]'s invoice #47 is 18 days overdue, chaser drafted in your Gmail drafts, your usual tone. Your accountant's Q4 ask: Books.xlsx is filled.
One thing I didn't touch: the [Client C] contract renewal. The scope changed; that one needs your read before I draft anything.
Every demo above is Claude Code running inside four user-authored layers. Miss one and the system collapses back to stateless chat. Every business needs all four, tuned to how that business actually works.
A plain-text file Claude loads into every session. Principles, routing, what to file where, what to check first. It's where the durable shape of your work lives. It scopes from global to project, and when a topic outgrows a few lines it breaks out into its own file under .claude/rules/: same always-on behavior, one chapter per file, path-scoped so it doesn't clutter unrelated work.
Applies to every session on your machine. Personal style, safety defaults, cross-project preferences.
Never use --no-verify to skip hooks. Ask before destructive ops (rm -rf, force push, dependency removal). Default to the naive option. No em dashes in prose or commits.
Applies inside a specific project. Tech stack, conventions, where narrative vs implementation live.
Tailwind only. No custom CSS files. Copy lives in Cortex. Don't rewrite pitch copy here; edit the spec. Budget: 300KB gzipped JS.
// from pushback.md
Verdict first, then reasoning. "That won't scale because Y." Not "Interesting, but have you considered..."
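A broken-out rule file can be as small as a scope line and a few imperatives. A hypothetical sketch; the frontmatter field name and the glob patterns here are illustrative, not a prescribed schema:

```markdown
<!-- .claude/rules/pushback.md (illustrative) -->
---
paths: ["**/*.md", "specs/**"]   # only arms when matching files are in play
---
# Pushback
Verdict first, then reasoning.
Name the failure mode: "That won't scale because Y."
Never open with "Interesting, but have you considered..."
```

The point of the breakout is hygiene: the always-on CLAUDE.md stays short, and the chapter-length material loads only where its paths apply.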
Consequence: you stop re-explaining your standards every session. The next one starts already knowing them. Consistent behavior across projects, across topic shifts, across a year of work.
Reusable workflows in .claude/skills/. Each has a "Use when X" description. Claude matches your prompt against those descriptions and loads only what fires. Roughly an order of magnitude cheaper in context than equivalent MCP tooling, because nothing loads until it triggers.
Consequence: procedures stop needing re-derivation. The right steps run the right way, every time the trigger fires.
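A skill is a folder with a description Claude can match against. A minimal sketch, assuming the common SKILL.md frontmatter shape; the skill name and the steps are invented for illustration:

```markdown
<!-- .claude/skills/invoice-chaser/SKILL.md (illustrative) -->
---
name: invoice-chaser
description: Use when an invoice is overdue and a reminder email needs drafting.
---
1. Look up the invoice number and days overdue in Books.xlsx.
2. Draft the chaser in the house tone; firmer wording past 30 days.
3. Save to Gmail drafts. Never send without confirmation.
```

The description line is the whole trigger mechanism: it is the only part Claude reads before deciding whether to load the rest.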
External systems Claude calls through. Each MCP server exposes typed tools Claude invokes inside the conversation. Gmail to draft mail. Calendar to schedule. Obsidian to query your vault. Chrome DevTools to drive a real browser.
// actual call Claude makes, mid-conversation
search_vault({
  queryType: "dataview",
  query: "TABLE status FROM 'Projects'
          WHERE type = 'task' AND priority = 'A'
          AND !completed"
})

Consequence: Claude acts on your systems, not just talks about them. The operation is the conversation.
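Wiring a server in is a one-time registration. A sketch of a project-scoped .mcp.json; the package name and vault path are placeholders, not a specific real server:

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "obsidian-mcp-server"],
      "env": { "OBSIDIAN_VAULT_PATH": "/path/to/vault" }
    }
  }
}
```

Once registered, the server's typed tools (like search_vault above) simply appear in the conversation; nothing else changes in how you prompt.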
Sub-agents with isolated context. Main Claude delegates; results come back without polluting the primary conversation. Each agent has its own tool whitelist and its own model tier. You can spawn several in parallel.
Consequence: parallelizable work runs parallel. Long searches don't burn your context budget. Fresh-head reviews actually come from a fresh head.
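An agent definition is another small markdown file. A sketch assuming the .claude/agents/ frontmatter shape; the tool whitelist and model choice are illustrative:

```markdown
<!-- .claude/agents/bulk-mechanical.md (illustrative) -->
---
name: bulk-mechanical
description: Summarize long threads or logs without touching the main context.
tools: Read, Grep, Glob   # read-only: no write access, no shell
model: haiku              # cheap tier for mechanical work
---
Summarize what you're given. Return bullets, no commentary.
```

The whitelist is the safety boundary and the model tier is the cost boundary; a mechanical summarizer needs neither write access nor a frontier model.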
One morning request. "What's my day?" Here's what fires, in order, without you naming any of it.
CLAUDE.md is already loaded. "Morning briefing" matches the session-start skill's trigger description.
The skill fires. It queries the Obsidian MCP for the Currently sections of each active project and the Gmail MCP for unread threads.
A path-scoped rule from .claude/rules/journaling.md arms: any file modified in this session owes a journal entry. A bulk-mechanical agent spawns in parallel to summarize three long Gmail threads without touching the main context.
Reply drafted. One new decision note logged. One journal entry started.
Structured knowledge is the byproduct. Every prompt leaves the vault a little more organized than it found it. A decision filed. A task opened. A journal landed. That output is what the next session reads. The system compounds while you work.
I didn't open anything to get those answers. I asked once and the system went and looked.
Every session repeats the pattern. A decision gets written down. A note gets linked. An old assumption gets corrected.
The next session starts from there. More to read, sharper answers, more worth writing down.
Six months in, Claude and I both have access to [N] cross-session decisions, [N] journal entries, [N] typed knowledge notes, and [N] skills, all created in the course of working, none of it filed by hand.
The knowledge didn't decay. It compounded.
You could build this. Everyone who sees it can see how it works.
But you'd spend a year figuring out what to keep. I already did the year. The audit is me handing you only the parts that matter.
You don't have to build this yourself. I've already built mine. I can build yours to fit your work.