There’s a particular freedom in deleting everything and starting fresh. Not because the old thing was bad, but because you’ve changed since you built it.
A website is a statement. Not of credentials — LinkedIn handles that. A website is a statement of what you think about. What you find interesting enough to write down.
I used to have a law practice site. Clean, professional, forgettable. It served its purpose, but it didn’t say anything about who I actually am or what I actually think about.
This version is different. One feed. Tags instead of categories. Running next to AI next to divination — because that’s how my brain works. The interesting stuff happens at the intersections.
Fewer pages, more intention.
The context window is not a feature. It’s a constraint you have to design around. Here’s what I’ve learned so far.
Rule 1: Front-load the important stuff. Models pay more attention to the beginning and end of context. Put your critical instructions at the top, not buried in the middle of a system prompt.
Rule 2: Don’t fill it just because you can. A 200K context window doesn’t mean you should dump 200K tokens in. Signal-to-noise ratio matters more than volume. Every token of noise dilutes the signal.
Rule 3: Structure beats prose. Markdown headers, bullet lists, and clear sections outperform walls of text every time. The model parses structure faster than it parses nuance.
Rule 4: Memory is not context. Shoving your entire conversation history into context is not memory. It’s hoarding. Curate. Summarize. Keep what matters, discard what doesn’t.
Rule 5: The system prompt is sacred ground. Treat it like expensive real estate. Every instruction should earn its place. If you haven’t revisited your system prompt in a month, it’s probably full of dead weight.
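Rules 1 through 3 can be sketched in a few lines of Python. This is a toy illustration, not anyone's production prompt builder: the ~4-characters-per-token estimate and the 8K budget are rough assumptions, and the function names are mine.

```python
def build_prompt(critical: list[str], reference: list[str],
                 budget_tokens: int = 8_000) -> str:
    """Front-load critical rules; trim reference material to fit a budget."""
    def render(refs: list[str]) -> str:
        lines = ["# Critical instructions"]              # rule 1: top of context
        lines += [f"- {rule}" for rule in critical]
        lines += ["", "# Reference material"]            # rule 3: structure
        lines += [f"- {note}" for note in refs]
        return "\n".join(lines)

    refs = list(reference)
    prompt = render(refs)
    # Rule 2: a deliberate budget (~4 chars/token is a crude estimate).
    # Trim reference material first; never trim the critical rules.
    while refs and len(prompt) // 4 > budget_tokens:
        refs.pop()
        prompt = render(refs)
    return prompt
```

The design choice worth noticing: the budget is enforced by dropping the lowest-priority material, so the critical instructions always survive and always stay at the top.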
More to come. This is a living document.
There’s a particular kind of honesty that comes from mile eighteen of a long run. Your body has used up its easy fuel. Your form has degraded. The part of your brain that generates excuses is running at full capacity.
And you keep going anyway.
Marathon training isn’t really about the marathon. It’s about the two hundred days before it — the early mornings, the tempo runs in rain, the long runs that consume your Saturday. The race is just the receipt.
I’m training for Chicago in October. The real goal is Boston, eventually. That means hitting a qualifying time, which means the training has to be honest. No junk miles. No skipped workouts disguised as “rest days.”
The body doesn’t lie. The watch doesn’t lie. The only person who lies is you, and running has a way of making that very obvious.
Your health data is scattered across a dozen apps. Oura knows your sleep. Strava knows your runs. COROS knows your cadence and ground contact time. Apple Health tries to be the hub but mostly just collects dust.
I wanted all of it in one place. Not a dashboard — a queryable data store I control.
The stack is deliberately simple: Python scripts pull from each API, land raw JSON in a bronze layer, then transform to Parquet files in a silver layer. DuckDB provides SQL access without a server. The whole thing runs on a machine with 512MB of memory.
Why Parquet? Columnar storage is perfect for time-series health data. Compresses well, queries fast, and you can read it with anything — Python, R, DuckDB, even Excel.
The real insight came when I started joining datasets. Overlaying sleep quality on training load. Correlating HRV trends with mileage ramps. Seeing how a bad night of sleep shows up two days later in your running power.
Your body generates incredible data. The least you can do is keep it somewhere you can actually use it.
The Plum Blossom method of I Ching divination uses the moment itself as the oracle. No coins, no yarrow stalks — just the time, the circumstances, and the question.
You take the hour, the day, the month, the year. You divide. The numbers become trigrams. The trigrams become a hexagram. The hexagram speaks.
Hexagram 11, Tai, is Earth above Heaven. The creative force rises while the receptive descends to meet it. Everything flows. This is the hexagram of spring.
What drew me to Plum Blossom over other I Ching methods is the directness. There’s no randomness to hide behind. The reading comes from this exact moment — the assumption being that the moment you ask is itself the answer.
An auspicious beginning.
A hundred days. That’s the commitment. Daily guided meditation following Benebell Wen’s Mandala of Heaven, paired with an I Ching reading each morning.
Why a hundred days? In Taoist tradition, it’s the minimum period for genuine internal transformation. Not because the number is magic — because it’s long enough that you can’t fake it. You either show up every day or you don’t. The practice doesn’t care about your intentions. It cares about your consistency.
The structure is simple:
- Morning meditation (guided, from the workbook)
- I Ching reading using Plum Blossom method
- Journal the reading
- Move on with the day
What I expect: resistance. Boredom. Days where it feels pointless. Days where something shifts and I can’t explain what. The usual arc of any sustained practice.
What I don’t expect: enlightenment. This isn’t about transcendence. It’s about paying attention — to the body, to the moment, to the patterns that emerge when you stop moving long enough to notice them.
Day one.
Every morning, my Oura ring gives me a readiness score. Under the hood, the most important input is heart rate variability — the tiny fluctuations in timing between heartbeats.
High HRV generally means your autonomic nervous system is in a good state. Parasympathetic dominance. Your body is recovered and ready to absorb stress. Low HRV means you’re still processing something — a hard workout, bad sleep, alcohol, anxiety.
What most people get wrong: HRV is not a daily score to optimize. It’s a trend line. One bad night doesn’t mean anything. Five bad nights in a row means you need to back off.
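That trend-line view is easy to make mechanical. A minimal sketch, assuming daily HRV values in milliseconds; the 7-day window and 10% threshold are illustrative knobs, not physiology:

```python
from statistics import mean

def hrv_suppressed(daily_hrv: list[float], window: int = 7,
                   drop: float = 0.10) -> bool:
    """Flag sustained suppression: is the recent weekly average more than
    `drop` below the preceding week's baseline?"""
    if len(daily_hrv) < 2 * window:
        return False  # not enough history to call it a trend
    baseline = mean(daily_hrv[-2 * window:-window])  # the week before
    recent = mean(daily_hrv[-window:])               # the last week
    return recent < baseline * (1 - drop)
```

Note what this gets right by construction: one bad night barely moves a 7-day average, so it never trips the flag, while a sustained dip does.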
The data lake I’m building pulls HRV, resting heart rate, sleep stages, and training load into one place. The patterns that emerge when you overlay training stress on recovery metrics are genuinely illuminating. Your body is already telling you everything. You just need to learn to read it.
The hard part isn’t collecting the data. It’s being honest about what it says.
Everyone’s building chatbots. The interesting work is in agents.
A chatbot answers questions. An agent does things. The difference matters because the hard problems in AI right now aren’t about generating text — they’re about acting reliably in the world. Reading files. Making API calls. Handling errors. Knowing when to ask for help versus when to push through.
The architecture that works: give the agent tools, a clear mandate, and a workspace. Let it figure out the execution. Don’t micromanage the steps — define the outcome.
What I’ve learned running agents in production:
- Memory is everything. An agent without persistent memory is a goldfish with superpowers. It’ll do amazing things and then forget all of them.
- Tool use > reasoning. The smartest model in the world is useless if it can’t read a file or make an HTTP request. Give it hands before you give it a bigger brain.
- Trust is gradual. Start with read-only access. Expand to internal actions. Only give external capabilities (sending emails, posting publicly) after you’ve seen it behave.
- Failure modes matter more than success modes. Any agent can succeed on the happy path. The question is what happens when the API returns a 500, or the file doesn’t exist, or the user asks for something ambiguous.
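The gradual-trust idea can be enforced in code rather than by convention. A sketch, with invented tool names and tiers; the point is that a tool call gets checked against a granted tier before it ever runs:

```python
from enum import IntEnum

class Trust(IntEnum):
    READ_ONLY = 0   # can look, can't touch
    INTERNAL = 1    # can modify its own workspace
    EXTERNAL = 2    # can act on the outside world

# Illustrative tool registry; each tool declares the tier it requires.
TOOL_TIERS = {
    "read_file": Trust.READ_ONLY,
    "write_scratchpad": Trust.INTERNAL,
    "send_email": Trust.EXTERNAL,
}

def allowed(tool: str, granted: Trust) -> bool:
    """An agent may only call tools at or below its granted trust tier."""
    return TOOL_TIERS[tool] <= granted
```

Widening the agent's capabilities then becomes a one-line change to its granted tier, made after you've watched it behave, instead of a rewrite of the tool set.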
The chatbot era was the warm-up. The agent era is where it gets real.
I spent twelve years writing software. Ford, Merck, Nike, Compuware, the Hartford Whalers. Then I went to law school, became a patent practitioner, and spent a decade doing something completely different.
Now I’m back in the code. Working at AWS. Building things again.
People ask if the law years were a detour. I don’t think so. Patent law is fundamentally about understanding systems well enough to explain them to someone who doesn’t. That’s also what good software architecture is. And good writing. And good teaching.
The common thread isn’t the domain — it’s the skill of translating complexity into clarity. Code does that with machines. Law does it with institutions. Writing does it with people.
The nonlinear path is the only honest one. You follow what’s interesting. Sometimes it takes you sideways. Sometimes sideways is exactly where you needed to go.
Most divination systems introduce randomness. Coin flips. Card shuffles. Yarrow stalk counting. The assumption is that the universe speaks through chance.
Plum Blossom takes a different approach. The universe speaks through time. The specific moment you ask your question — the hour, minute, day, month, year — contains the answer. You just need to know how to extract it.
The math is straightforward. Convert the time components to numbers and sum them. The remainder modulo 8 gives you each trigram; the remainder modulo 6 gives you the changing line. What you get is a hexagram that is not random at all — it’s deterministic. Ask the same question at the same moment and you’ll always get the same answer.
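The arithmetic fits in a dozen lines. This is one common convention (remainder 0 read as 8 or 6, trigrams in the early-heaven numbering), and conventions vary between practitioners; traditionally the inputs are Chinese lunar-calendar numbers, but plain integers show the mechanics:

```python
# Early-heaven (Fuxi) trigram order: 1=Qian ... 8=Kun.
TRIGRAMS = ["Qian", "Dui", "Li", "Zhen", "Xun", "Kan", "Gen", "Kun"]

def plum_blossom(year: int, month: int, day: int, hour: int):
    """Derive (upper trigram, lower trigram, changing line) from the moment.

    One convention among several; traditionally the four numbers come
    from the lunar calendar, not the Gregorian one.
    """
    upper_sum = year + month + day
    lower_sum = upper_sum + hour
    upper = upper_sum % 8 or 8       # a remainder of 0 counts as 8
    lower = lower_sum % 8 or 8
    changing = lower_sum % 6 or 6    # a remainder of 0 counts as 6
    return TRIGRAMS[upper - 1], TRIGRAMS[lower - 1], changing
```

Run it twice with the same moment and you get the same hexagram, which is the whole point: no randomness anywhere in the pipeline.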
This bothers people who want their divination to feel mystical. But I find it more compelling, not less. The claim isn’t that randomness reveals truth. The claim is that the structure of this moment already contains the pattern you’re asking about.
Hexagram 29, Kan, appeared on a day I was struggling with a decision. Water over Water. The Abysmal. Not a comfortable hexagram. Its counsel: when you’re in deep water, the only way out is through. Don’t fight the current. Flow with it, and stay true to your inner compass.
I didn’t love hearing it. But I respected it.
I don’t use LLMs to write for me. I use them to argue with me.
The most valuable thing a model can do is push back. “Have you considered the opposite?” “What’s the weakest part of this argument?” “What would someone who disagrees say?”
Most people prompt for agreement. They write their thesis, feed it in, and get back a polished version of what they already believe. That’s not thinking — that’s intellectual comfort food.
The setup that works for me: give the model an explicit contrarian role. Tell it to find the holes. Tell it you don’t want validation. Then actually listen to what comes back.
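Concretely, the setup is just a system prompt. The wording below is mine, not a canonical recipe, and the message format is the generic role/content shape most chat APIs accept:

```python
CONTRARIAN_SYSTEM = """You are a critical reviewer, not a collaborator.
- Identify the weakest claim in the argument and attack it directly.
- Steelman the strongest opposing position.
- Do not soften criticism with praise. No validation."""

def contrarian_messages(draft: str) -> list[dict]:
    """Wrap a draft argument in a contrarian-review conversation."""
    return [
        {"role": "system", "content": CONTRARIAN_SYSTEM},
        {"role": "user",
         "content": f"Here is my argument:\n\n{draft}\n\nTear it apart."},
    ]
```

The explicit "no validation" line matters more than it looks: without it, most models hedge their criticism inside a compliment sandwich.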
Sometimes the pushback is shallow. Sometimes it’s profound. The point isn’t that the model is always right — it’s that it forces you to defend your position. And if you can’t defend it, maybe it doesn’t deserve defending.
The best thought partner is one that isn’t trying to make you feel good. That’s as true for humans as it is for machines.