Carryover: Portable state handoff for LLMs
I built a portable work-state CLI tool to easily switch between LLMs
Product builder at the intersection of LLMs, AI, and fintech. Focused on building intelligent systems that scale responsibly and safely.
Let's take a local manifold walk to understand why small models are sometimes better: more resource-responsible, and they won't burn a hole in your pocket.
I'm hitting rate limits every hour or so using Claude Code. I analyzed one session: < 3% of my AI bill went to intelligence. The rest? Regurgitating stale, bloated context.
I was fixing a bug in my newborn's app and ChatGPT had a complete brain-body disconnect: it kept reasoning 'don't do the thing' → did it → 'oops, I did it' → redo → 'oops, I did it again'
Demos are easy. Production is hard. If you want internal knowledge search that actually works, you're not 'adding RAG'—you're building a pipeline where the LLM is the easiest part.
From Evals 101 to production-grade systems.
Product lessons: atomic desires, two-way doors, and putting future complexity in the backseat
A command-line tool to extract Claude Code session data and generate portable handoff documents for use with ChatGPT and other LLMs. It parses session logs, generates structured markdown using local Ollama models, and includes automatic secret redaction for API keys and credentials. Source here.
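The secret-redaction step could be sketched roughly like this; the patterns and placeholder names below are illustrative assumptions, not Carryover's actual rules:

```python
import re

# Hypothetical patterns illustrating the kind of redaction described above;
# the real tool's rule set may differ.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),   # OpenAI-style keys
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),      # AWS access key IDs
    (re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),                                           # key=value assignments
]

def redact(text: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running this over every extracted session line before writing the handoff document keeps credentials out of whatever LLM the document is pasted into next.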
Extractive text summarization in Python using classic NLP + graph ranking. It ended up powering multiple Reddit bots. Source here.
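A minimal sketch of that approach, assuming a TextRank-style pipeline (naive sentence splitting, word-overlap similarity, PageRank over the similarity graph); the helper names here are mine, not the project's:

```python
import re
from math import log

def sentences(text: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def similarity(a: str, b: str) -> float:
    """Word-overlap similarity, length-normalized (TextRank-style)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    overlap = len(wa & wb)
    if overlap == 0 or len(wa) < 2 or len(wb) < 2:
        return 0.0
    return overlap / (log(len(wa)) + log(len(wb)))

def summarize(text: str, k: int = 2, damping: float = 0.85, iters: int = 30) -> list[str]:
    """Rank sentences by PageRank over the similarity graph; return top k in document order."""
    sents = sentences(text)
    n = len(sents)
    if n <= k:
        return sents
    sim = [[similarity(sents[i], sents[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):  # power iteration
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                denom = sum(sim[j])
                if sim[j][i] and denom:
                    rank += scores[j] * sim[j][i] / denom
            new.append((1 - damping) + damping * rank)
        scores = new
    top = sorted(sorted(range(n), key=lambda i: -scores[i])[:k])
    return [sents[i] for i in top]
```

Because it only extracts sentences (never generates text), the output is guaranteed to be grammatical and faithful to the source, which is what makes it safe enough to run unattended inside a bot.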