Carryover: Portable state handoff for LLMs
I built a portable work-state CLI tool to easily switch between LLMs
Let's take a local manifold walk to understand why small models are sometimes better, more resource-responsible, and won't burn a hole in your pocket.
I'm running into rate limits every hour or so of using Claude Code. I analyzed one session: less than 3% of my AI bill went to intelligence. The rest? Regurgitating stale, bloated context.
I was fixing a bug in my newborn's app and ChatGPT had a complete brain-body disconnect: it kept reasoning 'don't do the thing' → did it → 'oops I did this' → redo → 'oops I did it again'
Demos are easy. Production is hard. If you want internal knowledge search that actually works, you're not 'adding RAG'; you're building a pipeline where the LLM is the easiest part.
From Evals 101 to production-grade systems.
Product lessons: atomic desires, two-way doors, and keeping future complexity in the backseat.