Claude Code Turned a Two-Command Deploy Into an Investigation
The task was simple. I had markdown files. I needed them on my Hugo blog. That’s `git add`, `git push`. Two commands. The CI pipeline handles the rest.
Here’s what Claude Code actually did:
- Explored my entire repo structure
- Read every existing blog post to understand front matter conventions
- Fetched the README from a separate GitHub repo for a project description
- Created task lists to track its own progress
- Ran a humanizer skill on the text
- Renamed files
- Built locally with `hugo --minify` to check output
- Pushed to GitHub
- Watched it fail
- Pulled the CI logs
- Diagnosed that `hugo-version: "latest"` in my GitHub Actions workflow was pulling Hugo v0.157.0, which removed `site.Author` from the template API, breaking PaperMod’s RSS template
- Pinned Hugo to v0.145.0
- Pushed again
- Watched it succeed
- Then discovered one post still wasn’t rendering because the date was set to 10 AM Pacific, which is 6 PM UTC, and Hugo skips future-dated posts by default, and CI runs in UTC
- Fixed the date
- Pushed a third time
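The version pin is a one-line change in the workflow file. A sketch of the relevant step, assuming a typical `peaceiris/actions-hugo` setup (the actual workflow may differ):

```yaml
# Sketch of the Hugo setup step in .github/workflows/ — assumes the
# peaceiris/actions-hugo action; names here are illustrative.
- name: Setup Hugo
  uses: peaceiris/actions-hugo@v3
  with:
    # "latest" silently picked up v0.157.0 and broke the theme's
    # templates; pin to a known-good release instead.
    hugo-version: "0.145.0"
    extended: true
```

Pinning trades automatic updates for reproducible builds: upgrades become a deliberate commit you can bisect, not a surprise in CI.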
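The timezone trap in that last fix is easy to reproduce. A minimal check (the date here is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A post front-matter date of 10 AM Pacific on a winter day (PST = UTC-8).
post_date = datetime(2025, 1, 15, 10, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# CI runs in UTC, so Hugo compares the post date against UTC "now".
as_utc = post_date.astimezone(ZoneInfo("UTC"))
print(as_utc.strftime("%H:%M"))  # 18:00 -- still "in the future" for a morning build
```

Running `hugo list future` locally surfaces posts that will be skipped, and `hugo --buildFuture` overrides the default if you actually want them rendered.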
Three commits and 30+ tool calls for what should have been two commands.
The thing is, it wasn’t wrong. My CI had been broken since January and I didn’t know it. Every push since my last successful deploy would have failed silently. Claude Code found a real bug. It just found it in the most Claude Code way possible: by turning a simple task into a forensic investigation and narrating every step.
The lesson, if there is one, is that AI agents optimize for correctness over efficiency. They will verify everything. They will check the build. They will read the logs. They will not just type `git push` and walk away. Whether that’s a feature or a flaw depends on whether your CI is actually broken.
Mine was.