Day 5: Shell Escaping Hell — When CI Meets AI
Building a pipeline that scans code, finds issues, and auto-fixes them with Claude. Should be simple. Wasn't.
The Find
The scanner itself had a bug. It flagged import { type AuthContext } as an unused import because it searched for the string type AuthContext instead of just AuthContext. A classic false positive that made Claude remove imports that were actually in use, breaking the build.
Scanner: "unused_imports" severity:medium file:src/auth/types.ts
Found: type AuthContext (not actually unused)
Result: Build breaks when Claude removes it
22 issues → 13 after fixing this. The scanner was generating ~9 false positives per scan.
The Fix
Fixed the import name extraction to strip the type keyword prefix. Now it properly finds AuthContext usage in function signatures.
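A minimal sketch of what that fix looks like (function names here are illustrative, not the actual scanner code): strip the inline type modifier from the import specifier before searching the file for usage.

```typescript
// Hypothetical sketch of the scanner fix: a specifier like "type AuthContext"
// must be reduced to "AuthContext" before the usage search, or every
// type-only import looks unused.
const bareImportName = (spec: string): string =>
  spec.trim().replace(/^type\s+/, "");

// Word-bounded usage check so "AuthContextFoo" doesn't count as a hit
// for "AuthContext".
const isUsed = (spec: string, source: string): boolean =>
  new RegExp(`\\b${bareImportName(spec)}\\b`).test(source);

console.log(isUsed("type AuthContext", "function login(ctx: AuthContext) {}")); // → true
```

With the prefix stripped, the usage in the function signature is found and the import is no longer flagged.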
The real nightmare was the autofix workflow. Getting Claude to run in GitHub Actions took 6 iterations of shell escaping hell. Every layer has different rules: YAML → bash → env vars → JSON → Claude. The solution: write everything to files, pipe via stdin, never interpolate complex data in bash variables.
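The files-plus-stdin pattern can be sketched like this (a hypothetical Node harness, with cat standing in for the real agent CLI; the file path and payload are illustrative). The point: the payload never passes through a shell, so no layer gets a chance to re-interpret quotes, backticks, or dollar signs.

```typescript
import { writeFileSync, openSync } from "node:fs";
import { spawnSync } from "node:child_process";

// A payload full of characters that break bash interpolation:
// quotes, backticks, ${}, newlines.
const issues = JSON.stringify([
  { file: "src/auth/types.ts", msg: 'uses `${}` and "quotes"\nand newlines' },
]);

// Write it to a file -- never into a shell variable.
writeFileSync("/tmp/issues.json", issues);

// Pipe the file to the process via stdin. Args are an array and there is
// no `shell: true`, so nothing is ever parsed by bash.
const result = spawnSync("cat", [], {
  stdio: [openSync("/tmp/issues.json", "r"), "pipe", "inherit"],
  encoding: "utf8",
});

// The payload survives byte-for-byte.
console.log(result.stdout === issues);
```

The same idea applies in the workflow itself: have one step write the prompt and issue data to files, and have the next step redirect those files to the CLI's stdin instead of interpolating them into the command line.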
The Score
Health score: 22 → 13 issues
The Takeaway
Running AI agents in CI is a shell escaping nightmare, and it isn't cheap either. Claude Sonnet burned $0.56 in 30 turns for one autofix run. At $20/mo, you're unprofitable after 36 runs ($0.56 × 36 ≈ $20.16). This is why everyone does seat-based pricing instead of usage-based for developer tools.
The false positive bug is exactly why self-healing matters. Scanner flagged it, Claude "fixed" it, TypeScript caught the break. If the system logged that outcome, it could auto-suppress that pattern next time.
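That feedback loop could be as simple as this sketch (entirely hypothetical, since the logging doesn't exist yet; names and the outcome record shape are invented for illustration): record whether each fix survived the build, and suppress findings whose rule/pattern combination has broken it before.

```typescript
// Hypothetical auto-suppression loop: log each fix outcome, then skip
// findings whose (rule, pattern) pair has previously broken the build.
type Outcome = { rule: string; pattern: string; buildPassed: boolean };

// In a real system this would be persisted across CI runs.
const history: Outcome[] = [
  { rule: "unused_imports", pattern: "type-only import", buildPassed: false },
];

const shouldSuppress = (rule: string, pattern: string): boolean =>
  history.some(
    (o) => o.rule === rule && o.pattern === pattern && !o.buildPassed,
  );

console.log(shouldSuppress("unused_imports", "type-only import")); // → true
```

The day-5 bug would have been suppressed on its second occurrence instead of breaking the build again.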