lspforge depends on 11 repositories I don’t control. Anything in any of them can break my tool without warning. I needed a way to know before my users did.
Here’s exactly what I built, and what it cost.
The Problem
When you build a tool that integrates with other tools — language servers, AI coding tools, CLI utilities — you inherit all of their release schedules. A new Claude Code update. A rust-analyzer binary format change. A Copilot CLI config schema revision. Any of them can silently break your tool overnight.
The only alternative to monitoring is waiting for GitHub issues.
The Architecture
Every morning at 9 AM, an n8n workflow checks the latest releases from all 11 repos lspforge depends on. Anything shipped in the last 3 days gets sent to Gemini 2.5 Flash with one question: does this break lspforge?
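The "shipped in the last 3 days" filter is the simplest part of the workflow. A minimal sketch, assuming release objects shaped like GitHub's /repos/{owner}/{repo}/releases API response (the function name and structure are mine, not lifted from the actual n8n nodes):

```python
from datetime import datetime, timedelta, timezone

def recent_releases(releases, days=3, now=None):
    """Return releases published within the last `days` days.

    `releases` is a list of dicts shaped like GitHub's
    /repos/{owner}/{repo}/releases response, each carrying an
    ISO-8601 `published_at` timestamp.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    recent = []
    for rel in releases:
        # GitHub timestamps end in "Z"; normalize for fromisoformat
        # on Python versions older than 3.11.
        published = datetime.fromisoformat(
            rel["published_at"].replace("Z", "+00:00")
        )
        if published >= cutoff:
            recent.append(rel)
    return recent
```

Anything this returns gets its release notes pulled and forwarded to the model; an empty list short-circuits the run.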
The prompt includes:
- The release notes and changelog from the upstream repo
- The relevant source files from lspforge that interact with it
- A structured output schema asking for: status (GREEN/YELLOW/RED), affected_files, migration_steps, and severity
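That schema can be sketched as a plain dict in the OpenAPI-style subset Gemini accepts for response schemas. The field names come from the list above; the exact shape and descriptions here are my illustration, not the production schema:

```python
# Sketch of the structured-output schema, in the OpenAPI-style subset
# the Gemini API accepts. Field names match the post; descriptions
# and the required list are illustrative.
REPORT_SCHEMA = {
    "type": "object",
    "properties": {
        "status": {
            "type": "string",
            "enum": ["GREEN", "YELLOW", "RED"],
            "description": "Does this release break lspforge?",
        },
        "affected_files": {
            "type": "array",
            "items": {"type": "string"},
        },
        "migration_steps": {
            "type": "array",
            "items": {"type": "string"},
        },
        "severity": {"type": "string"},
    },
    "required": ["status", "affected_files", "migration_steps", "severity"],
}
```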
GREEN: nothing to do. The Telegram channel gets an “All clear” message.
RED: Gemini cross-references the actual source code and creates a GitHub issue automatically — with severity, affected files, and migration steps already written.
Five minutes after the workflow runs, I check Telegram. Either “All clear” or a prioritized action list I can work through before lunch.
The Gotchas
Two things broke the first version, both non-obvious:
Gemini response schema naming: the API accepts two field names for structured output, responseSchema and responseJsonSchema. Send the wrong one and the API silently ignores it: no error, no 400 status code, just an unstructured response. I only caught it by reading the raw API response and noticing the output didn't conform to my schema.
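The failure mode makes sense once you see where the field lives: the schema rides inside generationConfig, so a misnamed key is just an unknown field the API drops. A sketch of the request body, per my reading of the Gemini REST API (verify the key name against the docs for your API version; build_request is a hypothetical helper):

```python
# Sketch of a generateContent request body. responseSchema takes
# Gemini's OpenAPI-style schema subset; responseJsonSchema takes
# standard JSON Schema. Use the wrong one for your schema format and
# the API drops it without an error, and the model answers in free
# text instead of JSON.
def build_request(prompt, schema):
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "responseMimeType": "application/json",
            "responseSchema": schema,  # misnamed key => silently ignored
        },
    }
```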
Thinking token budget: Gemini 2.5 Flash’s “thinking” tokens count against your output token limit. On yes/no classification tasks, the model was burning ~980 tokens reasoning before writing its answer — then running out of budget before outputting actual JSON. Fix: thinkingBudget: 0 for classification tasks where you don’t need the reasoning trace.
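In request terms, the fix is one line in generationConfig. A hedged sketch (field names follow the Gemini 2.5 docs as I understand them; the output token limit here is illustrative):

```python
# For yes/no classification, zero out the thinking budget so reasoning
# tokens don't consume the output budget before the JSON is emitted.
CLASSIFY_CONFIG = {
    "responseMimeType": "application/json",
    "thinkingConfig": {"thinkingBudget": 0},  # no reasoning trace needed
    "maxOutputTokens": 1024,
}
```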
I debugged both of these using Claude Code — an AI agent reading Gemini’s documentation, forming hypotheses, testing fixes against another AI’s API. The irony was not lost.
The Results
In the first two weeks, it caught 3 upstream breaking changes before any user reported them.
One was a Claude Code update where their LSP server initialization order changed, creating a race condition with my config. Without monitoring, I would have found out via a GitHub issue from a frustrated user. Instead, I had a fix ready before most users had updated.
The Cost
n8n self-hosted: $0
Gemini 2.5 Flash API: effectively $0 (at roughly 11 repo checks × ~2k tokens each = 22k tokens/day, the cost is below $0.01/day at current rates)
Total: $0/month.
What This Is and Isn’t
This isn’t a script that pings a server. It’s an agent that reads release notes, understands my codebase, and tells me what to fix — before users hit the break.
The distinction matters. A ping script tells you something changed. This tells you whether the change matters and what to do about it. That’s the difference between alerting and triage.
One person. Zero manual monitoring. 11 repos covered.
This is post 4 of 5 in the Agentic Development series about building lspforge. GitHub: svivekvarma/lspforge