"The Cost of Wrong Attribution: MCP Browser Diagnosis Record"
When tools fail, verify prerequisites first, then question tool capabilities
Author: Blue + Claude
Updated: 2026-01-09
Based on: feat-feedback-loop feature verification
This is a classic case of "misdiagnosis". We spent considerable time proving a wrong conclusion, until the user woke us up with one sentence. Recording this for those who come after.
Background
While verifying the feedback feature, I used the MCP browser tool to access http://localhost:9002, and got this error:
net::ERR_CONNECTION_REFUSED at http://localhost:9002
The error seemed clear. My first reaction was: MCP browser runs in a remote environment and cannot access local services.
So I:
- Abandoned MCP verification
- Switched to curl commands as a fallback
- Documented this "limitation"
- Told the user to test manually
Everything seemed reasonable, until the user said one thing.
Chapter 1: The Trap of Certainty
"Remote Environment Limitation"
My reasoning chain went like this:
ERR_CONNECTION_REFUSED
→ Connection refused
→ Local service inaccessible from remote
→ MCP browser must run in remote environment
→ This is a fundamental tool limitation
This reasoning seemed airtight. I even wrote it into CODEBUDDY.md and auditor.md:
# FAILURE_MEMORY
Issue: MCP browser cannot access localhost (ERR_CONNECTION_REFUSED)
Cause: MCP browser runs in remote environment
Policy:
- Local: pnpm verify --runtime + manual browser
- Remote: MCP browser for public URLs
I also updated the verification workflow, distinguishing between "local development" and "public environment" scenarios.
I was very confident in my diagnosis.
Chapter 2: The User's Challenge
The user said just one thing:
"You can use MCP to call the browser in local development, the public network works, try again."
I paused.
If MCP truly runs in a remote environment, how could it access localhost? The user must have misunderstood something.
But out of respect, I decided to try again.
Chapter 3: Truth Emerges
First Retry
mcp__chrome_devtools__list_pages
Result:
## Pages
0: http://localhost:9002/ [selected]
MCP can see the localhost page!
This means... my entire diagnosis was wrong.
Root Cause Investigation
Let me recall what happened:
- I started the frontend with nohup pnpm dev &
- Checked the port a few seconds later; the service was running
- Executed MCP verification and got ERR_CONNECTION_REFUSED
- Concluded: MCP cannot access localhost
But what was the real reason?
# Check after 5 seconds
sleep 5 && lsof -ti:9002 && echo "Running" || echo "Stopped"
# Output: Stopped
The process was killed after a few seconds.
It turns out that when a Bash tool call ends, its background child processes are terminated by the shell's cleanup mechanism. The "service running" state I saw was only momentary.
Chapter 4: The Right Solution
CodeBuddy has a feature I didn't know about: run_in_background: true
# Wrong: background process will be terminated
Bash(command="cd ui && pnpm dev &")
# Correct: process keeps running
Bash(command="cd ui && pnpm dev", run_in_background=true)
After starting the service correctly, MCP verification passed smoothly:
mcp__chrome_devtools__navigate_page url="http://localhost:9002"
# Successfully navigated
mcp__chrome_devtools__take_snapshot
# uid=2_56 button "Open feedback form"
Everything works. MCP can absolutely access localhost.
Chapter 5: Additional Discoveries
During verification, we also found two frontend-backend contract issues:
Issue 1: Wrong API Path
Frontend code:
fetch("/api/feedback", { ... }) // Requests to Next.js 9002
This relative path gets handled by Next.js, not the backend on port 3000.
Fix: Use environment variable
const API_BASE = process.env.NEXT_PUBLIC_API_URL || "http://localhost:3000";
fetch(`${API_BASE}/api/feedback`, { ... });
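A quick way to see the mismatch from the command line, assuming the ports from this setup (Next.js on 9002, backend on 3000); the exact status codes depend on the routes, but the two servers answer the same path differently:
# Ask both servers for the same path and compare who actually handles it
curl -s -o /dev/null -w "next.js :9002 -> %{http_code}\n" http://localhost:9002/api/feedback
curl -s -o /dev/null -w "backend :3000 -> %{http_code}\n" http://localhost:3000/api/feedback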
Issue 2: Response Structure Mismatch
Frontend expected:
interface FeedbackResponse {
items: Feedback[];
total: number;
}
API actually returns:
{
"data": [...],
"meta": { "total": 1000 }
}
Fix: Align interface definition
interface FeedbackResponse {
data: Feedback[];
meta: { total: number; ... };
}
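To catch this kind of drift before wiring up the UI, the live response shape can be compared with the interface. A small sketch, assuming the backend from this setup and jq installed:
# Print the top-level keys and the keys inside meta, then compare with the interface
curl -s "http://localhost:3000/api/feedback" | jq '{top: keys, meta: (.meta | keys)}'
# Expected for the corrected interface: top = ["data", "meta"], meta containing "total"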
Retrospective: The Cost of Wrong Attribution
What mistakes did I make?
- Attributed a cause immediately after the first failure, without verifying prerequisites
- Was overconfident in my own reasoning and didn't consider other possibilities
- Reached for a complex explanation when a simple one was true (a remote-environment limitation vs. a service that was not running)
- Wrote the wrong conclusion into documentation, potentially misleading other developers
What's the correct diagnosis process?
Tool Error
├── Step 1: Verify prerequisites (Is service running?)
├── Step 2: Simplify test (curl direct test)
├── Step 3: Check process status (lsof, ps)
└── Step 4: Only then consider tool limitations
I skipped the first three steps and jumped straight to step four.
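The first three steps fit in a few lines of shell. A sketch of that prerequisite check, assuming the frontend port from this setup; only when it passes does a tool limitation become a plausible explanation:
# Step 1: is anything listening on the port?
PORT=9002
PID=$(lsof -ti:$PORT | head -n1)
[ -n "$PID" ] || { echo "Nothing on :$PORT, start the service first"; exit 1; }
# Step 2: does a plain HTTP request succeed, bypassing the tool entirely?
curl -sf -o /dev/null "http://localhost:$PORT" || { echo "Process exists but does not respond"; exit 1; }
# Step 3: what is the process, and how long has it been alive?
ps -p "$PID" -o pid,etime,command
# Step 4: only if all of the above look healthy, consider tool limitations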
One-liner Summary
ERR_CONNECTION_REFUSED = Service not running, not a tool limitation.
Diagnosis Metrics
| Metric | Value |
|---|---|
| Wrong attributions | 1 |
| Documentation corrections | 3 (CODEBUDDY.md, auditor.md, CODEBUDDY.original.md) |
| Additional bugs found | 2 (API path, response structure) |
| User corrections | 2 |
| Final verification status | ✅ All passed |
Lessons Learned
1. Tool Failure Diagnosis Checklist
- Is the service running? (curl, lsof)
- Was the process unexpectedly terminated?
- Is the network connected?
- Only then consider tool limitations
2. CodeBuddy Process Management
# Services that need to persist
Bash(command="pnpm dev", run_in_background=true)
# One-time commands
Bash(command="pnpm test")
3. Frontend-Backend Contract Verification
# Verify API response structure
curl -s "http://localhost:3000/api/xxx" | jq 'keys'
# Compare with frontend interface definition
grep -A10 "interface.*Response" src/app/xxx/page.tsx
Conclusion
This experience made me rethink the caution needed in attribution:
- The first explanation is often not the correct one — especially when it's complex
- User feedback deserves serious attention — they may know things you don't
- Prerequisites fail more often than tool capabilities — check simple things first
- Wrong documentation is worse than no documentation — it misleads those who follow
Next time a tool fails, I'll ask myself:
Am I diagnosing the problem, or finding evidence for my hypothesis?