
Commit c9b4780 (parent 2ce65b0)

Claude Code: Update documentation

Session duration: unknown
Files changed: 1 added, 0 modified, 0 deleted
Summary: .claude/summaries/20251107T095251Z_summary.md

1 file changed: +345 −0 lines

TESTING.md (345 additions, 0 deletions)

# Testing CodeGraph MCP Server

This guide explains how to test the CodeGraph MCP server after indexing your project.

## Prerequisites

1. **Python 3.8+** with pip
2. **SurrealDB** running on port 3004
3. **CodeGraph binary** built and ready
4. **Project indexed** with `codegraph index`

## Setup

### 1. Install Python Dependencies

```bash
pip install -r requirements-test.txt
```

This installs `python-dotenv` for loading `.env` configuration.

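Under the hood, the configuration step is plain `python-dotenv` usage. A minimal sketch of what `test_mcp_tools.py` is assumed to do (the script itself is the source of truth):

```python
import os
from pathlib import Path
from dotenv import load_dotenv  # provided by python-dotenv

env_path = Path(".env")
if env_path.exists():
    load_dotenv(env_path)  # copies .env entries into os.environ (existing vars win)
    print(f"✓ Loaded configuration from {env_path.resolve()}")

provider = os.getenv("CODEGRAPH_LLM_PROVIDER", "openai")
model = os.getenv("CODEGRAPH_MODEL", "")
print(f"LLM Provider: {provider}")
print(f"LLM Model: {model}")
```
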
### 2. Verify Your .env Configuration

Make sure your `.env` file has the necessary settings:

```bash
# LLM Provider Configuration
CODEGRAPH_LLM_PROVIDER=openai       # or anthropic, ollama, lmstudio
CODEGRAPH_MODEL=gpt-5-codex         # your model

# API Keys (as needed)
OPENAI_API_KEY=sk-...               # if using OpenAI
ANTHROPIC_API_KEY=sk-ant-...        # if using Anthropic

# Embedding Provider
CODEGRAPH_EMBEDDING_PROVIDER=jina   # or ollama, openai
JINA_API_KEY=jina_...               # if using Jina

# Optional: Ollama settings
CODEGRAPH_OLLAMA_URL=http://localhost:11434
```

### 3. Index Your Project

Before testing, make sure your codebase is indexed:

```bash
# Index the current project
codegraph index .

# Or index a specific path
codegraph index /path/to/your/code
```

This creates the `.codegraph/` directory with the indexed data.

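A quick sanity check that the index exists (the exact files inside `.codegraph/` vary by CodeGraph version, so this only confirms the directory is present and non-empty):

```python
from pathlib import Path

index_dir = Path(".codegraph")
if index_dir.is_dir() and any(index_dir.iterdir()):
    print(f"✓ Found index at {index_dir.resolve()}")
else:
    print("✗ No .codegraph/ directory - run `codegraph index .` first")
```
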
## Running Tests

### Basic Usage

Simply run the test script:

```bash
python3 test_mcp_tools.py
```

The script will:
1. ✅ Load configuration from `.env`
2. ✅ Display the configuration being used
3. ✅ Start the MCP server
4. ✅ Run the MCP handshake (initialize + notifications; see the sketch after this list)
5. ✅ Execute 6 test tool calls
6. ✅ Automatically extract node UUIDs for graph operations

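Steps 3-4 amount to spawning the server over stdio and exchanging the standard MCP handshake. A minimal sketch (the launch command is an assumption - substitute whatever command `test_mcp_tools.py` actually runs; the protocol version matches the one shown in the expected output below):

```python
import json
import os
import subprocess

# NOTE: the launch args are an assumption - add whatever subcommand/flags your
# codegraph build needs to serve MCP over stdio.
server_cmd = [os.environ.get("CODEGRAPH_BIN", "codegraph")]
proc = subprocess.Popen(server_cmd, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

def send(obj):
    """Write one newline-delimited JSON-RPC message to the server."""
    proc.stdin.write(json.dumps(obj) + "\n")
    proc.stdin.flush()

# Step 4a: initialize request
send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2025-06-18",
                 "capabilities": {},
                 "clientInfo": {"name": "test_mcp_tools", "version": "0.0.0"}}})
print(proc.stdout.readline())   # server capabilities

# Step 4b: initialized notification (no id, no response expected)
send({"jsonrpc": "2.0", "method": "notifications/initialized"})
```
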
### What Gets Tested

The script tests the following MCP tools:

1. **`search`** - Semantic code search
   - Query: "configuration management"
   - Tests basic semantic search functionality

2. **`vector_search`** - Vector similarity search
   - Query: "async function implementation"
   - Returns code nodes with embeddings
   - UUID extracted for graph operations (see the sketch after this list)

3. **`graph_neighbors`** - Graph traversal
   - Uses UUID from vector_search
   - Finds connected nodes in the code graph

4. **`graph_traverse`** - Deep graph exploration
   - Uses UUID from vector_search
   - Traverses 2 levels deep

5. **`semantic_intelligence`** - LLM-powered analysis
   - Query: "How is configuration loaded from .env files?"
   - Tests LLM integration with your configured provider

6. **`impact_analysis`** - Change impact prediction
   - Target: `load` function in `config_manager.rs`
   - Analyzes what would be affected by changes

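The UUID auto-fill for tests 2-4 boils down to grabbing the first UUID-shaped string from the `vector_search` response and substituting it into the follow-up calls. A sketch of that logic (the `node_id` and `limit` argument names are illustrative assumptions - check the tool schema the server advertises):

```python
import json
import re
from typing import Optional

UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I)

def graph_neighbors_request(vector_search_output: str, req_id: int = 103) -> Optional[dict]:
    """Build a graph_neighbors tools/call from the first UUID found in the
    raw vector_search response, mirroring the script's auto-fill step."""
    match = UUID_RE.search(vector_search_output)
    if not match:
        return None  # surfaces as "No UUID found in vector_search output"
    return {
        "jsonrpc": "2.0", "id": req_id, "method": "tools/call",
        "params": {"name": "graph_neighbors",
                   # "node_id" and "limit" are assumed argument names - check
                   # the tool schema reported by the server
                   "arguments": {"node_id": match.group(0), "limit": 10}},
    }

example = '{"id": "1b4e28ba-2fa1-11d2-883f-0016d3cca427", "name": "load"}'
print(json.dumps(graph_neighbors_request(example), indent=2))
```
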
### Expected Output

```
✓ Loaded configuration from /path/to/.env

========================================================================
CodeGraph Configuration:
========================================================================
LLM Provider: openai
LLM Model: gpt-5-codex
Embedding Provider: jina
Protocol Version: 2025-06-18
========================================================================

Starting CodeGraph MCP Server...

→ {"jsonrpc":"2.0","id":1,"method":"initialize",...}
========================================================================
[Server response with capabilities]

### 1. search (semantic search) ###
→ {"jsonrpc":"2.0","method":"tools/call","params":{"name":"search"...
========================================================================
[Search results]

### 2. vector_search ###
→ {"jsonrpc":"2.0","method":"tools/call","params":{"name":"vector_search"...
========================================================================
✓ Detected node UUID: abc123...
[Vector search results]

### 3. graph_neighbors (auto-fill node UUID) ###
Using node UUID from vector_search: abc123...
→ {"jsonrpc":"2.0","method":"tools/call","params":{"name":"graph_neighbors"...
========================================================================
[Graph neighbors]

... (and so on)

✅ Finished all tests.
```

## Customizing Tests

### Override Configuration

You can override `.env` settings with environment variables:

```bash
# Use a different model
CODEGRAPH_MODEL=gpt-4o python3 test_mcp_tools.py

# Use local Ollama
CODEGRAPH_LLM_PROVIDER=ollama \
CODEGRAPH_MODEL=qwen2.5-coder:14b \
python3 test_mcp_tools.py
```

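This works because of how `python-dotenv` handles precedence (assuming `test_mcp_tools.py` uses the library defaults): variables already set in the environment are not overwritten by `.env`.

```python
import os
from dotenv import load_dotenv

# Assumption: the script calls load_dotenv() with its default override=False,
# so anything already in the environment wins over the .env value.
os.environ["CODEGRAPH_MODEL"] = "gpt-4o"   # e.g. set on the command line
load_dotenv()                              # .env's CODEGRAPH_MODEL is ignored
print(os.environ["CODEGRAPH_MODEL"])       # -> gpt-4o
```
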
### Use Custom Binary

Point to a specific codegraph binary:

```bash
# Use debug build
CODEGRAPH_BIN=./target/debug/codegraph python3 test_mcp_tools.py

# Use release build
CODEGRAPH_BIN=./target/release/codegraph python3 test_mcp_tools.py

# Use custom command
CODEGRAPH_CMD="cargo run -p codegraph-mcp --bin codegraph --" python3 test_mcp_tools.py
```

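How the script is assumed to resolve these variables into a launch command (a sketch; the actual resolution order lives in `test_mcp_tools.py`):

```python
import os
import shlex
from typing import List

def resolve_server_cmd() -> List[str]:
    """Assumed precedence: CODEGRAPH_CMD (full command line) over
    CODEGRAPH_BIN (single binary path) over `codegraph` on PATH."""
    custom_cmd = os.environ.get("CODEGRAPH_CMD")
    if custom_cmd:
        return shlex.split(custom_cmd)
    return [os.environ.get("CODEGRAPH_BIN", "codegraph")]

print(resolve_server_cmd())
```
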
### Modify Test Queries

Edit `test_mcp_tools.py` to change the test queries:

```python
TESTS = [
    ("1. search (semantic search)", {
        "jsonrpc": "2.0", "method": "tools/call",
        "params": {"name": "search", "arguments": {
            "query": "YOUR QUERY HERE",  # <-- Change this
            "limit": 5                   # <-- Or this
        }},
        "id": 101
    }),
    # ... more tests
]
```

## Troubleshooting

### Error: "python-dotenv not installed"

```bash
pip install python-dotenv
```

### Error: "No .env file found"

Create a `.env` file in the project root with your configuration:

```bash
cat > .env << 'EOF'
CODEGRAPH_LLM_PROVIDER=openai
CODEGRAPH_MODEL=gpt-4o
OPENAI_API_KEY=sk-your-key-here
EOF
```

### Error: "No UUID found in vector_search output"
216+
217+
This means the indexed database doesn't have nodes yet. Index your project first:
218+
219+
```bash
220+
codegraph index .
221+
```
222+
223+
### Error: Connection refused
224+
225+
Make sure SurrealDB is running:
226+
227+
```bash
228+
# Start SurrealDB
229+
surreal start --bind 0.0.0.0:3004 --user root --pass root file://data/surreal.db
230+
```
231+
232+
### Server startup fails

Check if the codegraph binary is available:

```bash
# Test if binary exists
which codegraph

# Or use debug build
CODEGRAPH_BIN=./target/debug/codegraph python3 test_mcp_tools.py
```

### Slow semantic_intelligence responses

The `semantic_intelligence` tool uses your configured LLM provider. Response time depends on:
- LLM provider speed (cloud vs local)
- Model size (larger = slower but better)
- Context size (more context = slower)

Reduce `max_context_tokens` in the test for faster responses:

```python
("5. semantic_intelligence", {
    "params": {"name": "semantic_intelligence", "arguments": {
        "query": "...",
        "max_context_tokens": 5000  # Reduced from 10000
    }},
    "id": 105
}),
```

## Advanced Testing

### Test with Different Providers

```bash
# Test with OpenAI
CODEGRAPH_LLM_PROVIDER=openai python3 test_mcp_tools.py

# Test with Anthropic
CODEGRAPH_LLM_PROVIDER=anthropic \
CODEGRAPH_MODEL=claude-3-5-sonnet-20241022 \
python3 test_mcp_tools.py

# Test with local Ollama
CODEGRAPH_LLM_PROVIDER=ollama \
CODEGRAPH_MODEL=qwen2.5-coder:14b \
python3 test_mcp_tools.py
```

### Capture Output for Analysis

```bash
# Save full output
python3 test_mcp_tools.py 2>&1 | tee test_output.log

# Filter for errors only
python3 test_mcp_tools.py 2>&1 | grep -i error

# Show only test summaries
python3 test_mcp_tools.py 2>&1 | grep "^###"
```

### Debug Mode

For more verbose output, modify the script to show raw JSON:

```python
# In test_mcp_tools.py, change:
def send(proc, obj, wait=2.0, show=True):  # Keep show=True for debugging
```

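The `send()` helper is the single choke point for traffic to the server, so it is the natural place to add logging. A self-contained sketch of what such a helper can look like (the real implementation in `test_mcp_tools.py` may differ in details):

```python
import json
import select
import sys
import time

def send(proc, obj, wait=2.0, show=True):
    """Write one JSON-RPC message to the server's stdin, then echo anything
    the server prints on stdout for up to `wait` seconds (Unix-only select)."""
    line = json.dumps(obj)
    if show:
        print(f"→ {line}")
    proc.stdin.write(line + "\n")
    proc.stdin.flush()

    deadline = time.time() + wait
    while time.time() < deadline:
        ready, _, _ = select.select([proc.stdout], [], [], 0.1)
        if ready:
            out = proc.stdout.readline()
            if out:
                sys.stdout.write(out)
```
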
## Integration with CI/CD

You can use this script in CI/CD pipelines:

```yaml
# .github/workflows/test-mcp.yml
name: Test MCP Server

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install dependencies
        run: pip install -r requirements-test.txt

      - name: Start SurrealDB
        run: |
          surreal start --bind 0.0.0.0:3004 &
          sleep 2

      - name: Build CodeGraph
        run: cargo build --release -p codegraph-mcp

      - name: Index test project
        run: ./target/release/codegraph index .

      - name: Run MCP tests
        env:
          CODEGRAPH_LLM_PROVIDER: ollama
          CODEGRAPH_MODEL: qwen2.5-coder:7b
        run: python3 test_mcp_tools.py
```

## See Also

- [SETUP_VERIFICATION.md](SETUP_VERIFICATION.md) - Setup verification guide
- [schema/README.md](schema/README.md) - Database schema documentation
- [CHANGES_SUMMARY.md](CHANGES_SUMMARY.md) - Recent changes summary
