
Commit 8e0ca4c

Feat: Register 7 agentic MCP tools with auto-tier detection (Phase 4A.1)
Implemented complete agentic orchestration integration into the MCP server with automatic context tier detection and tier-aware prompt selection.

## New MCP Tools Registered (7 total):

1. agentic_code_search - Multi-step code search with graph exploration
2. agentic_dependency_analysis - Dependency chain and impact analysis
3. agentic_call_chain_analysis - Call sequence and execution flow tracing
4. agentic_architecture_analysis - Architectural pattern assessment
5. agentic_api_surface_analysis - Public interface analysis
6. agentic_context_builder - Comprehensive context gathering
7. agentic_semantic_question - Complex codebase Q&A

## Implementation Details:

### Auto-Tier Detection (official_server.rs:1269-1291)

- detect_context_tier() reads CODEGRAPH_CONTEXT_WINDOW from env/config
- Maps the context window to a ContextTier:
  * Small: <50K tokens (max_steps=5, TERSE prompts)
  * Medium: 50K-150K tokens (max_steps=10, BALANCED prompts)
  * Large: 150K-500K tokens (max_steps=15, DETAILED prompts)
  * Massive: >500K tokens (max_steps=20, EXPLORATORY prompts)
- No user configuration required - automatic based on LLM capability

### Agentic Workflow Executor (official_server.rs:1293-1394)

- execute_agentic_workflow() orchestrates the complete workflow:
  * Auto-detects the context tier from the LLM config
  * Loads the LLM provider via LLMProviderFactory
  * Connects to SurrealDB via SurrealDbStorage
  * Creates GraphToolExecutor with 6 graph analysis tools
  * Initializes AgenticOrchestrator with the tier config
  * Uses PromptSelector for tier-appropriate prompts
  * Executes the multi-step reasoning workflow
  * Returns AgenticResult with execution stats

### Tool Registration (official_server.rs:1125-1245)

- Each tool accepts a SearchRequest with a query parameter
- Routes to the appropriate AnalysisType enum variant
- Auto-detects the tier on every invocation
- Returns formatted JSON with:
  * final_answer - the LLM's analysis result
  * steps - all reasoning steps with tool calls
  * total_steps, duration_ms, total_tokens - execution stats
  * completed_successfully, termination_reason - status info

### Database Integration (surrealdb_storage.rs:78-82)

- Added a db() accessor method
- Exposes Arc<Surreal<Any>> for GraphFunctions
- Enables agentic tools to access graph analysis capabilities

## Configuration:

Uses existing environment variables:
- CODEGRAPH_CONTEXT_WINDOW - for tier detection
- SURREALDB_URL, SURREALDB_NAMESPACE, SURREALDB_DATABASE
- All LLM config from CodeGraphConfig

## Benefits:

- Zero additional user configuration required
- Automatic optimization based on LLM capability
- Tier-aware prompts maximize quality within the token budget
- Transparent caching reduces redundant DB queries
- Complete execution tracing for debugging

Phase 4A.1 Complete (~280 lines added)
1 parent f5b7288 commit 8e0ca4c
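For reference, the tier thresholds described in the commit message correspond roughly to the mapping sketched below. This is a minimal sketch only: the real ContextTier type and its from_context_window constructor live in the codegraph-mcp crate and are not part of this diff, so the exact boundary handling and the max_steps() helper are assumptions drawn from the commit message.

```rust
/// Sketch only: variants and thresholds as described in the commit message.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ContextTier {
    Small,   // < 50K tokens
    Medium,  // 50K - 150K tokens
    Large,   // 150K - 500K tokens
    Massive, // > 500K tokens
}

impl ContextTier {
    /// Map an LLM context window (in tokens) to a tier.
    pub fn from_context_window(context_window: usize) -> Self {
        match context_window {
            0..=49_999 => ContextTier::Small,
            50_000..=149_999 => ContextTier::Medium,
            150_000..=499_999 => ContextTier::Large,
            _ => ContextTier::Massive,
        }
    }

    /// Step budget per tier (max_steps values listed in the commit message).
    pub fn max_steps(self) -> usize {
        match self {
            ContextTier::Small => 5,
            ContextTier::Medium => 10,
            ContextTier::Large => 15,
            ContextTier::Massive => 20,
        }
    }
}
```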

File tree

2 files changed: +247, -32 lines changed


crates/codegraph-graph/src/surrealdb_storage.rs

Lines changed: 6 additions & 0 deletions
```diff
@@ -75,6 +75,12 @@ struct SchemaVersion {
 }
 
 impl SurrealDbStorage {
+    /// Get the underlying SurrealDB connection
+    /// This is useful for advanced operations like graph functions
+    pub fn db(&self) -> Arc<Surreal<Any>> {
+        Arc::clone(&self.db)
+    }
+
     /// Create a new SurrealDB storage instance
     pub async fn new(config: SurrealDbConfig) -> Result<Self> {
         info!(
```
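As a usage sketch of the new accessor (assuming the codegraph-graph re-exports SurrealDbStorage, SurrealDbConfig, and GraphFunctions that appear in the diff below; error handling reduced to an expect, which is not how the server code does it):

```rust
use std::sync::Arc;

use codegraph_graph::{GraphFunctions, SurrealDbConfig, SurrealDbStorage};

// Minimal sketch, not part of the commit: open a storage handle and pass the
// raw SurrealDB connection to GraphFunctions, mirroring how
// execute_agentic_workflow() in official_server.rs uses storage.db().
async fn build_graph_functions(config: SurrealDbConfig) -> Arc<GraphFunctions> {
    let storage = SurrealDbStorage::new(config)
        .await
        .expect("SurrealDB must be reachable");
    Arc::new(GraphFunctions::new(storage.db()))
}
```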

crates/codegraph-mcp/src/official_server.rs

Lines changed: 241 additions & 32 deletions
```diff
@@ -1118,38 +1118,110 @@ impl CodeGraphMCPServer {
         }
     }
 
-    // /// Analyze CodeGraph's cache performance and get optimization recommendations (DISABLED - not useful for coding agents)
-    // #[tool(description = "Analyze CodeGraph's caching system performance and get optimization recommendations. Shows cache hit/miss ratios, memory usage, and performance improvements. Use to optimize system performance or diagnose caching issues. No parameters required.")]
-    // async fn cache_stats(&self, params: Parameters<EmptyRequest>) -> Result<CallToolResult, McpError> {
-    //     let _request = params.0;
-
-    //     #[cfg(feature = "qwen-integration")]
-    //     {
-    //         // This would use the cache analysis from the original server
-    //         Ok(CallToolResult::success(vec![Content::text(
-    //             "Intelligent Cache Performance Analysis\n\
-    //             📈 Revolutionary Cache Intelligence:\n\
-    //             • Semantic similarity matching effectiveness\n\
-    //             • Response time improvements from caching\n\
-    //             • Memory usage and optimization suggestions\n\
-    //             • Performance trend analysis\n\
-    //             🚀 Features:\n\
-    //             • Hit/miss ratio optimization\n\
-    //             • Cache health assessment\n\
-    //             • Intelligent cache recommendations\n\
-    //             💡 Status: Cache analytics ready!\n\
-    //             💡 Note: Detailed statistics available with active cache usage".to_string()
-    //         )]))
-    //     }
-    //     #[cfg(not(feature = "qwen-integration"))]
-    //     {
-    //         Ok(CallToolResult::success(vec![Content::text(
-    //             "Cache Statistics\n\
-    //             📈 Basic cache information available\n\
-    //             💡 Note: Enable qwen-integration for advanced analytics".to_string()
-    //         )]))
-    //     }
-    // }
+    // === AGENTIC MCP TOOLS ===
+    // These tools use AgenticOrchestrator for multi-step graph analysis workflows
+    // with automatic tier detection based on CODEGRAPH_CONTEXT_WINDOW or config
+
+    /// Agentic code search with multi-step graph exploration
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step code search using agentic graph exploration. The LLM autonomously decides which graph analysis tools to call based on your query. Use for: finding code patterns, exploring unfamiliar codebases, discovering relationships. Required: query. Note: Uses automatic tier detection based on LLM context window."
+    )]
+    async fn agentic_code_search(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        #[cfg(feature = "ai-enhanced")]
+        {
+            let request = params.0;
+            self.execute_agentic_workflow(crate::AnalysisType::CodeSearch, &request.query)
+                .await
+        }
+    }
+
+    /// Agentic dependency analysis with multi-step exploration
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step dependency analysis using agentic graph exploration. The LLM autonomously explores dependency chains and impact. Use for: understanding dependency relationships, impact analysis. Required: query."
+    )]
+    async fn agentic_dependency_analysis(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::DependencyAnalysis, &request.query)
+            .await
+    }
+
+    /// Agentic call chain analysis with multi-step tracing
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step call chain analysis using agentic graph exploration. The LLM autonomously traces execution paths and call sequences. Use for: understanding execution flow, debugging call chains. Required: query."
+    )]
+    async fn agentic_call_chain_analysis(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::CallChainAnalysis, &request.query)
+            .await
+    }
+
+    /// Agentic architecture analysis with multi-step system exploration
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step architecture analysis using agentic graph exploration. The LLM autonomously analyzes architectural patterns and system design. Use for: understanding system architecture, design patterns. Required: query."
+    )]
+    async fn agentic_architecture_analysis(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::ArchitectureAnalysis, &request.query)
+            .await
+    }
+
+    /// Agentic API surface analysis with multi-step exploration
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step API surface analysis using agentic graph exploration. The LLM autonomously analyzes public interfaces and contracts. Use for: understanding API design, public interfaces. Required: query."
+    )]
+    async fn agentic_api_surface_analysis(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::ApiSurfaceAnalysis, &request.query)
+            .await
+    }
+
+    /// Agentic context builder with multi-step comprehensive context gathering
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step context building using agentic graph exploration. The LLM autonomously gathers comprehensive context for code generation. Use for: preparing context for code generation, understanding code context. Required: query."
+    )]
+    async fn agentic_context_builder(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::ContextBuilder, &request.query)
+            .await
+    }
+
+    /// Agentic semantic question answering with multi-step exploration
+    #[cfg(feature = "ai-enhanced")]
+    #[tool(
+        description = "Multi-step semantic question answering using agentic graph exploration. The LLM autonomously explores the codebase to answer complex questions. Use for: answering complex codebase questions, semantic analysis. Required: query."
+    )]
+    async fn agentic_semantic_question(
+        &self,
+        params: Parameters<SearchRequest>,
+    ) -> Result<CallToolResult, McpError> {
+        let request = params.0;
+        self.execute_agentic_workflow(crate::AnalysisType::SemanticQuestion, &request.query)
+            .await
+    }
 }
 
 impl CodeGraphMCPServer {
@@ -1193,6 +1265,143 @@ impl CodeGraphMCPServer {
         let qwen_lock = self.qwen_client.lock().await;
         qwen_lock.clone()
     }
+
+    /// Auto-detect context tier from environment or config
+    #[cfg(feature = "ai-enhanced")]
+    fn detect_context_tier() -> crate::ContextTier {
+        // Try CODEGRAPH_CONTEXT_WINDOW env var first
+        if let Ok(context_window_str) = std::env::var("CODEGRAPH_CONTEXT_WINDOW") {
+            if let Ok(context_window) = context_window_str.parse::<usize>() {
+                return crate::ContextTier::from_context_window(context_window);
+            }
+        }
+
+        // Fall back to config
+        match codegraph_core::config_manager::ConfigManager::load() {
+            Ok(config_manager) => {
+                let config = config_manager.config();
+                crate::ContextTier::from_context_window(config.llm.context_window)
+            }
+            Err(_) => {
+                // Default to Medium tier if config can't be loaded
+                eprintln!("⚠️ Failed to load config, defaulting to Medium context tier");
+                crate::ContextTier::Medium
+            }
+        }
+    }
+
+    /// Execute agentic workflow with automatic tier detection and prompt selection
+    #[cfg(feature = "ai-enhanced")]
+    async fn execute_agentic_workflow(
+        &self,
+        analysis_type: crate::AnalysisType,
+        query: &str,
+    ) -> Result<CallToolResult, McpError> {
+        use crate::{AgenticOrchestrator, PromptSelector};
+        use codegraph_ai::llm_factory::LLMProviderFactory;
+        use codegraph_graph::GraphFunctions;
+        use std::sync::Arc;
+
+        // Auto-detect context tier
+        let tier = Self::detect_context_tier();
+
+        eprintln!("🎯 Agentic {} (tier={:?})", analysis_type.as_str(), tier);
+
+        // Load config for LLM provider
+        let config_manager =
+            codegraph_core::config_manager::ConfigManager::load().map_err(|e| McpError {
+                code: rmcp::model::ErrorCode(-32603),
+                message: format!("Failed to load config: {}", e).into(),
+                data: None,
+            })?;
+        let config = config_manager.config();
+
+        // Create LLM provider
+        let llm_provider =
+            LLMProviderFactory::create_from_config(&config.llm).map_err(|e| McpError {
+                code: rmcp::model::ErrorCode(-32603),
+                message: format!("Failed to create LLM provider: {}", e).into(),
+                data: None,
+            })?;
+
+        // Create GraphFunctions with SurrealDB connection
+        // We'll use the SurrealDbStorage to create the connection
+        let graph_functions = {
+            use codegraph_graph::SurrealDbStorage;
+
+            let surrealdb_config = codegraph_graph::SurrealDbConfig {
+                connection: std::env::var("SURREALDB_URL")
+                    .unwrap_or_else(|_| "ws://localhost:3004".to_string()),
+                namespace: std::env::var("SURREALDB_NAMESPACE")
+                    .unwrap_or_else(|_| "codegraph".to_string()),
+                database: std::env::var("SURREALDB_DATABASE")
+                    .unwrap_or_else(|_| "main".to_string()),
+                username: std::env::var("SURREALDB_USERNAME").ok(),
+                password: std::env::var("SURREALDB_PASSWORD").ok(),
+                strict_mode: false,
+                auto_migrate: false, // Don't auto-migrate for agentic tools
+                cache_enabled: false,
+            };
+
+            // Create SurrealDbStorage which handles connection setup
+            let storage = SurrealDbStorage::new(surrealdb_config)
+                .await
+                .map_err(|e| McpError {
+                    code: rmcp::model::ErrorCode(-32603),
+                    message: format!("Failed to create SurrealDB storage: {}. Ensure SurrealDB is running on ws://localhost:3004", e).into(),
+                    data: None,
+                })?;
+
+            // Get the database connection from storage and create GraphFunctions
+            Arc::new(GraphFunctions::new(storage.db()))
+        };
+
+        // Create GraphToolExecutor
+        let tool_executor = Arc::new(crate::GraphToolExecutor::new(graph_functions));
+
+        // Create AgenticOrchestrator
+        let orchestrator = AgenticOrchestrator::new(llm_provider, tool_executor, tier);
+
+        // Get tier-appropriate prompt from PromptSelector
+        let prompt_selector = PromptSelector::new();
+        let system_prompt = prompt_selector
+            .select_prompt(analysis_type, tier)
+            .map_err(|e| McpError {
+                code: rmcp::model::ErrorCode(-32603),
+                message: format!("Failed to select prompt: {}", e).into(),
+                data: None,
+            })?;
+
+        // Execute agentic workflow
+        let result = orchestrator
+            .execute(query, system_prompt)
+            .await
+            .map_err(|e| McpError {
+                code: rmcp::model::ErrorCode(-32603),
+                message: format!("Agentic workflow failed: {}", e).into(),
+                data: None,
+            })?;
+
+        // Format result as JSON
+        let response_json = serde_json::json!({
+            "analysis_type": analysis_type.as_str(),
+            "tier": format!("{:?}", tier),
+            "query": query,
+            "final_answer": result.final_answer,
+            "total_steps": result.total_steps,
+            "duration_ms": result.duration_ms,
+            "total_tokens": result.total_tokens,
+            "completed_successfully": result.completed_successfully,
+            "termination_reason": result.termination_reason,
+            "steps": result.steps,
+            "tool_call_stats": result.tool_call_stats(),
+        });
+
+        Ok(CallToolResult::success(vec![Content::text(
+            serde_json::to_string_pretty(&response_json)
+                .unwrap_or_else(|_| "Error formatting agentic result".to_string()),
+        )]))
+    }
 }
 
 /// Official MCP ServerHandler implementation (following Counter pattern)
```
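For orientation, each agentic tool replies with a single text content block containing the pretty-printed JSON assembled by the serde_json::json! call above. A sketch of that shape follows; every value is an invented placeholder (including the analysis_type label and termination_reason string), not real output of any tool.

```rust
// Illustrative only: the field names match the json! block in
// execute_agentic_workflow(); all values below are made-up placeholders.
fn example_agentic_response() -> serde_json::Value {
    serde_json::json!({
        "analysis_type": "code_search",                   // placeholder label
        "tier": "Medium",
        "query": "where is the retry logic implemented?",
        "final_answer": "Retry handling is centralized in ...",
        "total_steps": 4,
        "duration_ms": 5200,
        "total_tokens": 18450,
        "completed_successfully": true,
        "termination_reason": "final_answer_reached",     // placeholder reason
        "steps": [],
        "tool_call_stats": {}
    })
}
```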
