Feature/custom ai roles #310
Conversation
📝 Walkthrough

Adds role-based personas and loading to Cortex: three YAML role definitions, a role loader, CLI and LLM interpreter updates to accept and use a role/system_prompt, planner and confirmation utilities, and new parallel-LLM tests plus a small integration test change.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as CortexCLI
    participant Loader as RoleLoader
    participant Interp as Interpreter
    participant LLM as LLM Provider
    User->>CLI: run command (e.g., --role security install ...)
    CLI->>Loader: load_role("security")
    Loader-->>CLI: role dict (system_prompt)
    CLI->>Interp: init / parse_with_context(role="security", user_input)
    Interp->>Interp: _get_system_prompt() (prefix with role.system_prompt)
    Interp->>LLM: call provider with role-aware prompt
    LLM-->>Interp: JSON response containing "commands"
    Interp->>Interp: _parse_commands() -> list[str]
    Interp-->>CLI: commands
    CLI-->>User: display commands / next actions
```
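A minimal Python sketch of this flow, using the names from the diagram; the exact signatures are assumptions, and note that the interpreter module moved from `LLM/interpreter.py` to `cortex/llm/interpreter.py` during the PR:

```python
from cortex.roles.loader import load_role
from cortex.llm.interpreter import CommandInterpreter

# The CLI resolves the persona first; load_role falls back to "default"
# when the requested role file does not exist.
role = load_role("security")
print(role.get("system_prompt", ""))

# The interpreter prepends role["system_prompt"] to its base rules and asks
# the provider for a JSON object of the form {"commands": [...]}.
interp = CommandInterpreter(api_key="sk-...", provider="openai", role="security")
commands = interp.parse_with_context("install nginx", system_info={"os": "Ubuntu 24.04"})
for cmd in commands:
    print(cmd)
```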
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (2 warnings)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
requirements.txt (1)
11-15: Remove duplicate PyYAML entry.

The requirements file contains two entries for PyYAML:

- Line 11: `pyyaml>=6.0.0`
- Line 15: `PyYAML==6.0.3`

Package names are case-insensitive in pip, so these refer to the same package. Having both creates ambiguity during dependency resolution. Keep only the exact pin if that's the desired constraint, or keep only the range constraint if flexibility is needed.
🔎 Proposed fix
Remove one of the duplicate entries. If the exact version pin is required:
```diff
 # Configuration
-pyyaml>=6.0.0
+# Removed duplicate - using exact pin below
 # Type hints for older Python versions
 typing-extensions>=4.0.0
 PyYAML==6.0.3
```

Or if the range is preferred:

```diff
 # Configuration
 pyyaml>=6.0.0
 # Type hints for older Python versions
 typing-extensions>=4.0.0
-PyYAML==6.0.3
```

LLM/interpreter.py (1)
14-26: Add type hints and a docstring for the updated constructor.

The constructor now accepts a `role` parameter but lacks type hints for the new attributes (`self.role`, `self.role_name`, `self.system_prompt`) and doesn't document the new parameter in a docstring.

🔎 Proposed fix

```diff
 def __init__(
     self,
     api_key: str,
     provider: str = "openai",
     role: str = "default",
     model: Optional[str] = None
 ):
+    """
+    Initialize the CommandInterpreter.
+
+    Args:
+        api_key: API key for the LLM provider
+        provider: LLM provider (openai, claude, ollama)
+        role: Role name for specialized system prompts (default: "default")
+        model: Optional model override
+    """
     from cortex.roles.loader import load_role

     self.api_key = api_key
     self.provider = APIProvider(provider.lower())
     self.role_name = role
-    self.role = load_role(role)
-    self.system_prompt = self.role.get("system_prompt", "")
+    self.role: dict = load_role(role)
+    self.system_prompt: str = self.role.get("system_prompt", "")
     if model:
         self.model = model
```

As per coding guidelines, type hints and docstrings are required for all public APIs.
🧹 Nitpick comments (7)
src/intent/detector.py (2)
7-11: Consider a more specific type hint for the `details` field.

The `details` field uses `dict | None`, which accepts any dictionary. Consider using `dict[str, Any]` or a more specific type to improve type safety and clarity.

🔎 Proposed type hint refinement

```diff
+from typing import Any
+
 @dataclass
 class Intent:
     action: str
     target: str
-    details: dict | None = None
+    details: dict[str, Any] | None = None
```
32-48: Consider refactoring the GPU special-case logic for clarity.

The GPU handling is split across two locations (lines 34-35 skip it in the main loop, lines 43-48 handle it separately). While functionally correct, this pattern increases cognitive load.
Consider extracting GPU logic into a dedicated helper method for better readability.
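A minimal sketch of that extraction, under stated assumptions: `_detect_gpu_intent` and `_detect_package_intents` are hypothetical names, the latter standing in for the existing main loop, which is not shown in this review:

```python
# Hypothetical refactor: every GPU special case lives in one helper.
def _detect_gpu_intent(self, text: str) -> Intent | None:
    """Return a GPU-configuration Intent if the input mentions GPU setup."""
    if "gpu" in text.lower():
        return Intent(action="configure", target="gpu")
    return None

def detect(self, text: str) -> list[Intent]:
    intents = self._detect_package_intents(text)  # stand-in for the existing loop
    gpu_intent = self._detect_gpu_intent(text)
    if gpu_intent is not None:
        intents.append(gpu_intent)
    return intents
```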
src/test_intent_detection.py (1)
4-17: LGTM! Basic intent detection tests are correct.

The tests appropriately validate the core behavior of IntentDetector:
- Correctly detects multiple package intents and GPU configuration
- Validates empty input handling
Optional: Consider adding test cases for edge scenarios such as the following (a sketch of one appears after this list):
- Verify/check action detection
- Case sensitivity handling
- Partial keyword matches
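For instance, a hedged sketch of the case-sensitivity case; the expected behavior is an assumption, and if the detector is case-sensitive today this test makes that explicit:

```python
def test_detect_is_case_insensitive():
    detector = IntentDetector()
    lower = detector.detect("install docker")
    upper = detector.detect("Install DOCKER")
    # Assumption: detection should not depend on input casing.
    assert [(i.action, i.target) for i in lower] == [(i.action, i.target) for i in upper]
```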
cortex/cli.py (2)
46-51: Fix comment placement and clarify initialization.

The comment on line 49 appears misplaced. It references "Lazy initialization" but is associated with `self.prefs_manager = None`, while the actual lazy-initialization logic is in `_get_prefs_manager()` (lines 395-399). The comment on line 50 should clarify that `self.role` stores the role name.

🔎 Proposed fix

```diff
 def __init__(self, verbose: bool = False, role: str = "default"):
     self.spinner_chars = ['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏']
     self.spinner_idx = 0
-    self.prefs_manager = None
-    self.role = role  # Lazy initialization
+    self.prefs_manager = None  # Lazy initialization via _get_prefs_manager()
+    self.role = role
     self.verbose = verbose
```
138-139: Consider adding a public save method to NotificationManager.

Lines 138, 144, and 163 directly call the private `_save_config()` method. While the inline comment acknowledges this, using a public API would be more maintainable.

Since `NotificationManager` is not part of this file review, consider creating a follow-up issue to add a public `save()` method to `NotificationManager` to avoid relying on private methods.

Also applies to: 144-144, 163-163
LLM/interpreter.py (1)
58-79: Add docstring updates for the modified system-prompt behavior.

The `_get_system_prompt` method now composes the final prompt from the role's `system_prompt` but lacks documentation explaining this behavior.

🔎 Proposed fix

```diff
 def _get_system_prompt(self) -> str:
+    """
+    Construct the system prompt by combining the role-specific prompt with base rules.
+
+    Returns:
+        The composed system prompt string
+    """
     base_prompt = """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.
```

cortex/roles/loader.py (1)
16-31: Add error handling for YAML parsing and improve the docstring.

The `load_role` function lacks comprehensive error handling for YAML parsing errors and file I/O issues. Additionally, the docstring should document the exceptions that can be raised.

🔎 Proposed fix

```diff
 def load_role(role_name: str) -> dict:
     """
     Load a role YAML by name.
     Falls back to default if role not found.
+
+    Args:
+        role_name: Name of the role to load (without .yaml extension)
+
+    Returns:
+        Dictionary containing role configuration
+
+    Raises:
+        RoleNotFoundError: If default role not found
+        yaml.YAMLError: If role file contains invalid YAML
     """
     roles_dir = get_roles_dir()
     role_file = os.path.join(roles_dir, f"{role_name}.yaml")

     if not os.path.exists(role_file):
         if role_name != "default":
             # Fallback to default role
             return load_role("default")
         raise RoleNotFoundError("Default role not found")

-    with open(role_file, "r", encoding="utf-8") as f:
-        return yaml.safe_load(f)
+    try:
+        with open(role_file, "r", encoding="utf-8") as f:
+            return yaml.safe_load(f)
+    except yaml.YAMLError as e:
+        raise yaml.YAMLError(f"Invalid YAML in role file {role_file}: {e}")
+    except IOError as e:
+        raise IOError(f"Failed to read role file {role_file}: {e}")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- .gitignore (0 hunks)
- LLM/interpreter.py (3 hunks)
- cortex/cli.py (4 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
- requirements.txt (1 hunks)
- src/intent/clarifier.py (1 hunks)
- src/intent/context.py (1 hunks)
- src/intent/detector.py (1 hunks)
- src/intent/llm_agent.py (1 hunks)
- src/intent/planner.py (1 hunks)
- src/test_clarifier.py (1 hunks)
- src/test_context.py (1 hunks)
- src/test_intent_detection.py (1 hunks)
- src/test_llm_agent.py (1 hunks)
- src/test_planner.py (1 hunks)
💤 Files with no reviewable changes (1)
- .gitignore
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- src/test_intent_detection.py
- src/intent/detector.py
- src/test_clarifier.py
- src/intent/planner.py
- cortex/cli.py
- src/test_llm_agent.py
- src/intent/clarifier.py
- src/test_context.py
- src/intent/context.py
- src/intent/llm_agent.py
- LLM/interpreter.py
- cortex/roles/loader.py
- src/test_planner.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
requirements.txt
🧬 Code graph analysis (10)
src/test_intent_detection.py (1)
- src/intent/detector.py (2): IntentDetector (14-50), detect (28-50)

src/test_clarifier.py (2)
- src/intent/clarifier.py (2): Clarifier (8-39), needs_clarification (14-39)
- src/intent/detector.py (2): IntentDetector (14-50), detect (28-50)

src/intent/planner.py (1)
- src/intent/detector.py (1): Intent (8-11)

src/test_llm_agent.py (2)
- src/intent/llm_agent.py (2): LLMIntentAgent (17-173), process (47-90)
- src/intent/clarifier.py (1): needs_clarification (14-39)

src/intent/clarifier.py (1)
- src/intent/detector.py (1): Intent (8-11)

src/test_context.py (2)
- src/intent/context.py (7): SessionContext (8-71), set_gpu (25-26), add_intents (35-36), add_installed (45-47), get_gpu (28-29), is_installed (49-50), get_previous_intents (38-39)
- src/intent/detector.py (1): Intent (8-11)

src/intent/context.py (1)
- src/intent/detector.py (1): Intent (8-11)

src/intent/llm_agent.py (5)
- src/intent/clarifier.py (2): Clarifier (8-39), needs_clarification (14-39)
- src/intent/context.py (4): SessionContext (8-71), add_clarification (56-57), add_intents (35-36), get_gpu (28-29)
- src/intent/detector.py (3): Intent (8-11), IntentDetector (14-50), detect (28-50)
- src/intent/planner.py (2): InstallationPlanner (6-51), build_plan (10-51)
- src/test_llm_agent.py (1): create (5-12)

LLM/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (16-31)

src/test_planner.py (2)
- src/intent/detector.py (1): Intent (8-11)
- src/intent/planner.py (2): InstallationPlanner (6-51), build_plan (10-51)
🔇 Additional comments (9)
cortex/roles/security.yaml (1)
1-7: LGTM! Well-structured security role definition.

The security role configuration is clear and concise. The system prompt appropriately emphasizes security best practices, least privilege, and risk mitigation.
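As a small usage sketch, the file is consumed as a plain dict via the loader added in this PR; `system_prompt` is the one key the interpreter reads, and any other keys are assumptions:

```python
from cortex.roles.loader import load_role

role = load_role("security")
# The interpreter prepends this text to its base command-generation rules.
assert "system_prompt" in role
print(role["system_prompt"])
```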
cortex/roles/default.yaml (1)
1-6: LGTM! Clean default role definition.

The default role configuration provides appropriate general-purpose guidance for the AI assistant. The system prompt balances safety, clarity, and conciseness effectively.
cortex/roles/devops.yaml (1)
1-6: LGTM! Well-crafted DevOps role definition.

The devops role configuration appropriately emphasizes infrastructure best practices. The focus on idempotency and infrastructure-as-code aligns well with DevOps principles.
src/test_context.py (1)
5-14: LGTM! Context storage test validates core functionality.

The test appropriately exercises SessionContext's key operations:
- GPU tracking
- Intent accumulation
- Package installation state
All assertions are correct and meaningful.
src/test_clarifier.py (1)
5-13: LGTM! Clarifier test validates expected behavior.

The test correctly verifies that the Clarifier produces a clarification question for ambiguous ML-related requests that don't match specific package keywords.
cortex/roles/loader.py (1)
9-13: Add a docstring for `get_roles_dir`.

The `get_roles_dir` function lacks a proper docstring explaining its purpose and return value.

🔎 Proposed fix

```diff
 def get_roles_dir():
     """
-    Returns the directory where built-in roles are stored.
+    Get the directory where built-in roles are stored.
+
+    Returns:
+        Absolute path to the roles directory
     """
     return os.path.dirname(__file__)
```

As per coding guidelines, docstrings are required for all public APIs.
Likely an incorrect or invalid review comment.
LLM/interpreter.py (1)
21-21: Add error handling for role loading.

The `load_role` function called at line 25 can raise `RoleNotFoundError` (if the default role is missing) or YAML parsing errors (if a role file contains invalid YAML), but these exceptions are not caught. Wrap the call in a try-except block and provide appropriate error messages to the CLI user.

src/intent/llm_agent.py (2)
1-14: LGTM: Clean imports with safe optional dependency handling.

The safe import pattern for the Anthropic SDK is appropriate for an optional dependency, and the fallback to `None` enables graceful degradation when the SDK is unavailable.

17-25: LGTM: Clear class documentation.

The docstring effectively describes the orchestrator's responsibilities and scope.
Force-pushed from b98145b to d912dbc.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/cli.py (1)
117-178: Add a docstring and consider exposing a public save method.

The `notify` method is a public API but lacks a docstring, which violates the coding guidelines. Additionally, the method calls the private `_save_config()` method on lines 138, 144, and 163. While the comments acknowledge this, consider requesting a public `save()` method from `NotificationManager` for better encapsulation.

🔎 Proposed improvements

Add a docstring:

```diff
 def notify(self, args):
-    """Handle notification commands"""
+    """
+    Handle notification commands (config, enable, disable, dnd, send).
+
+    Args:
+        args: Parsed command-line arguments containing notify_action and related parameters
+
+    Returns:
+        int: Exit code (0 for success, 1 for error)
+    """
```

For the private method usage, consider adding a feature request to expose a public save method in `NotificationManager`, then replace `mgr._save_config()` with `mgr.save()`.

As per coding guidelines, docstrings are required for all public APIs.
LLM/interpreter.py (1)
14-27: Add error handling for role loading and consider type hints.

The `load_role(role)` call on line 26 has no error handling. If the role file is malformed or the default role is missing, an unhandled exception will propagate. Consider wrapping this in a try-except block with a user-friendly error message.

Additionally, `self.role` lacks an explicit type hint. Based on the loader returning a dict, consider adding type hints for clarity.

🔎 Proposed improvements

```diff
 def __init__(
     self,
     api_key: str,
     provider: str = "openai",
     role: str = "default",
     model: Optional[str] = None,
 ):
     from cortex.roles.loader import load_role

     self.api_key = api_key
     self.provider = APIProvider(provider.lower())
     self.role_name = role
-    self.role = load_role(role)
-    self.system_prompt = self.role.get("system_prompt", "")
+    try:
+        self.role: Dict[str, Any] = load_role(role)
+        self.system_prompt = self.role.get("system_prompt", "")
+    except Exception as e:
+        raise ValueError(f"Failed to load role '{role}': {str(e)}")
```

As per coding guidelines, type hints are required.
♻️ Duplicate comments (1)
cortex/cli.py (1)
564-564: Fix PEP 8 formatting for the argument definition.

The `--role` argument definition lacks proper spacing around parameters, which violates PEP 8 style guidelines. This issue was previously flagged.

🔎 Proposed fix

```diff
-    parser.add_argument("--role",type=str,default="default",help="AI role/persona to use (e.g., default, security, devops)")
+    parser.add_argument("--role", type=str, default="default", help="AI role/persona to use (e.g., default, security, devops)")
```

As per coding guidelines, follow PEP 8 style guide.
🧹 Nitpick comments (1)
cortex/cli.py (1)
46-46: Clarify the misleading comment about lazy initialization.

The comment on line 49 states "# Lazy initialization" but this applies only to `prefs_manager` (line 49), not to `role` (line 50). The role is initialized immediately in the constructor, not lazily.

🔎 Proposed fix to clarify the comment

```diff
-    self.prefs_manager = None
-    self.role = role  # Lazy initialization
+    self.prefs_manager = None  # Lazy initialization
+    self.role = role
```

As per coding guidelines, follow PEP 8 style guide.
Also applies to: 49-50
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- LLM/interpreter.py (6 hunks)
- cortex/cli.py (4 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- cortex/roles/loader.py
- cortex/roles/default.yaml
- cortex/roles/devops.yaml
- cortex/roles/security.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- LLM/interpreter.py
- cortex/cli.py
🧬 Code graph analysis (2)
LLM/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (16-31)

cortex/cli.py (1)
- LLM/interpreter.py (1): CommandInterpreter (13-216)
🔇 Additional comments (4)
cortex/cli.py (2)

203-203: LGTM: Role parameter correctly passed to interpreter.

The role is properly passed from the CLI instance to the `CommandInterpreter`, enabling role-based prompt customization for install operations.

628-628: LGTM: CLI instantiation correctly wired with role parameter.

The CLI instance is correctly initialized with both `verbose` and `role` arguments from the parsed command-line args.

LLM/interpreter.py (2)
65-86: LGTM: Role-based system prompt composition is well-implemented.

The `_get_system_prompt` method correctly composes the final system prompt by prepending the role-specific prompt (if present) before the base command-generation rules. This allows role personas to specialize behavior while maintaining the core safety and formatting requirements.
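The composition described here, as a short sketch; the exact joining and the base-prompt text are abbreviated assumptions, not the PR's verbatim implementation:

```python
def _get_system_prompt(self) -> str:
    # Role persona first, base command-generation rules second.
    base_prompt = (
        "You are a Linux system command expert. Convert natural language "
        'requests into safe, validated bash commands. Respond with JSON: {"commands": [...]}'
    )
    if self.system_prompt:  # set in __init__ from role["system_prompt"]
        return f"{self.system_prompt}\n\n{base_prompt}"
    return base_prompt
```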
40-64: LGTM: Formatting improvements enhance code readability.

The formatting changes throughout the file (blank lines, trailing commas, consistent spacing) improve PEP 8 compliance and readability without altering functionality.
As per coding guidelines, follow PEP 8 style guide.
Also applies to: 87-150
Hey @pavanimanchala53! This PR has merge conflicts. Can you please rebase on the latest?
Force-pushed from d912dbc to fe6976d.
Actionable comments posted: 2
♻️ Duplicate comments (1)
cortex/roles/loader.py (1)
5-6: Add a docstring to the exception class.

The `RoleNotFoundError` exception class lacks a docstring explaining when it's raised.

🔎 Proposed fix

```diff
 class RoleNotFoundError(Exception):
+    """Raised when a role file cannot be found and no default fallback exists."""
     pass
```

As per coding guidelines, docstrings are required for all public APIs.
🧹 Nitpick comments (3)
cortex/roles/loader.py (1)
16-31: Remove the unnecessary file mode argument.

The function logic is correct, including the fallback mechanism and safe YAML parsing. However, line 30 specifies an unnecessary `"r"` mode argument (it's the default).

🔎 Proposed fix for style improvement

```diff
-    with open(role_file, "r", encoding="utf-8") as f:
+    with open(role_file, encoding="utf-8") as f:
         return yaml.safe_load(f)
```

Note: This is flagged by Ruff UP015 static analysis.
cortex/cli.py (2)
34-40: Misleading comment and potential dead code.

Line 38: The comment `# Lazy initialization` is misleading since `self.role` is directly assigned, not lazily initialized. Lazy initialization would involve computing the value only when first accessed.

Line 40: `self.offline = False` appears to be unused, based on the PR summary indicating offline mode was removed. If this is dead code, it should be removed.

🔎 Proposed fix

```diff
     self.prefs_manager = None
-    self.role = role  # Lazy initialization
+    self.role = role
     self.verbose = verbose
-    self.offline = False
```

106-172: Good implementation with a minor architectural concern.

The notify method is well-structured with proper validation and error handling. The time-format validation (lines 149-154) is a good defensive practice.

However, line 133 calls a private method `_save_config()` on NotificationManager. While the comment acknowledges this, consider requesting a public `save()` method from NotificationManager to maintain proper encapsulation.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cortex/cli.py (4 hunks)
- cortex/llm/interpreter.py (4 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- cortex/roles/security.yaml
- cortex/roles/default.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- cortex/cli.py
- cortex/roles/loader.py
🧬 Code graph analysis (2)
cortex/llm/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (16-31)

cortex/cli.py (1)
- cortex/llm/interpreter.py (1): CommandInterpreter (17-222)
🪛 GitHub Actions: CI
cortex/roles/loader.py
[error] 1-1: ruff check failed: Import block is un-sorted or un-formatted.
🪛 GitHub Actions: Cortex Automation
cortex/llm/interpreter.py
[error] 1-1: CommandInterpreter.init() got an unexpected keyword argument 'cache'. This indicates a mismatch between tests and the constructor API.
[error] 1-1: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt' when accessing _get_system_prompt. The class may not initialize system_prompt properly.
🪛 GitHub Check: Lint
cortex/roles/loader.py
[failure] 30-30: Ruff (UP015)
cortex/roles/loader.py:30:26: UP015 Unnecessary mode argument
[failure] 1-2: Ruff (I001)
cortex/roles/loader.py:1:1: I001 Import block is un-sorted or un-formatted
🔇 Additional comments (7)
cortex/roles/devops.yaml (1)
1-6: LGTM! Well-defined DevOps role.

The YAML structure is clean and the system prompt appropriately emphasizes DevOps principles (reliability, automation, scalability, idempotency, IaC).
cortex/cli.py (3)
337-337: LGTM! Role integration is correct.

The CommandInterpreter is now properly instantiated with the role parameter from the CLI instance.

812-816: LGTM! Proper PEP 8 formatting.

The `--role` argument definition now has proper spacing around parameters, complying with PEP 8 style guidelines. The past review concern appears to have been addressed.

904-904: LGTM! Proper role wiring.

The CLI is correctly instantiated with the parsed role argument, completing the role integration from command line to CLI instance.
cortex/llm/interpreter.py (3)
24-49: Pipeline failure: tests need updating for API changes.

The constructor correctly implements role loading and system prompt initialization. However, pipeline failures indicate:

- Tests are passing a `cache` parameter that no longer exists in the constructor
- AttributeError when accessing `system_prompt` (though line 37 should always initialize it)

The code implementation looks correct, but the tests need to be updated to match the new API.

Please update tests to:

- Remove the `cache` parameter from `CommandInterpreter` instantiation
- Verify that `system_prompt` is properly accessible after initialization (a sketch follows this list)

Based on pipeline failure logs.
71-91: LGTM! Well-structured system prompt composition.

The system prompt correctly combines role-specific context with base command generation rules. The ordering (role prompt → base prompt) appropriately sets context before technical constraints. The base prompt includes good safety practices like avoiding destructive operations.

211-216: Acceptable formatting for the method signature.

The signature formatting with `self` on a separate line is unusual but not incorrect per PEP 8. Type hints are properly specified, including the union type for `system_info` and the return type.
Force-pushed from 66a0041 to 29dcf89.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/cli.py (1)
106-106: Add type hints to the method signature.

The `notify` method is missing type hints for its parameter and return type.

🔎 Proposed fix

```diff
-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
         """Handle notification commands"""
```

As per coding guidelines, type hints are required in Python code.
cortex/llm/interpreter.py (1)
193-209: Add a docstring to the public method.

The `parse` method is missing a docstring explaining its parameters, return value, and behavior.

🔎 Proposed fix

```diff
 def parse(self, user_input: str, validate: bool = True) -> list[str]:
+    """Parse natural language input into executable shell commands.
+
+    Args:
+        user_input: Natural language description of desired operation
+        validate: Whether to validate commands for dangerous patterns
+
+    Returns:
+        List of executable shell commands
+
+    Raises:
+        ValueError: If input is empty or provider is unsupported
+        RuntimeError: If API call fails
+    """
     if not user_input or not user_input.strip():
         raise ValueError("User input cannot be empty")
```

As per coding guidelines, docstrings are required for all public APIs.
♻️ Duplicate comments (2)
cortex/roles/loader.py (2)
6-7: Add a docstring to the exception class.

The `RoleNotFoundError` exception class is missing a docstring explaining when it's raised.

🔎 Proposed fix

```diff
 class RoleNotFoundError(Exception):
+    """Raised when a role file cannot be found and no default fallback exists."""
     pass
```

As per coding guidelines, docstrings are required for all public APIs.

10-14: Add a return type hint.

The function signature is missing a return type annotation.

🔎 Proposed fix

```diff
-def get_roles_dir():
+def get_roles_dir() -> str:
     """
     Returns the directory where built-in roles are stored.
     """
     return os.path.dirname(__file__)
```

As per coding guidelines, type hints are required in Python code.
🧹 Nitpick comments (2)
cortex/cli.py (1)
34-40: Fix the misleading comment.

The comment "# Lazy initialization" on line 38 is misleading. The role is stored immediately, not lazily initialized. The comment likely belongs with `self.prefs_manager = None` on line 37.

🔎 Proposed fix

```diff
-    self.prefs_manager = None
-    self.role = role  # Lazy initialization
+    self.prefs_manager = None  # Lazy initialization
+    self.role = role
```

cortex/llm/interpreter.py (1)
211-216: Unconventional method signature formatting.

The method signature has `self` on its own line (line 212), which is unconventional Python style. While not incorrect, the standard convention is to keep `self` on the same line as `def`.

🔎 Proposed fix

```diff
 def parse_with_context(
-    self,
-    user_input: str,
+    self, user_input: str,
     system_info: dict[str, Any] | None = None,
     validate: bool = True,
 ) -> list[str]:
+    """Parse natural language with additional system context.
+
+    Args:
+        user_input: Natural language description of desired operation
+        system_info: Optional dictionary of system context to include
+        validate: Whether to validate commands for dangerous patterns
+
+    Returns:
+        List of executable shell commands
+    """
```

Also adds the missing docstring required by coding guidelines.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- cortex/cli.py (4 hunks)
- cortex/llm/interpreter.py (4 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
- examples/parallel_llm_demo.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- cortex/roles/default.yaml
- cortex/roles/devops.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- examples/parallel_llm_demo.py
- cortex/cli.py
- cortex/roles/loader.py
- cortex/llm/interpreter.py
🧬 Code graph analysis (3)
examples/parallel_llm_demo.py (1)
- cortex/llm_router.py (1): query_multiple_packages (725-769)

cortex/cli.py (1)
- cortex/llm/interpreter.py (1): CommandInterpreter (17-222)

cortex/llm/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (17-32)
🪛 GitHub Actions: Cortex Automation
cortex/llm/interpreter.py
[error] 89-89: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'. Did you mean: '_get_system_prompt'?
🔇 Additional comments (13)
examples/parallel_llm_demo.py (3)
15-15: LGTM! PEP 8-compliant import grouping.

The blank line correctly separates standard library imports from local application imports.

39-39: LGTM! Improved readability.

The single-line function call is concise and maintains clarity.

213-215: LGTM! PEP 8-compliant line wrapping.

The multi-line string formatting improves readability while maintaining the calculation logic.
cortex/roles/loader.py (2)
1-3: LGTM! Import formatting resolved.

The imports are now properly grouped and separated: standard library (`os`) followed by a blank line, then third-party (`yaml`). This resolves the previous ruff I001 violation.

17-32: LGTM! Well-implemented role loading with proper fallback.

The function correctly (a usage sketch follows this list):

- Uses type hints and includes a docstring
- Implements fallback to the default role when the requested role is missing
- Prevents infinite recursion by raising `RoleNotFoundError` when the default is missing
- Uses `yaml.safe_load` for security
- Properly handles file I/O with a context manager and UTF-8 encoding
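These behaviors can be exercised directly; a small sketch in which the unknown role name is arbitrary:

```python
from cortex.roles.loader import load_role

role = load_role("nonexistent-persona")  # silently falls back to default.yaml
assert role == load_role("default")

# RoleNotFoundError is raised only if default.yaml itself is missing,
# which is what prevents infinite recursion in the fallback path.
```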
cortex/roles/security.yaml (1)
1-7: LGTM! Well-defined security role.

The YAML structure is valid and the security-focused system prompt appropriately emphasizes safety, least privilege, and compliance. The role integrates cleanly with the loader's fallback mechanism.
cortex/cli.py (5)
337-337: LGTM! Correct role integration.

The role parameter is properly passed from CLI to the CommandInterpreter, enabling role-based system prompts throughout the installation flow.

769-769: LGTM! Help text addition.

The notify command is properly documented in the help table with a clear description.

812-816: LGTM! PEP 8 formatting resolved.

The `--role` argument definition now follows PEP 8 style with proper spacing after commas. This resolves the previous formatting issue.

904-904: LGTM! Proper CLI integration.

The role parameter is correctly passed from command-line arguments to the CortexCLI instance, completing the integration chain.

929-930: LGTM! Proper command dispatch.

The notify command is correctly integrated into the main command dispatch logic.

cortex/llm/interpreter.py (2)
cortex/llm/interpreter.py (2)
93-155: LGTM! Well-implemented provider-specific methods.The provider-specific call methods are properly implemented with:
- Consistent error handling across all providers
- Appropriate temperature and token limits
- Proper integration with system prompts
- Good error messages
71-91: Add missing docstrings to__init__and_get_system_promptmethods.The
self.system_promptattribute is properly initialized in__init__(line 35) viaself.system_prompt = self.role.get("system_prompt", ""), so there is no AttributeError risk. However, docstrings are required for all public modules, functions, classes, and methods. Both the__init__method and the_get_system_prompt()method lack docstrings and should include them per PEP 257, which requires public methods including the init constructor to have docstrings. The type hints are correctly in place.Likely an incorrect or invalid review comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
24-37: Update tests to remove the 'cache' parameter from CommandInterpreter calls.

Tests in `tests/test_interpreter.py` are passing a `cache` keyword argument to `CommandInterpreter.__init__()` that was removed from the method signature. This causes initialization to fail before `self.system_prompt` is set, resulting in the pipeline errors `TypeError: unexpected keyword argument 'cache'` and `AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'`.

Remove the `cache=mock_cache` argument from the following test methods (lines 148, 165, 182, and 236):

- test_parse_with_validation
- test_parse_without_validation
- test_parse_with_context
- test_parse_docker_installation
♻️ Duplicate comments (2)
cortex/roles/loader.py (2)
6-7: Add a docstring to the exception class.

The `RoleNotFoundError` exception class lacks a docstring explaining when it's raised.

🔎 Proposed fix

```diff
 class RoleNotFoundError(Exception):
+    """Raised when a role file cannot be found and no default fallback exists."""
     pass
```

As per coding guidelines, docstrings are required for all public APIs.

10-14: Add a return type hint.

The function signature is missing a return type annotation.

🔎 Proposed fix

```diff
-def get_roles_dir():
+def get_roles_dir() -> str:
     """
     Returns the directory where built-in roles are stored.
     """
     return os.path.dirname(__file__)
```

As per coding guidelines, type hints are required in Python code.
🧹 Nitpick comments (3)
cortex/cli.py (1)
106-172: Consider exposing a public save method in NotificationManager.

The notify() method calls `mgr._save_config()` at lines 133, 139, and 158, which is a private method. While this works, it would be better to expose a public `save_config()` method in NotificationManager for cleaner encapsulation.

💡 Suggested approach

In `NotificationManager`, add:

```python
def save_config(self):
    """Save the current configuration to disk."""
    self._save_config()
```

Then update calls in this file to use `mgr.save_config()` instead of `mgr._save_config()`.
31-32: Add error handling for YAML parsing and validate role structure.The code doesn't handle potential YAML parsing errors or validate that the loaded role has the expected structure (e.g., contains a
system_promptkey).🔎 Proposed enhancement
with open(role_file, encoding="utf-8") as f: - return yaml.safe_load(f) + try: + role_data = yaml.safe_load(f) + if not isinstance(role_data, dict): + raise ValueError(f"Role file {role_name}.yaml must contain a dictionary") + return role_data + except yaml.YAMLError as e: + raise ValueError(f"Failed to parse role file {role_name}.yaml: {e}")cortex/llm/interpreter.py (1)
31-37: Consider error handling for role loading.If
load_role(role)raises an exception (e.g.,RoleNotFoundErroror YAML parsing error), theCommandInterpreterinitialization will fail. Consider wrapping the role loading in a try-except block to provide a more informative error message.💡 Suggested approach
+ try: self.role = load_role(role) self.system_prompt = self.role.get("system_prompt", "") + except Exception as e: + raise ValueError(f"Failed to load role '{role}': {e}")
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- cortex/cli.py (4 hunks)
- cortex/llm/interpreter.py (4 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
- examples/parallel_llm_demo.py (3 hunks)
- test_parallel_llm.py (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- cortex/roles/default.yaml
- cortex/roles/security.yaml
- examples/parallel_llm_demo.py
- cortex/roles/devops.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- test_parallel_llm.py
- cortex/roles/loader.py
- cortex/cli.py
- cortex/llm/interpreter.py
🧬 Code graph analysis (3)
test_parallel_llm.py (1)
- cortex/llm_router.py (4): query_multiple_packages (725-769), diagnose_errors_parallel (772-819), check_hardware_configs_parallel (822-867), acomplete (441-505)

cortex/cli.py (1)
- cortex/llm/interpreter.py (1): CommandInterpreter (17-222)

cortex/llm/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (17-32)
🪛 GitHub Actions: Cortex Automation
cortex/llm/interpreter.py
[error] 1-1: Pytest failure: CommandInterpreter.init() got an unexpected keyword argument 'cache'. This suggests a mismatch between the tests and the CommandInterpreter constructor signature (likely API change).
[error] 89-89: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt' that is accessed in _get_system_prompt. This is likely a consequence of the constructor not initializing system_prompt due to the signature mismatch.
🔇 Additional comments (6)
test_parallel_llm.py (1)

51-51: LGTM! Clean formatting improvements.

The changes remove unnecessary f-string prefixes where no interpolation occurs and consolidate multi-line function calls for better readability. These are purely cosmetic improvements with no functional impact.

Also applies to: 98-98, 112-112, 146-146, 173-173, 184-184, 195-195, 231-233, 243-243
cortex/cli.py (3)

34-34: LGTM! Clean role parameter addition.

The role parameter is properly added with a sensible default value and stored as an instance variable for later use.

Also applies to: 38-38

337-337: LGTM! Role parameter properly propagated.

The role is correctly threaded from the CLI argument through to the CommandInterpreter initialization, enabling the role-based system prompt functionality.

Also applies to: 812-816, 904-904

769-769: LGTM! Help text updated for the new notify command.

The help table correctly includes the new notify command.
cortex/llm/interpreter.py (2)

71-91: LGTM! Clean system prompt composition.

The `_get_system_prompt()` method correctly prepends the role-specific system prompt to the base prompt when available, enabling role-based behavior customization.

211-216: LGTM! Method signature properly formatted.

The explicit `self` parameter formatting aligns with the codebase style.
Force-pushed from 29dcf89 to 295475c.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/llm/interpreter.py (2)
24-37: Fix tests passing a `cache` parameter to the CommandInterpreter constructor.

The `__init__` method does not accept a `cache` parameter, but tests on lines 148, 165, 182, and 236 pass it, causing `TypeError: __init__() got an unexpected keyword argument 'cache'`. Either add the `cache` parameter to the constructor signature if caching support is intended, or update tests to use the post-construction pattern (like line 96: `interpreter.cache = None`).

71-91: Fix the test to properly initialize attributes or avoid unsafe `__new__` usage.

The test `test_system_prompt_format` uses `CommandInterpreter.__new__(CommandInterpreter)`, which bypasses `__init__`, leaving `self.system_prompt` uninitialized. When `_get_system_prompt()` accesses this attribute on line 89, it raises `AttributeError`. Either use proper initialization in the test or assign attributes manually before calling the method, as the sketch below does.

Additionally, add a docstring or comment to `_get_system_prompt()` to document its purpose, consistent with `_call_ollama()`, which has documentation.
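A minimal sketch of the safer test setup; the attribute names follow this PR, and the asserted phrase is taken from the base prompt shown elsewhere in this review:

```python
def test_system_prompt_format():
    # __new__ bypasses __init__, so initialize what _get_system_prompt() reads.
    interp = CommandInterpreter.__new__(CommandInterpreter)
    interp.system_prompt = ""
    prompt = interp._get_system_prompt()
    assert "Linux system command expert" in prompt
```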
♻️ Duplicate comments (2)
cortex/roles/loader.py (2)
5-6: Add a docstring to the exception class.

The `RoleNotFoundError` exception class lacks a docstring. As per coding guidelines, docstrings are required for all public APIs.

🔎 Proposed fix

```diff
 class RoleNotFoundError(Exception):
+    """Raised when a role file cannot be found and no default fallback exists."""
     pass
```

10-14: Add a return type hint.

The function is missing a return type annotation. As per coding guidelines, type hints are required in Python code.

🔎 Proposed fix

```diff
-def get_roles_dir():
+def get_roles_dir() -> str:
     """
     Returns the directory where built-in roles are stored.
     """
     return os.path.dirname(__file__)
```
🧹 Nitpick comments (1)
cortex/llm/interpreter.py (1)
1-8: Remove unused imports.

After removing the caching functionality, these imports appear unused:

- `sqlite3` (line 3)
- `Optional` from typing (line 5) - the code uses `| None` syntax instead
- the `SemanticCache` TYPE_CHECKING import (lines 7-8)

🔎 Proposed fix

```diff
 import json
 import os
-import sqlite3
 from enum import Enum
-from typing import TYPE_CHECKING, Any, Optional
+from typing import Any

-if TYPE_CHECKING:
-    from cortex.semantic_cache import SemanticCache
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- cortex/cli.py (4 hunks)
- cortex/llm/interpreter.py (4 hunks)
- cortex/llm_router.py (3 hunks)
- cortex/roles/default.yaml (1 hunks)
- cortex/roles/devops.yaml (1 hunks)
- cortex/roles/loader.py (1 hunks)
- cortex/roles/security.yaml (1 hunks)
- examples/parallel_llm_demo.py (3 hunks)
- test_parallel_llm.py (9 hunks)
- tests/test_llm_router.py (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- cortex/llm_router.py
- test_parallel_llm.py
- tests/test_llm_router.py
🚧 Files skipped from review as they are similar to previous changes (3)
- cortex/roles/default.yaml
- cortex/roles/devops.yaml
- cortex/roles/security.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- examples/parallel_llm_demo.py
- cortex/cli.py
- cortex/roles/loader.py
🧬 Code graph analysis (3)
cortex/llm/interpreter.py (1)
- cortex/roles/loader.py (1): load_role (17-32)

examples/parallel_llm_demo.py (1)
- cortex/llm_router.py (1): query_multiple_packages (725-772)

cortex/cli.py (1)
- cortex/llm/interpreter.py (1): CommandInterpreter (17-222)
🪛 GitHub Actions: CI
cortex/llm/interpreter.py
[error] 1-1: CommandInterpreter.init() got an unexpected keyword argument 'cache' during pytest execution. Tests pass cache=mock_cache but implementation signature does not accept it.
[error] 1-1: TypeError: init() got an unexpected keyword argument 'cache' (raised while running tests/test_interpreter.py::TestCommandInterpreter::test_parse_docker_installation).
[error] 1-1: TypeError: init() got an unexpected keyword argument 'cache' (raised while running tests/test_interpreter.py::TestCommandInterpreter::test_parse_with_context).
[error] 1-1: TypeError: init() got an unexpected keyword argument 'cache' (raised while running tests/test_interpreter.py::TestCommandInterpreter::test_parse_with_validation).
[error] 1-1: TypeError: init() got an unexpected keyword argument 'cache' (raised while running tests/test_interpreter.py::TestCommandInterpreter::test_parse_without_validation).
[error] 89-89: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'. Did you mean: '_get_system_prompt'?
🪛 GitHub Actions: Cortex Automation
cortex/llm/interpreter.py
[error] 89-89: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'. Did you mean: '_get_system_prompt'?
🔇 Additional comments (7)
examples/parallel_llm_demo.py (1)
15-15: LGTM!

The formatting improvements (blank line after imports, condensed function call, and multi-line print statement) enhance readability without changing behavior.
Also applies to: 39-39, 213-215
cortex/cli.py (5)
34-40: LGTM!

The role parameter is properly added with a type hint and default value. The constructor correctly stores the role for later use with the `CommandInterpreter`.

106-172: LGTM!

The `notify` method has proper error handling, input validation (including the time format for DND), and consistent return codes. The acknowledgment of using the private `_save_config` method in comments is appreciated.

337-337: LGTM!

The `CommandInterpreter` instantiation correctly passes the new `role` parameter, enabling role-based system prompts.

811-816: LGTM!

The `--role` flag is properly defined with correct PEP 8 formatting, an appropriate default value, and a helpful description with examples.

904-904: LGTM!

The `CortexCLI` instantiation correctly passes the `role` argument from command-line parsing, completing the role integration through the CLI layer.

cortex/llm/interpreter.py (1)
211-216: LGTM!

The method signature formatting is clean and follows PEP 8 style guidelines with proper type annotations.
Force-pushed from 86f4d9e to 4573c96.
hi @mikejmorgan-ai ,
Force-pushed from af5f6c3 to d299c43.

Force-pushed from 26fe7fd to 54b676e.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
cortex/config_manager.py (1)
283-291: Fix TOCTOU race condition: move the existence check inside the lock.

The file existence check at line 283 occurs outside the lock, creating a race-condition window where another thread could delete, create, or modify the file between the check and the actual read operation. This can lead to:
- Missing newly created preferences (check returns False, another thread creates file, returns empty dict)
- Reading partial/corrupted data (check passes, another thread modifies file during read)
- Relying on exception handling for normal concurrency scenarios (inefficient and unclear intent)
🔎 Proposed fix to move existence check inside the lock
Option 1: Move existence check inside lock
```diff
 def _load_preferences(self) -> dict[str, Any]:
     """
     Load user preferences from ~/.cortex/preferences.yaml.

     Returns:
         Dictionary of preferences
     """
-    if self.preferences_file.exists():
-        try:
-            with self._file_lock:
+    try:
+        with self._file_lock:
+            if self.preferences_file.exists():
                 with open(self.preferences_file) as f:
                     return yaml.safe_load(f) or {}
-        except Exception:
-            pass
+    except Exception:
+        pass
     return {}
```

Option 2: Eliminate the existence check (simpler and more Pythonic)
```diff
 def _load_preferences(self) -> dict[str, Any]:
     """
     Load user preferences from ~/.cortex/preferences.yaml.

     Returns:
         Dictionary of preferences
     """
-    if self.preferences_file.exists():
-        try:
-            with self._file_lock:
-                with open(self.preferences_file) as f:
-                    return yaml.safe_load(f) or {}
-        except Exception:
-            pass
+    try:
+        with self._file_lock:
+            with open(self.preferences_file) as f:
+                return yaml.safe_load(f) or {}
+    except FileNotFoundError:
+        return {}
+    except Exception:
+        pass
     return {}
```

Option 2 is preferred as it follows the EAFP (Easier to Ask for Forgiveness than Permission) principle and is more Pythonic.
cortex/demo.py (1)
27-34: Same None-check issue in GPU recommendations.

Line 28 has the same potential `TypeError` as line 21. Apply the same fix here.

🔎 Proposed fix
TypeErroras line 21. Apply the same fix here.🔎 Proposed fix
print("\n🤖 Model Recommendations:") - if gpu and len(gpu) > 0: + if gpu is not None and len(gpu) > 0: print("• LLaMA-3-8B → Optimized for your GPU") print("• Mistral-7B → High performance inference") else:cortex/hardware_detection.py (1)
645-653: `get_detector()` singleton is not thread-safe.

While the instance-level cache operations were made thread-safe, the module-level `get_detector()` singleton still uses the unprotected pattern. This is inconsistent with the thread-safe patterns applied to `get_history()` in transaction_history.py.

🔎 Proposed fix
# Convenience functions _detector_instance = None +_detector_lock = threading.Lock() def get_detector() -> HardwareDetector: - """Get the global hardware detector instance.""" + """Get the global hardware detector instance (thread-safe).""" global _detector_instance if _detector_instance is None: - _detector_instance = HardwareDetector() + with _detector_lock: + if _detector_instance is None: + _detector_instance = HardwareDetector() return _detector_instancecortex/installation_history.py (2)
284-312: Critical:conn.commit()called outside connection context.Line 306 calls
conn.commit()after thewithblock has exited, meaning the connection has already been returned to the pool. This will likely cause errors or data corruption.🔎 Proposed fix
try: with self._pool.get_connection() as conn: cursor = conn.cursor() cursor.execute( """ INSERT INTO installations VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) """, ( install_id, timestamp, operation_type.value, json.dumps(packages), InstallationStatus.IN_PROGRESS.value, json.dumps([asdict(s) for s in before_snapshot]), None, # after_snapshot - will be updated json.dumps(commands), None, # error_message 1, # rollback_available None, # duration ), ) - conn.commit() + conn.commit() logger.info(f"Installation {install_id} recorded") return install_id
314-363: Critical: Database operations outside connection context.Lines 333-358 perform database operations (
cursor.execute,conn.commit()) after exiting thewithblock at line 332. The connection has been returned to the pool and may be in use by another thread.🔎 Proposed fix
def update_installation( self, install_id: str, status: InstallationStatus, error_message: str | None = None ): """Update installation record after completion""" try: with self._pool.get_connection() as conn: cursor = conn.cursor() # Get packages from record cursor.execute( "SELECT packages, timestamp FROM installations WHERE id = ?", (install_id,) ) result = cursor.fetchone() if not result: logger.error(f"Installation {install_id} not found") return packages = json.loads(result[0]) - start_time = datetime.datetime.fromisoformat(result[1]) - duration = (datetime.datetime.now() - start_time).total_seconds() - - # Create after snapshot - after_snapshot = self._create_snapshot(packages) - - # Update record - cursor.execute( - """ - UPDATE installations - SET status = ?, - after_snapshot = ?, - error_message = ?, - duration_seconds = ? - WHERE id = ? - """, - ( - status.value, - json.dumps([asdict(s) for s in after_snapshot]), - error_message, - duration, - install_id, - ), - ) - - conn.commit() + start_time = datetime.datetime.fromisoformat(result[1]) + duration = (datetime.datetime.now() - start_time).total_seconds() + + # Create after snapshot + after_snapshot = self._create_snapshot(packages) + + # Update record + cursor.execute( + """ + UPDATE installations + SET status = ?, + after_snapshot = ?, + error_message = ?, + duration_seconds = ? + WHERE id = ? + """, + ( + status.value, + json.dumps([asdict(s) for s in after_snapshot]), + error_message, + duration, + install_id, + ), + ) + + conn.commit() logger.info(f"Installation {install_id} updated: {status.value}") except Exception as e: logger.error(f"Failed to update installation: {e}") raisecortex/llm/interpreter.py (1)
111-155: Critical: Unresolved merge conflict markers will cause syntax errors.

The code contains Git merge conflict markers (`<<<<<<< HEAD`, `=======`, `>>>>>>>`) that were not resolved. This will cause a `SyntaxError` at runtime and prevent the module from being imported.

🔎 Proposed fix: Remove the conflict markers and keep the intended version
-<<<<<<< HEAD - def _get_system_prompt(self) -> str: - base_prompt = """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands. -======= - def _get_system_prompt(self, simplified: bool = False) -> str: - """Get system prompt for command interpretation. - - Args: - simplified: If True, return a shorter prompt optimized for local models - """ - if simplified: - return """You must respond with ONLY a JSON object. No explanations, no markdown, no code blocks. - -Format: {"commands": ["command1", "command2"]} - -Example input: install nginx -Example output: {"commands": ["sudo apt update", "sudo apt install -y nginx"]} - -Rules: -- Use apt for Ubuntu packages -- Add sudo for system commands -- Return ONLY the JSON object""" - - return """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands. ->>>>>>> 344b109 (Add Ollama integration with setup script, LLM router support, and comprehensive documentation) + def _get_system_prompt(self, simplified: bool = False) -> str: + """Get system prompt for command interpretation. + + Args: + simplified: If True, return a shorter prompt optimized for local models + """ + if simplified: + return """You must respond with ONLY a JSON object. No explanations, no markdown, no code blocks. + +Format: {"commands": ["command1", "command2"]} + +Example input: install nginx +Example output: {"commands": ["sudo apt update", "sudo apt install -y nginx"]} + +Rules: +- Use apt for Ubuntu packages +- Add sudo for system commands +- Return ONLY the JSON object""" + + base_prompt = """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.docs/LLM_INTEGRATION.md (1)
105-112: Update legacy import path in documentation.The legacy example shows
from LLM import CommandInterpreterbut there is no top-levelLLMmodule. The correct import isfrom cortex.llm.interpreter import CommandInterpreterorfrom cortex.llm import CommandInterpreter. This incorrect import would fail when users attempt to follow the example.
🧹 Nitpick comments (25)
test/integration/docker_utils.py (1)
33-46: Consider applying consistent encoding parameters.

For consistency with `run_in_docker()`, consider adding explicit `encoding="utf-8"` and `errors="replace"` parameters to the subprocess calls in this function. While `docker --version` and `docker info` are unlikely to produce encoding issues, uniform handling across all subprocess calls improves maintainability.

🔎 Proposed refactor for consistency

```diff
 try:
     subprocess.run(
         [docker_path, "--version"],
         check=True,
         capture_output=True,
         text=True,
+        encoding="utf-8",
+        errors="replace",
         timeout=5,
     )
     subprocess.run(
         [docker_path, "info"],
         check=True,
         capture_output=True,
         text=True,
+        encoding="utf-8",
+        errors="replace",
         timeout=5,
     )
     return True
```

test_parallel_llm.py (1)
231-233: Consider simplifying for readability.

The dict comprehension `{k: v for k, v in req.items() if k != "task_type"}` makes this line harder to parse. For a test file simulating sequential execution, explicit parameter unpacking would be clearer:

```python
await router.acomplete(
    messages=req["messages"], task_type=req["task_type"], max_tokens=req["max_tokens"]
)
```

This makes the intent immediately obvious and reduces cognitive overhead.
.github/workflows/automation.yml (1)
73-76: Type checking with continue-on-error should be temporary.

While `continue-on-error: true` is useful for gradually introducing type checking, it means type errors won't fail CI. Consider creating a follow-up issue to address the type errors and remove this flag.

Do you want me to open a new issue to track the removal of `continue-on-error` once type hints are fully implemented?
10-15: Line number references may become stale.The document references specific line numbers in source files (e.g.,
transaction_history.pylines 656-672). These will become outdated as code evolves. Consider using function/class names as anchors or removing specific line numbers in favor of descriptive references.cortex/dependency_resolver.py (1)
101-104: Missing type hints for return value.As per coding guidelines, type hints are required. The method signature should include the return type.
🔎 Suggested fix
- def is_package_installed(self, package_name: str) -> bool: + def is_package_installed(self, package_name: str) -> bool:The current signature already has the return type hint, so this is actually correct. The docstring update to note thread-safety is a good addition.
docs/PYTHON_314_THREAD_SAFETY_AUDIT.md (2)
661-671: Add language specifier to code blocks for syntax highlighting.Several code blocks in the document are missing language specifiers. Adding them improves readability and enables syntax highlighting.
🔎 Suggested fix for ASCII diagrams
-``` +```text User Request ↓ [LLMRouter] (sync) → [Claude/Kimi API]
424-426: Format bare URLs as markdown links.Convert bare URLs to proper markdown link format for better rendering and accessibility.
🔎 Suggested fix
```diff
-- SQLite WAL Mode: https://www.sqlite.org/wal.html
-- Python Threading: https://docs.python.org/3/library/threading.html
-- Double-Checked Locking: https://en.wikipedia.org/wiki/Double-checked_locking
+- [SQLite WAL Mode](https://www.sqlite.org/wal.html)
+- [Python Threading](https://docs.python.org/3/library/threading.html)
+- [Double-Checked Locking](https://en.wikipedia.org/wiki/Double-checked_locking)
```

cortex/transaction_history.py (1)
25-27: Consider moving the `threading` import to the top with other standard library imports. The `threading` import is placed after the `logger` initialization. While this works, PEP 8 recommends grouping all imports at the top of the file. Consider moving it to lines 10-20 with the other standard library imports.
🔎 Suggested fix
Move the import to the standard library import section:
```diff
 import hashlib
 import json
 import logging
 import os
 import sqlite3
 import subprocess
+import threading
 from dataclasses import asdict, dataclass, field
 from datetime import datetime
 from enum import Enum
 from pathlib import Path
 from typing import Any

 logger = logging.getLogger(__name__)

-import threading  # For thread-safe singleton pattern
-
-
 class TransactionType(Enum):
```

docs/PYTHON_314_COMPLETE_IMPLEMENTATION.md (1)
424-426: Format references as proper markdown links. The bare URLs in the References section should be formatted as markdown links for better rendering.
🔎 Suggested fix
```diff
 ## References

-- PEP 703: Making the Global Interpreter Lock Optional
-- SQLite WAL Mode: https://www.sqlite.org/wal.html
-- Python Threading: https://docs.python.org/3/library/threading.html
-- Double-Checked Locking: https://en.wikipedia.org/wiki/Double-checked_locking
+- [PEP 703: Making the Global Interpreter Lock Optional](https://peps.python.org/pep-0703/)
+- [SQLite WAL Mode](https://www.sqlite.org/wal.html)
+- [Python Threading](https://docs.python.org/3/library/threading.html)
+- [Double-Checked Locking](https://en.wikipedia.org/wiki/Double-checked_locking)
```

cortex/kernel_features/kv_cache_manager.py (1)
125-128: Consider encapsulating database deletion in CacheDatabase. The `destroy_pool` method directly accesses `self.db._pool`, which breaks encapsulation. While functional, consider adding a `delete_pool(name)` method to `CacheDatabase` to keep database operations within that class.
🔎 Optional refactor for better encapsulation
Add a method to
`CacheDatabase` class:

```python
def delete_pool(self, name: str):
    with self._pool.get_connection() as conn:
        conn.execute("DELETE FROM pools WHERE name=?", (name,))
```

Then update `KVCacheManager.destroy_pool`:

```python
def destroy_pool(self, name: str) -> bool:
    if name in self.pools:
        self.pools[name].destroy()
        del self.pools[name]
        self.db.delete_pool(name)
        print(f"✅ Destroyed pool '{name}'")
        return True
```

docs/OLLAMA_SETUP.md (1)
295-298: Convert bare URLs to proper markdown links. Static analysis flagged these bare URLs. Convert them to markdown link format for consistency and proper rendering.
🔎 Proposed fix
```diff
-## Resources
-
-- **Ollama Website**: https://ollama.ai
-- **Model Library**: https://ollama.ai/library
-- **GitHub**: https://github.com/ollama/ollama
-- **Discord**: https://discord.gg/ollama
+## Resources
+
+- **Ollama Website**: <https://ollama.ai>
+- **Model Library**: <https://ollama.ai/library>
+- **GitHub**: <https://github.com/ollama/ollama>
+- **Discord**: <https://discord.gg/ollama>
```

cortex/utils/db_pool.py (1)
128-136: Consider moving import outside the finally block. The `import logging` inside the finally block adds minor overhead on every pool overflow (though this should rarely happen). Consider importing at module level for consistency with other imports.
🔎 Proposed fix
```diff
 import queue
 import sqlite3
 import threading
+import logging
 from collections.abc import Iterator
 from contextlib import contextmanager
 from pathlib import Path
```

Then update lines 134-136:
```diff
         except queue.Full:
             # Should never happen, but log if it does
-            import logging
-
             logging.error(f"Connection pool overflow for {self.db_path}")
```

tests/test_ask.py (1)
123-149: Consider cleaning up environment variables in tearDown. Tests set `CORTEX_FAKE_RESPONSE` but don't consistently clean it up. This could cause test pollution if tests run in a different order.
🔎 Proposed fix
```diff
 def setUp(self):
     """Set up test fixtures."""
     self.temp_dir = tempfile.mkdtemp()
     self.db_path = os.path.join(self.temp_dir, "test_cache.db")
     self._caches_to_close = []
+    self._original_env = os.environ.get("CORTEX_FAKE_RESPONSE")

 def tearDown(self):
     """Clean up temporary files."""
     import shutil

+    # Restore environment
+    if self._original_env is not None:
+        os.environ["CORTEX_FAKE_RESPONSE"] = self._original_env
+    elif "CORTEX_FAKE_RESPONSE" in os.environ:
+        del os.environ["CORTEX_FAKE_RESPONSE"]
+
     # Close any caches to release file handles (needed on Windows)
     for cache in self._caches_to_close:
```

cortex/llm/interpreter.py (2)
57-79: Consider logging config read failures for debugging. The silent exception swallowing on lines 75-76 is appropriate for robustness, but logging the exception at DEBUG level would help users troubleshoot configuration issues.
🔎 Optional: Add debug logging
```diff
 except Exception:
-    pass  # Ignore errors reading config
+    import logging
+    logging.getLogger(__name__).debug("Failed to read Ollama model from config", exc_info=True)
```
290-295: Replace stderr print with structured logging. Direct printing to stderr bypasses logging configuration and may clutter output unexpectedly. Using the logging module would provide better control.
🔎 Proposed fix
```diff
 except (json.JSONDecodeError, ValueError) as e:
-    # Log the problematic content for debugging
-    import sys
-
-    print(f"\nDebug: Failed to parse JSON. Raw content:\n{content[:500]}", file=sys.stderr)
+    import logging
+    logging.getLogger(__name__).debug(
+        "Failed to parse JSON. Raw content: %s", content[:500]
+    )
     raise ValueError(f"Failed to parse LLM response: {str(e)}")
```

cortex/llm_router.py (1)
527-532: Consider adding lock to reset_stats for consistency. While `_update_stats` and `get_stats` are thread-safe, `reset_stats` modifies the same shared state without acquiring the lock. This could cause inconsistent reads if called concurrently with other stat operations.
🔎 Optional: Add lock to reset_stats
```diff
 def reset_stats(self):
     """Reset all usage statistics."""
-    self.total_cost_usd = 0.0
-    self.request_count = 0
-    for provider in self.provider_stats:
-        self.provider_stats[provider] = {"requests": 0, "tokens": 0, "cost": 0.0}
+    with self._stats_lock:
+        self.total_cost_usd = 0.0
+        self.request_count = 0
+        for provider in self.provider_stats:
+            self.provider_stats[provider] = {"requests": 0, "tokens": 0, "cost": 0.0}
```

tests/test_thread_safety.py (2)
26-41: Thread-unsafe list mutation pattern may cause issues in free-threading mode. The `results` list is mutated from multiple threads without synchronization. While this works under the GIL, in Python 3.14 free-threading (the stated target), concurrent `list.append()` operations could cause data corruption. Since this test specifically targets free-threading compatibility, consider using a thread-safe collection.
🔎 Proposed fix using thread-safe queue
```diff
 def test_singleton_thread_safety_transaction_history():
     """Test that transaction history singleton is thread-safe."""
     from cortex.transaction_history import get_history
+    import queue

-    results = []
+    results = queue.Queue()

     def get_instance():
         history = get_history()
-        results.append(id(history))
+        results.put(id(history))

     # Hammer singleton initialization from 100 threads
     with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
         futures = [executor.submit(get_instance) for _ in range(1000)]
         concurrent.futures.wait(futures)

     # All threads should get the SAME instance
-    unique_instances = len(set(results))
+    unique_instances = len(set(results.queue))
     assert (
         unique_instances == 1
     ), f"Multiple singleton instances created! Found {unique_instances} different instances"
```
221-246: Accessing private `_pool` attribute is fragile. The test directly accesses `pool._pool` to exhaust the connection pool. This couples the test to implementation details and could break if the pool implementation changes. Consider adding a public method like `pool.acquire()` for testing, or document why private access is necessary here.
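One way to avoid the private access is a small public checkout helper on the pool. A minimal sketch, assuming the pool keeps idle connections in an internal `queue.Queue` named `_pool` (the `acquire` name is hypothetical, not part of the current API):

```python
import contextlib
import sqlite3
from collections.abc import Iterator


class SQLiteConnectionPool:
    """Excerpt; the full class lives in cortex/utils/db_pool.py."""

    @contextlib.contextmanager
    def acquire(self, timeout: float = 5.0) -> Iterator[sqlite3.Connection]:
        # Check a connection out of the internal queue; tests can hold several
        # of these contexts open to exhaust the pool without private access.
        conn = self._pool.get(timeout=timeout)
        try:
            yield conn
        finally:
            self._pool.put(conn)
```

Tests could then exhaust the pool by holding multiple `acquire()` contexts open instead of reaching into `pool._pool`.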
pool.acquire()for testing, or document why private access is necessary here.docs/OLLAMA_INTEGRATION_SUMMARY.md (2)
355-357: Convert bare URLs to markdown links. Per markdownlint, bare URLs should use markdown link format for consistency.
🔎 Proposed fix
```diff
-**Discord:** https://discord.gg/uCqHvxjU83
-**Issues:** https://github.com/cortexlinux/cortex/issues
-**Email:** mike@cortexlinux.com
+**Discord:** [discord.gg/uCqHvxjU83](https://discord.gg/uCqHvxjU83)
+**Issues:** [github.com/cortexlinux/cortex/issues](https://github.com/cortexlinux/cortex/issues)
+**Email:** [mike@cortexlinux.com](mailto:mike@cortexlinux.com)
```
173-200: Add language specifier to ASCII diagram code block. The architecture diagram code block should specify a language (e.g., `text` or `plaintext`) to satisfy markdownlint MD040.
🔎 Proposed fix
````diff
-```
+```text
 ┌─────────────────────────────────────────────┐
````

docs/PARALLEL_LLM_FREE_THREADING_DESIGN.md (1)
378-378: Use `Callable` type hint instead of `callable`. In the code example, `callable` is a built-in function, not a type. For type hints, use `Callable` from `typing` or `collections.abc`.

```diff
-    progress_callback: callable | None = None,
+    progress_callback: Callable[[int, int], None] | None = None,
```
callableis a built-in function, not a type. For type hints, useCallablefromtypingorcollections.abc.- progress_callback: callable | None = None, + progress_callback: Callable[[int, int], None] | None = None,cortex/network_config.py (3)
185-192: Specify encoding when opening files. File operations should explicitly specify encoding for consistency and to avoid platform-dependent behavior.
🔎 Proposed fix
```diff
 try:
-    with open("/etc/environment") as f:
+    with open("/etc/environment", encoding="utf-8") as f:
         for line in f:
             if "http_proxy=" in line.lower():
                 proxy_url = line.split("=")[1].strip().strip("\"'")
                 return {"http": proxy_url, "https": proxy_url}
 except (FileNotFoundError, PermissionError) as e:
     logger.debug(f"Cannot read /etc/environment: {e}")
```
200-212: Specify encoding when opening apt config files. Same issue as above: explicitly specify encoding.
🔎 Proposed fix
```diff
 for conf_path in apt_conf_paths:
     try:
-        with open(conf_path) as f:
+        with open(conf_path, encoding="utf-8") as f:
             content = f.read()
```
277-289: Consider making connectivity test URLs configurable. The hard-coded URLs (8.8.8.8, 1.1.1.1, api.github.com) may be blocked in restricted networks. Consider making these configurable or documenting the fallback behavior.
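A hedged sketch of one way to do that, with defaults mirroring the current endpoints; the `CORTEX_CONNECTIVITY_HOSTS` variable name is invented for illustration:

```python
import os

# Defaults mirror the currently hard-coded endpoints.
DEFAULT_CONNECTIVITY_HOSTS = ["8.8.8.8", "1.1.1.1", "api.github.com"]


def get_connectivity_hosts() -> list[str]:
    """Return connectivity-test hosts, honoring an optional env override."""
    override = os.environ.get("CORTEX_CONNECTIVITY_HOSTS")
    if override:
        return [h.strip() for h in override.split(",") if h.strip()]
    return DEFAULT_CONNECTIVITY_HOSTS
```

Restricted networks could then point the check at an internal mirror without code changes.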
cortex/env_manager.py (1)
529-534: Overly broad exception handling may hide specific errors. Catching generic `Exception` makes it difficult to diagnose issues like corrupted key files vs. invalid encrypted data. Consider catching specific exceptions from the cryptography library.
🔎 Proposed fix
```diff
 try:
     fernet = self._get_fernet()
     decrypted = fernet.decrypt(encrypted_value.encode("utf-8"))
     return decrypted.decode("utf-8")
-except Exception as e:
+except (TypeError, ValueError) as e:
+    raise ValueError(f"Decryption failed: {e}") from e
+except Exception as e:
+    # Catch cryptography-specific exceptions
     raise ValueError(f"Decryption failed: {e}") from e
```

Note: The cryptography library raises `cryptography.fernet.InvalidToken` for invalid/corrupted data, which could be caught specifically for better error messages.
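A minimal sketch of that more specific handling (assuming the `cryptography` package, whose Fernet implementation raises `InvalidToken` on a wrong key or corrupted ciphertext):

```python
from cryptography.fernet import Fernet, InvalidToken


def decrypt_value(fernet: Fernet, encrypted_value: str) -> str:
    """Decrypt a Fernet token, distinguishing corruption from other errors."""
    try:
        return fernet.decrypt(encrypted_value.encode("utf-8")).decode("utf-8")
    except InvalidToken as e:
        # Wrong key or corrupted ciphertext - the common failure mode.
        raise ValueError("Decryption failed: invalid or corrupted token") from e
    except (TypeError, AttributeError) as e:
        raise ValueError(f"Decryption failed: {e}") from e
```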
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (85)
.env.example, .github/CODEOWNERS, .github/ISSUE_TEMPLATE/feature_request.md, .github/pull_request_template.md, .github/workflows/automation.yml, OLLAMA_QUICKSTART.md, README.md, TESTING.md, cortex/ask.py, cortex/cli.py, cortex/config_manager.py, cortex/confirmation.py, cortex/context_memory.py, cortex/coordinator.py, cortex/demo.py, cortex/dependency_check.py, cortex/dependency_resolver.py, cortex/doctor.py, cortex/env_loader.py, cortex/env_manager.py, cortex/graceful_degradation.py, cortex/hardware_detection.py, cortex/installation_history.py, cortex/installation_verifier.py, cortex/kernel_features/accelerator_limits.py, cortex/kernel_features/kv_cache_manager.py, cortex/kernel_features/model_lifecycle.py, cortex/llm/interpreter.py, cortex/llm_router.py, cortex/logging_system.py, cortex/network_config.py, cortex/notification_manager.py, cortex/planner.py, cortex/progress_indicators.py, cortex/progress_tracker.py, cortex/roles/default.yaml, cortex/roles/devops.yaml, cortex/roles/loader.py, cortex/roles/security.yaml, cortex/semantic_cache.py, cortex/stack_manager.py, cortex/transaction_history.py, cortex/user_preferences.py, cortex/utils/db_pool.py, docs/COMMANDS.md, docs/ENV_MANAGEMENT.md, docs/LLM_INTEGRATION.md, docs/NETWORK_CONFIG.md, docs/NETWORK_CONFIG_TESTS.md, docs/OLLAMA_FIX.md, docs/OLLAMA_INTEGRATION_SUMMARY.md, docs/OLLAMA_SETUP.md, docs/PARALLEL_LLM_FREE_THREADING_DESIGN.md, docs/POST_MVP_AUDIT.md, docs/PYTHON_314_ANALYSIS_SUMMARY.md, docs/PYTHON_314_COMPLETE_IMPLEMENTATION.md, docs/PYTHON_314_DEVELOPER_CHECKLIST.md, docs/PYTHON_314_THREAD_SAFETY_AUDIT.md, docs/TROUBLESHOOTING.md, docs/USER_PREFERENCES_IMPLEMENTATION.md, examples/env_demo.py, examples/parallel_llm_demo.py, examples/progress_demo.py, examples/sample-config.yaml, examples/standalone_demo.py, requirements.txt, scripts/setup_ollama.py, src/intent/llm_agent.py, src/intent/planner.py, test/integration/docker_utils.py, test/integration/test_end_to_end.py, test_parallel_llm.py, tests/test_ask.py, tests/test_cli.py, tests/test_coordinator.py, tests/test_doctor.py, tests/test_env_manager.py, tests/test_interpreter.py, tests/test_llm_router.py, tests/test_logging_system.py, tests/test_network_config.py, tests/test_ollama_integration.py, tests/test_thread_safety.py, tests/test_user_preferences.py, tests/unit/test_progress_tracker.py
💤 Files with no reviewable changes (10)
- TESTING.md
- tests/test_coordinator.py
- tests/test_interpreter.py
- cortex/user_preferences.py
- docs/POST_MVP_AUDIT.md
- tests/test_cli.py
- docs/USER_PREFERENCES_IMPLEMENTATION.md
- src/intent/llm_agent.py
- tests/test_user_preferences.py
- src/intent/planner.py
✅ Files skipped from review due to trivial changes (9)
- docs/COMMANDS.md
- OLLAMA_QUICKSTART.md
- .env.example
- examples/standalone_demo.py
- examples/progress_demo.py
- cortex/logging_system.py
- docs/ENV_MANAGEMENT.md
- cortex/kernel_features/model_lifecycle.py
- tests/test_logging_system.py
🚧 Files skipped from review as they are similar to previous changes (6)
- cortex/roles/devops.yaml
- cortex/roles/security.yaml
- cortex/roles/default.yaml
- examples/parallel_llm_demo.py
- cortex/roles/loader.py
- tests/test_llm_router.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/installation_verifier.py, test/integration/docker_utils.py, cortex/planner.py, tests/unit/test_progress_tracker.py, tests/test_env_manager.py, cortex/installation_history.py, cortex/dependency_check.py, tests/test_ollama_integration.py, test_parallel_llm.py, cortex/semantic_cache.py, cortex/confirmation.py, cortex/context_memory.py, cortex/config_manager.py, cortex/kernel_features/accelerator_limits.py, cortex/dependency_resolver.py, examples/env_demo.py, cortex/env_loader.py, cortex/coordinator.py, cortex/demo.py, cortex/notification_manager.py, tests/test_thread_safety.py, tests/test_network_config.py, cortex/utils/db_pool.py, test/integration/test_end_to_end.py, cortex/kernel_features/kv_cache_manager.py, cortex/graceful_degradation.py, scripts/setup_ollama.py, cortex/progress_tracker.py, cortex/network_config.py, cortex/stack_manager.py, tests/test_ask.py, cortex/ask.py, cortex/doctor.py, tests/test_doctor.py, cortex/env_manager.py, cortex/hardware_detection.py, cortex/transaction_history.py, cortex/llm_router.py, cortex/cli.py, cortex/progress_indicators.py, cortex/llm/interpreter.py
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
cortex/installation_verifier.py, cortex/installation_history.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/unit/test_progress_tracker.py, tests/test_env_manager.py, tests/test_ollama_integration.py, tests/test_thread_safety.py, tests/test_network_config.py, tests/test_ask.py, tests/test_doctor.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Pull request title format: [component] Description
Applied to files:
.github/pull_request_template.md
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/installation_history.py
🧬 Code graph analysis (26)
tests/unit/test_progress_tracker.py (1)
cortex/progress_tracker.py (1)
add_stage(162-175)
tests/test_env_manager.py (1)
cortex/env_manager.py (36)
EncryptionManager(454-542), EnvironmentManager(665-1105), EnvironmentStorage(545-662), EnvironmentTemplate(102-125), EnvironmentValidator(372-451), EnvironmentVariable(58-86), TemplateVariable(90-98), ValidationResult(129-134), VariableType(46-54), get_env_manager(1108-1110), to_dict(67-75), to_dict(109-115), from_dict(78-86), from_dict(118-125), validate(392-451), encrypt(502-514), decrypt(516-534), is_key_available(536-542), save(597-631), load(571-595), delete_app(633-648), list_apps(650-662), list_apps(828-835), set_variable(689-738), get_variable(740-767), get_variable_info(769-781), list_variables(783-794), delete_variable(796-814), clear_app(816-826), export_env(837-870), import_env(872-932), load_to_environ(934-956), list_templates(960-967), get_template(969-979), apply_template(981-1061), validate_app(1063-1105)
cortex/installation_history.py (1)
cortex/utils/db_pool.py (3)
SQLiteConnectionPool(19-172), get_connection_pool(181-222), get_connection(99-136)
tests/test_ollama_integration.py (4)
cortex/llm_router.py (5)
LLMProvider(45-50), TaskType(32-42), complete(261-327), route_task(206-259), get_stats(497-525)
scripts/setup_ollama.py (2)
check_ollama_installed(75-77), check_ollama_running(80-91)
cortex/semantic_cache.py (2)
stats(365-377), total(32-34)
cortex/sandbox/sandbox_executor.py (1)
success(74-76)
test_parallel_llm.py (1)
cortex/llm_router.py (4)
query_multiple_packages(880-927), diagnose_errors_parallel(930-961), check_hardware_configs_parallel(987-1035), acomplete(543-609)
cortex/semantic_cache.py (1)
cortex/utils/db_pool.py (3)
SQLiteConnectionPool(19-172), get_connection_pool(181-222), get_connection(99-136)
cortex/confirmation.py (1)
tests/test_progress_indicators.py (1)
steps(315-320)
cortex/context_memory.py (1)
cortex/utils/db_pool.py (3)
SQLiteConnectionPool(19-172), get_connection_pool(181-222), get_connection(99-136)
cortex/kernel_features/accelerator_limits.py (1)
cortex/utils/db_pool.py (2)
get_connection_pool(181-222), get_connection(99-136)
cortex/dependency_resolver.py (1)
cortex/logging_system.py (1)
info(200-202)
examples/env_demo.py (1)
cortex/env_manager.py (18)
EncryptionManager(454-542), EnvironmentManager(665-1105), EnvironmentStorage(545-662), set_variable(689-738), get_variable(740-767), list_variables(783-794), delete_variable(796-814), encrypt(502-514), get_variable_info(769-781), decrypt(516-534), list_templates(960-967), get_template(969-979), apply_template(981-1061), export_env(837-870), import_env(872-932), list_apps(650-662), list_apps(828-835), load_to_environ(934-956)
cortex/coordinator.py (1)
cortex/logging_system.py (1)
_log(216-219)
cortex/demo.py (1)
cortex/hardware_detection.py (1)
total_gb(88-89)
tests/test_network_config.py (1)
cortex/network_config.py (21)
NetworkConfig(26-538), add_proxy_auth(586-613), check_proxy_auth(544-570), prompt_proxy_credentials(573-583), _detect_env_proxy(129-136), _detect_gnome_proxy(138-180), _detect_system_proxy(182-214), detect_proxy(97-127), _parse_proxy_url(216-218), detect_vpn(222-253), check_connectivity(257-289), detect_network_quality(291-307), configure_apt_proxy(311-347), configure_pip_proxy(349-362), get_httpx_proxy_config(364-384), cleanup_apt_proxy(460-477), cache_package_list(388-408), get_cached_packages(410-443), enable_offline_fallback(445-458), auto_configure(479-516), print_summary(520-538)
cortex/utils/db_pool.py (2)
cortex/graceful_degradation.py (2)
put(143-160), get(109-141)
cortex/kernel_features/accelerator_limits.py (1)
get(69-72)
test/integration/test_end_to_end.py (2)
cortex/cli.py (1)
env(707-753)
test/integration/docker_utils.py (1)
run_in_docker(52-108)
cortex/kernel_features/kv_cache_manager.py (1)
cortex/utils/db_pool.py (2)
get_connection_pool(181-222), get_connection(99-136)
scripts/setup_ollama.py (1)
tests/test_ollama_integration.py (3)
check_ollama_installed(33-43), check_ollama_running(46-69), main(175-228)
cortex/network_config.py (1)
cortex/logging_system.py (2)
debug(196-198), warning(204-206)
tests/test_ask.py (2)
cortex/ask.py (4)
ask(282-347), AskHandler(135-347), SystemInfoGatherer(16-132), gather_context(125-132)
cortex/semantic_cache.py (3)
SemanticCache(44-377), stats(365-377), put_commands(292-343)
cortex/ask.py (2)
cortex/semantic_cache.py (2)
get_commands(203-290), put_commands(292-343)
cortex/cli.py (1)
ask(284-314)
cortex/doctor.py (2)
cortex/cli.py (1)
status(687-695)
cortex/branding.py (1)
cx_header(82-88)
tests/test_doctor.py (1)
cortex/doctor.py (1)
_check_security_tools(339-349)
cortex/env_manager.py (1)
tests/test_env_manager.py (1)
storage(51-53)
cortex/hardware_detection.py (1)
cortex/kernel_features/hardware_detect.py (2)
to_dict(50-53), to_dict(70-75)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🪛 LanguageTool
docs/PYTHON_314_ANALYSIS_SUMMARY.md
[style] ~3-~3: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...lysis - Summary Date: December 22, 2025 Analysis Scope: Full cortex/ dire...
(MISSING_COMMA_AFTER_YEAR)
[style] ~554-~554: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...Version**: 1.0 Date: December 22, 2025 Next Review: After Phase 1 comple...
(MISSING_COMMA_AFTER_YEAR)
docs/PYTHON_314_THREAD_SAFETY_AUDIT.md
[style] ~987-~987: Consider using a different verb for a more formal wording.
Context: ... -progress_tracker.py`: Review and fix similar issues - [ ] 2.3: Document asyn...
(FIX_RESOLVE)
[style] ~1140-~1140: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...*: 1.0 Last Updated: December 22, 2025 Author: GitHub Copilot (Claude So...
(MISSING_COMMA_AFTER_YEAR)
docs/OLLAMA_INTEGRATION_SUMMARY.md
[style] ~7-~7: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...thout API keys. Date: December 26, 2025 Status: ✅ Complete **Related Is...
(MISSING_COMMA_AFTER_YEAR)
[style] ~27-~27: This phrase is redundant (‘I’ stands for ‘interface’). Use simply “API”.
Context: ... Key Features: - OpenAI-compatible API interface - Automatic GPU detection (when availab...
(ACRONYM_TAUTOLOGY)
docs/PARALLEL_LLM_FREE_THREADING_DESIGN.md
[style] ~1056-~1056: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...*: 1.0 Last Updated: December 22, 2025 Author: GitHub Copilot (Claude So...
(MISSING_COMMA_AFTER_YEAR)
docs/PYTHON_314_DEVELOPER_CHECKLIST.md
[style] ~477-~477: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...te --- Last Updated: December 22, 2025 Status: ✅ Ready for Use
(MISSING_COMMA_AFTER_YEAR)
docs/PYTHON_314_COMPLETE_IMPLEMENTATION.md
[style] ~3-~3: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...tion - Complete Date: December 22, 2025 Status: ✅ Production Ready **Ba...
(MISSING_COMMA_AFTER_YEAR)
[uncategorized] ~4-~4: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...te:** December 22, 2025 Status: ✅ Production Ready Backward Compatible: Yes (Python ...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[uncategorized] ~290-~290: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...nally 2. Existing code works - 100% backward compatible APIs 3. *Connection pooling automatic...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[style] ~297-~297: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...ally 2. Database WAL mode - Enabled automatically on first connection 3. **Python version...
(ADVERB_REPETITION_PREMIUM)
🪛 markdownlint-cli2 (0.18.1)
docs/PYTHON_314_ANALYSIS_SUMMARY.md
33-33: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
39-39: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
92-92: Unordered list indentation
Expected: 0; Actual: 3
(MD007, ul-indent)
93-93: Unordered list indentation
Expected: 0; Actual: 3
(MD007, ul-indent)
docs/PYTHON_314_THREAD_SAFETY_AUDIT.md
661-661: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
680-680: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
764-764: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
1006-1006: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/OLLAMA_INTEGRATION_SUMMARY.md
173-173: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
355-355: Bare URL used
(MD034, no-bare-urls)
356-356: Bare URL used
(MD034, no-bare-urls)
357-357: Bare URL used
(MD034, no-bare-urls)
docs/PARALLEL_LLM_FREE_THREADING_DESIGN.md
33-33: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
82-82: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
116-116: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
178-178: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
710-710: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
841-841: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/NETWORK_CONFIG.md
25-25: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
281-281: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
321-321: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/NETWORK_CONFIG_TESTS.md
30-30: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
docs/OLLAMA_SETUP.md
295-295: Bare URL used
(MD034, no-bare-urls)
296-296: Bare URL used
(MD034, no-bare-urls)
297-297: Bare URL used
(MD034, no-bare-urls)
298-298: Bare URL used
(MD034, no-bare-urls)
docs/PYTHON_314_COMPLETE_IMPLEMENTATION.md
417-417: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
424-424: Bare URL used
(MD034, no-bare-urls)
425-425: Bare URL used
(MD034, no-bare-urls)
426-426: Bare URL used
(MD034, no-bare-urls)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cortex/llm/interpreter.py (2)
124-167: Critical: Merge conflict in `_get_system_prompt` method. The method has conflicting versions. The version with the `simplified` parameter is more complete, as it supports both full and simplified prompts for local models.
🔎 Proposed fix
```diff
-<<<<<<< HEAD
-    def _get_system_prompt(self) -> str:
-        base_prompt = """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.
-=======
     def _get_system_prompt(self, simplified: bool = False) -> str:
         """Get system prompt for command interpretation.

         Args:
             simplified: If True, return a shorter prompt optimized for local models
         """
         if simplified:
             return """You must respond with ONLY a JSON object. No explanations, no markdown, no code blocks.

 Format: {"commands": ["command1", "command2"]}

 Example input: install nginx
 Example output: {"commands": ["sudo apt update", "sudo apt install -y nginx"]}

 Rules:
 - Use apt for Ubuntu packages
 - Add sudo for system commands
 - Return ONLY the JSON object"""

-        return """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.
->>>>>>> 344b109 (Add Ollama integration with setup script, LLM router support, and comprehensive documentation)
+        base_prompt = """You are a Linux system command expert. Convert natural language requests into safe, validated bash commands.

 Rules:
 1. Return ONLY a JSON array of commands
-2. Each command must be a safe, executable bash command
-...
+...remaining rules...

         if getattr(self, "system_prompt", ""):
             return f"{self.system_prompt}\n\n{base_prompt}"
         return base_prompt
```
258-271: Critical: Duplicate `_call_fake` method definition. The `_call_fake` method is defined twice (lines 231-244 and 258-271). The second definition shadows the first. Remove one of the duplicate definitions.
🔎 Proposed fix: Remove the duplicate
```diff
-    def _call_fake(self, user_input: str) -> list[str]:
-        """Return predefined fake commands from environment for testing."""
-        fake_commands_env = os.environ.get("CORTEX_FAKE_COMMANDS")
-        if not fake_commands_env:
-            raise RuntimeError("CORTEX_FAKE_COMMANDS environment variable not set")
-
-        try:
-            data = json.loads(fake_commands_env)
-            commands = data.get("commands", [])
-            if not isinstance(commands, list):
-                raise ValueError("Commands must be a list in CORTEX_FAKE_COMMANDS")
-            return [cmd for cmd in commands if cmd and isinstance(cmd, str)]
-        except json.JSONDecodeError as e:
-            raise RuntimeError(f"Failed to parse CORTEX_FAKE_COMMANDS: {str(e)}")
```

cortex/llm_router.py (1)
527-532: Thread-safety gap: `reset_stats` does not use the stats lock. `_update_stats` and `get_stats` use `_stats_lock` for thread-safety, but `reset_stats` modifies the same shared state without acquiring the lock. This could cause data races if `reset_stats` is called concurrently with other stats operations.
🔎 Proposed fix
```diff
 def reset_stats(self):
     """Reset all usage statistics."""
-    self.total_cost_usd = 0.0
-    self.request_count = 0
-    for provider in self.provider_stats:
-        self.provider_stats[provider] = {"requests": 0, "tokens": 0, "cost": 0.0}
+    with self._stats_lock:
+        self.total_cost_usd = 0.0
+        self.request_count = 0
+        for provider in self.provider_stats:
+            self.provider_stats[provider] = {"requests": 0, "tokens": 0, "cost": 0.0}
```
♻️ Duplicate comments (1)
cortex/llm_router.py (1)
958-971: Critical: Unresolved merge conflict markers still present. This issue was flagged in a previous review. The merge conflict markers at lines 958-971 must be resolved before merging.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
246-256: Consider moving `re` import to module level. The `re` module is imported inside `_repair_json` and `_parse_commands`. Moving it to the top of the file aligns with PEP 8 conventions and avoids repeated import overhead.

```diff
 import json
 import os
+import re
 import sqlite3
 from enum import Enum
```
319-323: Debug output could leak sensitive content in production. The `print(..., file=sys.stderr)` statement outputs raw LLM response content on parse failures. Consider using the `logging` module with `DEBUG` level to control visibility, and truncate content to avoid log bloat.
🔎 Proposed fix
```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 ...
 except (json.JSONDecodeError, ValueError) as e:
-    # Log the problematic content for debugging
-    import sys
-
-    print(f"\nDebug: Failed to parse JSON. Raw content:\n{content[:500]}", file=sys.stderr)
+    logger.debug("Failed to parse JSON. Raw content: %s", content[:200])
     raise ValueError(f"Failed to parse LLM response: {str(e)}")
```
364-369: Missing docstring for public method `parse_with_context`. Per coding guidelines, docstrings are required for all public APIs. Add a docstring describing the method's purpose, parameters, and return value.
🔎 Proposed fix
```diff
 def parse_with_context(
     self,
     user_input: str,
     system_info: dict[str, Any] | None = None,
     validate: bool = True,
 ) -> list[str]:
+    """Parse user input with optional system context.
+
+    Args:
+        user_input: Natural language command request.
+        system_info: Optional system context dictionary to enrich the request.
+        validate: If True, validate commands against dangerous patterns.
+
+    Returns:
+        List of executable shell commands.
+    """
     context = ""
```

As per coding guidelines, docstrings are required for all public APIs.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
cortex/llm/interpreter.py, cortex/llm_router.py, examples/parallel_llm_demo.py, tests/integration/test_end_to_end.py
🚧 Files skipped from review as they are similar to previous changes (1)
- examples/parallel_llm_demo.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/llm/interpreter.py, tests/integration/test_end_to_end.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/integration/test_end_to_end.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (2)
cortex/roles/loader.py (1)
load_role(17-32)
cortex/ask.py (1)
_initialize_client(180-201)
🔇 Additional comments (4)
cortex/llm_router.py (4)
171-191: LGTM: Ollama client initialization with graceful fallback. The initialization correctly handles failures by logging a warning and leaving the client as `None`, which triggers the fallback logic in `route_task`. The use of an OpenAI-compatible API for Ollama is a clean approach.
428-475: LGTM: `_complete_ollama` implementation is solid. Good defensive coding with `getattr` for token counts (lines 455-456), and the error message provides helpful troubleshooting guidance about the Ollama service.
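For reference, the defensive pattern amounts to something like this (a sketch, assuming an OpenAI-style response whose `usage` object may be absent or incomplete on local backends):

```python
def extract_token_counts(response) -> tuple[int, int]:
    """Read token usage defensively; local backends may omit these fields."""
    usage = getattr(response, "usage", None)
    prompt_tokens = getattr(usage, "prompt_tokens", 0) if usage else 0
    completion_tokens = getattr(usage, "completion_tokens", 0) if usage else 0
    return prompt_tokens, completion_tokens
```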
714-763: LGTM: Async Ollama implementation is consistent with sync version. The async implementation properly checks for client initialization and mirrors the synchronous version's behavior.
486-525: LGTM: Thread-safe stats tracking implementation. The use of `threading.Lock` with context managers in `_update_stats` and `get_stats` ensures thread-safe access to shared statistics. This is important for the Python 3.14 free-threading compatibility mentioned in the PR summary.
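The pattern being praised reduces to the following (a minimal sketch, not the router's exact code):

```python
import threading


class StatsTracker:
    def __init__(self) -> None:
        self._stats_lock = threading.Lock()
        self.request_count = 0
        self.total_cost_usd = 0.0

    def _update_stats(self, cost: float) -> None:
        # The lock makes the read-modify-write update atomic across threads.
        with self._stats_lock:
            self.request_count += 1
            self.total_cost_usd += cost

    def get_stats(self) -> dict[str, float]:
        with self._stats_lock:
            return {"requests": self.request_count, "cost": self.total_cost_usd}
```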
0e1a3c5 to 6091d6f (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cortex/llm/interpreter.py (3)
42-52: Inconsistent Ollama model initialization. Line 50 hardcodes the Ollama model default to "llama3.2", but a dedicated `_get_ollama_model()` method exists at lines 56-78 that reads from environment or config. For consistency, the Ollama branch should call this method.
🔎 Proposed refactor
```diff
 elif self.provider == APIProvider.OLLAMA:
-    self.model = "llama3.2"  # Default Ollama model
+    # Try to load model from config or environment
+    self.model = self._get_ollama_model()
 elif self.provider == APIProvider.FAKE:
     self.model = "fake"  # Fake provider doesn't use a real model
```
95-108: Critical: Duplicate and malformed provider initialization logic. The initialization logic for OLLAMA and FAKE providers is duplicated and malformed:
- Lines 95-98: Initialize Ollama with `client = None` and `ollama_url`
- Lines 99-101: Initialize FAKE with `client = None`
client = None- Lines 102-105: Unconditionally set up OpenAI client (appears to be for Ollama)
- Lines 106-108: Duplicate FAKE initialization
The OpenAI client setup at lines 102-105 has no condition and will execute regardless of provider, overwriting the client. This looks like an unresolved merge conflict.
🔎 Proposed fix: Remove duplicates and use OpenAI-compatible approach for Ollama
```diff
 elif self.provider == APIProvider.OLLAMA:
-    # Ollama uses local HTTP API, no special client needed
-    self.ollama_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
-    self.client = None  # Will use requests
-elif self.provider == APIProvider.FAKE:
-    # Fake provider uses predefined commands from environment
-    self.client = None  # No client needed for fake provider
-    ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
-    self.client = OpenAI(
-        api_key="ollama", base_url=f"{ollama_base_url}/v1"  # Dummy key, not used
-    )
+    # Ollama uses OpenAI-compatible API
+    try:
+        from openai import OpenAI
+
+        ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+        self.client = OpenAI(
+            api_key="ollama",
+            base_url=f"{ollama_base_url}/v1"
+        )
+    except ImportError:
+        raise ImportError("OpenAI package not installed. Run: pip install openai")
 elif self.provider == APIProvider.FAKE:
     # Fake provider uses predefined commands from environment
     self.client = None  # No client needed for fake provider
```
212-252: Critical: Duplicate `_call_fake` method definition. The `_call_fake` method is defined twice (lines 212-225 and lines 239-252) with identical implementations. Python will only retain the second definition, but this duplication indicates an incomplete merge conflict resolution.
🔎 Proposed fix: Remove the duplicate definition
Remove one of the duplicate definitions (keep lines 212-225 and remove lines 239-252):
```diff
+    def _repair_json(self, content: str) -> str:
+        """Attempt to repair common JSON formatting issues."""
+        # Remove extra whitespace between braces and brackets
+        import re
+
+        content = re.sub(r"\{\s+", "{", content)
+        content = re.sub(r"\s+\}", "}", content)
+        content = re.sub(r"\[\s+", "[", content)
+        content = re.sub(r"\s+\]", "]", content)
+        content = re.sub(r",\s*([}\]])", r"\1", content)  # Remove trailing commas
+        return content.strip()
+
-    def _call_fake(self, user_input: str) -> list[str]:
-        """Return predefined fake commands from environment for testing."""
-        fake_commands_env = os.environ.get("CORTEX_FAKE_COMMANDS")
-        if not fake_commands_env:
-            raise RuntimeError("CORTEX_FAKE_COMMANDS environment variable not set")
-
-        try:
-            data = json.loads(fake_commands_env)
-            commands = data.get("commands", [])
-            if not isinstance(commands, list):
-                raise ValueError("Commands must be a list in CORTEX_FAKE_COMMANDS")
-            return [cmd for cmd in commands if cmd and isinstance(cmd, str)]
-        except json.JSONDecodeError as e:
-            raise RuntimeError(f"Failed to parse CORTEX_FAKE_COMMANDS: {str(e)}")
-
     def _parse_commands(self, content: str) -> list[str]:
```
♻️ Duplicate comments (1)
cortex/llm_router.py (1)
572-578: Same suggestion: Use explicit `elif` for async routing. Same as the synchronous version at lines 290-296, prefer explicit `elif` over an `else` comment for better error catching.
🔎 Proposed refactor
```diff
 if routing.provider == LLMProvider.CLAUDE:
     response = await self._acomplete_claude(messages, temperature, max_tokens, tools)
 elif routing.provider == LLMProvider.KIMI_K2:
     response = await self._acomplete_kimi(messages, temperature, max_tokens, tools)
-else:  # OLLAMA
+elif routing.provider == LLMProvider.OLLAMA:
     response = await self._acomplete_ollama(messages, temperature, max_tokens, tools)
+else:
+    raise ValueError(f"Unsupported provider: {routing.provider}")
```
🧹 Nitpick comments (1)
cortex/llm_router.py (1)
290-296: Use explicit `elif` condition instead of `else` comment. The `else:  # OLLAMA` pattern assumes only three providers exist. For better maintainability and to catch unhandled providers, use an explicit condition:
🔎 Proposed refactor
```diff
 if routing.provider == LLMProvider.CLAUDE:
     response = self._complete_claude(messages, temperature, max_tokens, tools)
 elif routing.provider == LLMProvider.KIMI_K2:
     response = self._complete_kimi(messages, temperature, max_tokens, tools)
-else:  # OLLAMA
+elif routing.provider == LLMProvider.OLLAMA:
     response = self._complete_ollama(messages, temperature, max_tokens, tools)
+else:
+    raise ValueError(f"Unsupported provider: {routing.provider}")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
cortex/llm/interpreter.py, cortex/llm_router.py, examples/parallel_llm_demo.py, tests/integration/test_end_to_end.py
🚧 Files skipped from review as they are similar to previous changes (2)
- examples/parallel_llm_demo.py
- tests/integration/test_end_to_end.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/llm/interpreter.py
🧬 Code graph analysis (2)
cortex/llm_router.py (2)
tests/test_ollama_integration.py (1)
main(175-228)
scripts/setup_ollama.py (1)
main(385-503)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🔇 Additional comments (2)
cortex/llm_router.py (1)
196-204: LGTM: Thread-safety for stats tracking. The addition of `_stats_lock` for thread-safe stats updates is good defensive programming, especially relevant for the Python 3.14 free-threading support mentioned in the PR summary.
cortex/llm/interpreter.py (1)
25-40: LGTM: Role support integrated cleanly. The role initialization logic correctly loads role definitions from YAML and extracts the system prompt. This aligns with the PR objectives for custom AI roles/personas.
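As a rough illustration of the mechanism (a sketch only; the real loader is `cortex/roles/loader.py`, and the directory layout here is assumed):

```python
from pathlib import Path

import yaml


def load_role(name: str = "default") -> dict:
    """Load a role definition (e.g., default/devops/security) from YAML."""
    role_path = Path(__file__).parent / f"{name}.yaml"
    if not role_path.exists():
        role_path = role_path.with_name("default.yaml")  # fall back safely
    with open(role_path, encoding="utf-8") as f:
        return yaml.safe_load(f) or {}
```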
b74035e to 05bcda6 (Compare)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
fbfd450 to 8eade25 (Compare)
8eade25 to a774500 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
29-40: Critical: Role feature is incomplete - `self.system_prompt` never initialized. The `role` parameter is accepted but never used. The `load_role` import at line 33 is never called. Consequently, `self.system_prompt` is never set, but it is referenced at line 156, which will cause `AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'`. Additionally, the docstring at lines 35-40 is placed after the import statement, so Python won't recognize it as a docstring.
🔎 Proposed fix
```diff
 def __init__(
     self,
     api_key: str,
     provider: str = "openai",
     role: str = "default",
     model: str | None = None,
     cache: Optional["SemanticCache"] = None,
 ):
+    """Initialize the CommandInterpreter.
+
+    Args:
+        api_key: API key for the LLM provider
+        provider: Provider name ("openai", "claude", or "ollama")
+        role: Role name for customizing system prompt behavior
+        model: Optional model name override
+        cache: Optional SemanticCache instance for response caching
+    """
     from cortex.roles.loader import load_role

-    """Args:
-        api_key: API key for the LLM provider
-        provider: Provider name ("openai", "claude", or "ollama")
-        model: Optional model name override
-        cache: Optional SemanticCache instance for response caching
-    """
+    # Load role configuration and extract system prompt
+    role_config = load_role(role)
+    self.system_prompt = role_config.get("system_prompt", "")
+
     self.api_key = api_key
     self.provider = APIProvider(provider.lower())
```
♻️ Duplicate comments (3)
tests/integration/test_end_to_end.py (1)
113-115: Pre-run result is still discarded without verification. The result from the pre-run step on line 114 is immediately overwritten on line 119 without being checked. This issue was previously flagged but appears to still be present. If `test/run_all_tests.py` is expected to succeed, add an assertion before line 116. If it's only for side effects (setup), rename the variable to `_` or add a clarifying comment.
🔎 Proposed fix if verification is needed
```diff
 result = self._run("python test/run_all_tests.py", env=env)
+self.assertTrue(result.succeeded(), msg=f"Pre-run failed: {result.stderr}")

 # Use PIP_BOOTSTRAP_DEV to install pytest and other dev dependencies
```

Alternatively, if the pre-run is only for side effects:
```diff
-result = self._run("python test/run_all_tests.py", env=env)
+_ = self._run("python test/run_all_tests.py", env=env)  # Side-effect only: prep container
```

cortex/planner.py (2)
5-8: Expand docstring for public API. Per coding guidelines, comprehensive docstrings are required for public APIs. Add parameter descriptions, return value, and potential exceptions.
1-1: Critical: Invalid import from `typing` module. The import `from typing import Any, dict, list` is incorrect. In Python 3.9+, `dict` and `list` are built-in types and cannot be imported from `typing`. This will cause an `ImportError` at runtime.
🔎 Proposed fix
```diff
-from typing import Any, dict, list
+from typing import Any
```
🧹 Nitpick comments (1)
cortex/llm/interpreter.py (1)
253-258: Markdown code block stripping may include language identifier. When the content is wrapped in a markdown fence such as `json`, splitting by the fence and taking `parts[1]` will include the language identifier (e.g., `json\n{"commands":...}`). This should still work since the regex at line 261 will extract the JSON object, but consider stripping the first line if it's a language identifier for cleaner extraction.
🔎 Proposed improvement
if "```" in content: parts = content.split("```") if len(parts) >= 2: content = parts[1] + # Remove language identifier if present (e.g., "json\n") + if content.startswith(("json", "JSON")): + content = content.split("\n", 1)[-1]
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
cortex/cli.py, cortex/confirmation.py, cortex/llm/interpreter.py, cortex/planner.py, cortex/roles/default.yaml, cortex/roles/devops.yaml, cortex/roles/loader.py, cortex/roles/security.yaml, test_parallel_llm.py, tests/integration/test_end_to_end.py
🚧 Files skipped from review as they are similar to previous changes (7)
- cortex/roles/default.yaml
- cortex/roles/security.yaml
- cortex/roles/devops.yaml
- cortex/roles/loader.py
- cortex/confirmation.py
- test_parallel_llm.py
- cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/integration/test_end_to_end.py, cortex/planner.py, cortex/llm/interpreter.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/integration/test_end_to_end.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: All tests must pass before merging pull requests
Applied to files:
tests/integration/test_end_to_end.py
🧬 Code graph analysis (3)
tests/integration/test_end_to_end.py (1)
cortex/cli.py (1)
env(711-757)
cortex/planner.py (1)
cortex/llm/interpreter.py (2)
CommandInterpreter(18-351), parse(306-338)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🔇 Additional comments (3)
cortex/llm/interpreter.py (3)
131-159: LGTM on prompt construction logic. The simplified vs. full prompt distinction and the role-prefix logic are well-structured. The `base_prompt` construction is clear and follows good practices. However, this depends on the fix above to initialize `self.system_prompt`.
193-222: LGTM on Ollama integration. The enhanced input with explicit JSON format instructions helps local models produce structured output. Lower temperature and reduced tokens are sensible optimizations for focused responses. The error message is informative.
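The idea is roughly the following (a hedged sketch; the wrapper name and exact wording are illustrative, not the code under review):

```python
def build_local_model_input(user_input: str) -> str:
    """Wrap a request with explicit JSON-format instructions for small local models."""
    return (
        f"{user_input}\n\n"
        'Respond with ONLY a JSON object like {"commands": ["cmd1", "cmd2"]}. '
        "No explanations, no markdown."
    )
```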
340-351: LGTM on signature formatting.The signature reformatting improves readability while maintaining the same functionality.
6a80eb7 to 3316be0 (Compare)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
25-68: Critical: `self.system_prompt` is never initialized, causing pipeline failures. The `role` parameter and `load_role` import are added, but the role is never actually loaded and `self.system_prompt` is never assigned. This causes `AttributeError` at line 156 when `_get_system_prompt()` checks `if self.system_prompt:`. Additionally, the docstring at lines 35-40 is misplaced after the import statement instead of immediately following the function signature.
🔎 Proposed fix
```diff
 def __init__(
     self,
     api_key: str,
     provider: str = "openai",
     role: str = "default",
     model: str | None = None,
     cache: Optional["SemanticCache"] = None,
 ):
+    """Initialize CommandInterpreter.
+
+    Args:
+        api_key: API key for the LLM provider
+        provider: Provider name ("openai", "claude", or "ollama")
+        role: Role name for specialized prompts (default: "default")
+        model: Optional model name override
+        cache: Optional SemanticCache instance for response caching
+    """
     from cortex.roles.loader import load_role

-    """Args:
-        api_key: API key for the LLM provider
-        provider: Provider name ("openai", "claude", or "ollama")
-        model: Optional model name override
-        cache: Optional SemanticCache instance for response caching
-    """
+    # Load role configuration and extract system prompt
+    role_config = load_role(role)
+    self.system_prompt = role_config.get("system_prompt", "")
+
     self.api_key = api_key
     self.provider = APIProvider(provider.lower())
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/scripts/cla_check.py, cortex/llm/interpreter.py, cortex/planner.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/planner.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🪛 GitHub Actions: CI
cortex/llm/interpreter.py
[error] 156-156: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'. Did you mean: '_get_system_prompt'?
[error] 184-184: Claude API call failed: 'CommandInterpreter' object has no attribute 'system_prompt'
[error] 176-176: OpenAI API call failed: 'CommandInterpreter' object has no attribute 'system_prompt'
[error] 311-311: parse() failed due to missing system_prompt attribute when building prompt: 'CommandInterpreter' object has no attribute 'system_prompt'
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (5)
.github/scripts/cla_check.py (1)
1-389: Formatting changes look good.The modifications in this file are cosmetic reformatting (single-line signatures, trailing commas, bot allowlist alignment) with no functional impact. The existing logic for CLA verification, GitHub API interactions, and bot detection remains correct.
cortex/llm/interpreter.py (4)
131-159: Prompt composition logic is correct, pending `self.system_prompt` initialization. The approach of prepending the role-specific `system_prompt` to the base prompt is appropriate. Once `self.system_prompt` is properly initialized in `__init__`, this method will correctly merge role-specific prompts with the base instructions.
193-222: LGTM!The enhanced input with explicit JSON format instructions and the lower temperature setting (0.1) should help local Ollama models produce more consistent, parseable responses.
251-285: Robust JSON extraction approach. The regex-based extraction of `{"commands": [...]}` is more reliable than prior approaches. One edge case: when stripping markdown code blocks (lines 254-258), `parts[1]` may include the language identifier (e.g., `json\n{"commands":...}`), but the subsequent regex correctly handles this by searching for the JSON pattern within the content.
340-351: LGTM!The signature reformatting improves readability while maintaining the same behavior.
3316be0 to facf4d7 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (3)
cortex/planner.py (3)
11-37: Critical format mismatch between prompt and parser. The prompt instructs the LLM to return a plain JSON array (lines 25, 27-28):

```
- Return as a JSON list of strings
Example:
["Install Python", "Install ML libraries", "Set up Jupyter"]
```

However, `interpreter.parse()` (line 37) calls `_parse_commands()`, which expects the format `{"commands": [...]}` (see `cortex/llm/interpreter.py` lines 265-269). This mismatch will cause a `ValueError: No valid JSON command block found`.
🔎 Proposed fix
6-9: Incomplete docstring for public API.The docstring lacks required documentation for parameters, return type, and exceptions. Per coding guidelines, all public APIs must have comprehensive docstrings.
🔎 Proposed fix
""" Generate a human-readable installation plan using LLM (Ollama). + + Args: + intent: High-level user intent describing what to install + slots: Dictionary of extracted details (e.g., packages, options) + + Returns: + List of installation step descriptions as strings + + Raises: + RuntimeError: If LLM call or parsing fails + ValueError: If response format is invalid + + Note: + Requires Ollama to be running locally (ollama serve). """As per coding guidelines, docstrings are required for all public APIs.
1-1: Critical: Cannot importdictandlistfromtyping.Line 1 attempts to import
dictandlistfrom thetypingmodule, which is incorrect. These are built-in types in Python 3.9+ and cannot be imported fromtyping. This will cause anImportErrorwhen the module is loaded.🔎 Proposed fix
-from typing import Any, dict, list +from typing import AnyThe type hints
dict[str, Any]andlist[str]used in the function signature (line 6) work directly with built-in types in Python 3.9+.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/scripts/cla_check.py, cortex/llm/interpreter.py, cortex/planner.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/planner.py, cortex/llm/interpreter.py
🧬 Code graph analysis (2)
cortex/planner.py (1)
cortex/llm/interpreter.py (2)
CommandInterpreter(18-355), parse(310-342)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🪛 GitHub Actions: CI
cortex/llm/interpreter.py
[error] 160-160: Test failure in pytest run: AttributeError: 'CommandInterpreter' object has no attribute 'system_prompt'. Did you mean: '_get_system_prompt'?
🪛 GitHub Actions: CLA Check
.github/scripts/cla_check.py
[error] 1-1: CLA check failed: 10 contributor(s) have not signed the CLA. Process exited with code 1 while running 'python .github/scripts/cla_check.py'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (4)
.github/scripts/cla_check.py (1)
11-11: LGTM! Clean formatting improvements. All changes in this file are formatting and style improvements that enhance readability and PEP 8 compliance:
- Function signatures reformatted for better readability
- Trailing commas added consistently
- Bot allowlist reformatted to one entry per line
No functional changes detected.
Also applies to: 89-89, 129-129, 137-137, 159-159, 177-177, 189-189, 201-201, 242-242, 260-260, 265-265, 270-270, 291-291, 304-304, 312-312, 338-348
cortex/llm/interpreter.py (3)
135-163: LGTM! Role-based system prompt correctly integrated. The refactored `_get_system_prompt` now correctly incorporates `self.system_prompt` from the loaded role (lines 160-161), addressing the previous review comment. The implementation properly (see the sketch after this list):
- Builds a base prompt based on the mode (simplified vs. full)
- Prepends the role-specific system prompt when available
- Falls back to the base prompt when no role prompt is defined
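For readers skimming the thread, the pattern being praised reduces to a small prefixing step. A minimal sketch, assuming placeholder base-prompt text (the real prompts live in interpreter.py):

```python
class _RolePromptSketch:
    """Sketch of the prompt-building pattern; the real class is CommandInterpreter."""

    system_prompt: str = ""  # populated from the loaded role in __init__

    def _get_system_prompt(self, simplified: bool = False) -> str:
        # Placeholder base prompts standing in for the real ones
        base_prompt = (
            "Return installation steps as JSON."
            if simplified
            else "You are a Linux installation assistant. Return JSON commands."
        )
        # Prepend the role prompt when one was loaded; otherwise use the base alone
        if self.system_prompt:
            return f"{self.system_prompt}\n\n{base_prompt}"
        return base_prompt
```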
255-289: LGTM! Much more robust command parsing.

The refactored `_parse_commands` significantly improves reliability:
- Handles markdown code blocks (lines 257-261)
- Uses regex to extract JSON from mixed content (lines 265-269)
- Provides clear error messages with exception chaining (lines 288-289)
- Validates data types properly (lines 277-279, 283-284)
This addresses potential parsing failures with various LLM output formats.
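As a rough illustration of that pipeline, here is a self-contained sketch mirroring what `_parse_commands` is described as doing; the exact code under review may differ:

````python
import json
import re


def parse_commands(content: str) -> list[str]:
    """Extract a list of shell commands from an LLM response."""
    # Strip a markdown code fence if the model wrapped its JSON in one
    if "```" in content:
        parts = content.split("```")
        if len(parts) >= 2:
            content = parts[1].removeprefix("json")
    # Pull the first JSON object out of any surrounding chatter
    match = re.search(r"\{.*\}", content, re.DOTALL)
    if not match:
        raise ValueError(f"No JSON object found in response: {content!r}")
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError as exc:
        raise ValueError(f"Invalid JSON in response: {exc}") from exc
    commands = data.get("commands")
    if not isinstance(commands, list):
        raise ValueError("Expected a 'commands' list in the response")
    # Keep only non-empty string entries, trimmed of whitespace
    return [c.strip() for c in commands if isinstance(c, str) and c.strip()]
````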
344-349: LGTM! Improved signature formatting.

The reformatted method signature improves readability with explicit `self` and properly formatted type hints using modern Python syntax.
facf4d7 to
0797187
Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
1-356: Critical: Black formatting check failed in CI pipeline.

The CI pipeline reports that this file requires Black formatting. This blocks the merge.
Run the following command to fix:
```bash
black cortex/llm/interpreter.py
```

Then commit the formatting changes. (Black rewrites files in place by default; there is no `--write` flag, and `--check` only reports.)
♻️ Duplicate comments (1)
cortex/llm/interpreter.py (1)
34-39: Document the new `role` parameter in the docstring.

The `role` parameter (line 29) is not documented in the Args section. Per coding guidelines, docstrings are required for all public APIs and their parameters.

🔎 Proposed fix

```diff
 """Args:
     api_key: API key for the LLM provider
     provider: Provider name ("openai", "claude", or "ollama")
+    role: Role name for specialized prompts (default: "default")
     model: Optional model name override
     cache: Optional SemanticCache instance for response caching
 """
```

As per coding guidelines, docstrings are required for all public APIs.
🧹 Nitpick comments (1)
cortex/llm/interpreter.py (1)
345-350: Remove redundant `self` parameter from signature annotation.

Line 346 explicitly lists `self` in the method signature, which is non-idiomatic. In Python instance methods, `self` is implicit and shouldn't be explicitly annotated in the signature (though it appears in the actual definition line 345).

🔎 Proposed fix

```diff
 def parse_with_context(
-    self,
     user_input: str,
     system_info: dict[str, Any] | None = None,
     validate: bool = True,
 ) -> list[str]:
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/scripts/cla_check.py`
- `cortex/llm/interpreter.py`
- `cortex/planner.py`
🚧 Files skipped from review as they are similar to previous changes (2)
- cortex/planner.py
- .github/scripts/cla_check.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
🪛 GitHub Actions: CI
cortex/llm/interpreter.py
[error] 1-1: Command 'black --check . --exclude "(venv|.venv|build|dist)"' failed with exit code 1. 1 file would be reformatted: /home/runner/work/cortex/cortex/cortex/llm/interpreter.py. Run 'black --write' to fix.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/llm/interpreter.py (2)
129-164: LGTM! Role-specific system prompts now properly integrated.

The method correctly incorporates `self.system_prompt` from the loaded role (lines 161-162), addressing the previous review concern about the role prompt being stored but unused. The implementation properly handles both simplified and full prompt modes.
256-290: LGTM! Robust JSON parsing with comprehensive validation.

The implementation correctly handles markdown code blocks, uses regex with `re.DOTALL` for multiline JSON, validates the structure at each step, and provides clear error messages. The command cleaning logic (lines 282-287) properly filters and strips whitespace.
0797187 to
cfed7ed
Compare
cfed7ed to
f925558
Compare
Anshgrover23
left a comment
@pavanimanchala53 CI checks are failing; kindly update your branch.
Actionable comments posted: 1
♻️ Duplicate comments (1)
cortex/llm/interpreter.py (1)
33-38: Document the `role` parameter in the docstring.

The `role` parameter added at line 29 is not documented in the Args section of the docstring, violating the coding guideline that requires docstrings for all public APIs with complete parameter documentation.

🔎 Proposed fix

```diff
 """Args:
     api_key: API key for the LLM provider
     provider: Provider name ("openai", "claude", or "ollama")
+    role: Role name for specialized prompts (default: "default")
     model: Optional model name override
     cache: Optional SemanticCache instance for response caching
 """
```

As per coding guidelines, docstrings are required for all public APIs.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `cortex/cli.py`
- `cortex/llm/interpreter.py`
- `cortex/planner.py`
🚧 Files skipped from review as they are similar to previous changes (2)
- cortex/cli.py
- cortex/planner.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (4)
cortex/llm/interpreter.py (4)
39-46: LGTM!

The role loading logic is clean and well-structured. The `load_role` function handles fallback to the default role internally, and allowing `RoleNotFoundError` to propagate for missing default roles is appropriate since the system cannot function without role definitions.
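For context, the fallback pattern being described could look roughly like this; the file layout, YAML keys, and function body are assumptions based on this PR's description, not verbatim from `cortex/roles/loader.py`:

```python
from pathlib import Path

import yaml

ROLES_DIR = Path(__file__).parent  # assumed location of the bundled *.yaml role files


class RoleNotFoundError(Exception):
    """Raised when neither the requested role nor the default role exists."""


def load_role(name: str) -> dict:
    """Load a role definition, falling back to the default role if missing."""
    path = ROLES_DIR / f"{name}.yaml"
    if not path.exists():
        path = ROLES_DIR / "default.yaml"
        if not path.exists():
            raise RoleNotFoundError(f"No role file for '{name}' and no default role")
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f) or {}
```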
197-205: LGTM!

The additional newline before the JSON format example improves prompt readability without affecting functionality.
255-289: LGTM!

The `_parse_commands` implementation is robust:

- Handles markdown code blocks
- Uses regex with `re.DOTALL` to extract JSON reliably
- Validates data structure and filters invalid entries
- Provides clear error messages
The parsing logic correctly handles the expected JSON format from all providers.
344-349: LGTM!

The `parse_with_context` signature is correctly defined with appropriate type hints and default values. The implementation properly enriches the user input with system context before delegating to `parse`.
@pavanimanchala53 PR doesn’t follow contributing.md (AI usage disclosure, docs, and demo video are missing). Please update these and convert the PR to draft until then.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 35-40: The constructor's docstring for the class __init__ (in
cortex/llm/interpreter.py) is missing documentation for the new role parameter;
update the Args section to include a short line for role (e.g., "role:
Optional[str] role name for messages or persona (e.g., 'system', 'user',
'assistant')") describing its type and purpose and noting any default behavior
so the public API docstring matches the signature.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
45-48: Consider renaming `self.role` to `self.role_data` or `self.role_config` for clarity.

The current naming, where `self.role_name` stores the role name (string) and `self.role` stores the role dictionary, is potentially confusing. Line 47 also overwrites the `role` parameter (which was a string) with a dict, which reduces code clarity.

♻️ Suggested naming improvement

```diff
 # Load role and system prompt
 self.role_name = role
-self.role = load_role(role)
-self.system_prompt: str = self.role.get("system_prompt", "")
+self.role_data = load_role(role)
+self.system_prompt: str = self.role_data.get("system_prompt", "")
```
146-149: Simplify system prompt handling - remove unnecessary `getattr`.

The `getattr` on line 146 is unnecessary since `self.system_prompt` is always initialized in `__init__` (line 48). Additionally, line 148 uses `self.system_prompt` instead of the local variable `system_prompt` from line 146, creating inconsistency.

♻️ Simplified version

```diff
-system_prompt = getattr(self, "system_prompt", "")
-if system_prompt:
+if self.system_prompt:
     return f"{self.system_prompt}\n\n{base_prompt}"
 return base_prompt
```
243-248: Consider more robust markdown code block removal.

The current logic splits by `` ``` `` and takes `parts[1]`, which works for simple cases but might fail if there are multiple code blocks or unexpected formatting. Consider using regex to extract the content between code block markers more precisely.

♻️ More robust alternative

````diff
 # Remove markdown code blocks if present
 if "```" in content:
-    parts = content.split("```")
-    if len(parts) >= 2:
-        content = parts[1]
+    import re
+    # Extract content between first pair of code block markers
+    match = re.search(r'```(?:\w+)?\s*(.*?)\s*```', content, re.DOTALL)
+    if match:
+        content = match.group(1)
````
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `cortex/cli.py`
- `cortex/llm/interpreter.py`
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm/interpreter.py
🧠 Learnings (1)
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/roles/loader.py (1)
load_role(17-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (3)
cortex/llm/interpreter.py (3)
187-191: LGTM - Improved prompt formatting.

Adding a blank line before the JSON format example improves visual separation and should help the LLM better distinguish the format instructions from the user input.
251-275: LGTM - Robust JSON extraction and validation.

The regex-based JSON extraction with `re.DOTALL`, comprehensive validation, and error handling with context wrapping is well-implemented. The filtering to only accept non-empty string commands is appropriate.
41-48: No action needed — the original concern does not apply to the current codebase.

The review assumes that the `role` parameter is exposed via CLI and requires validation before passing to `CommandInterpreter`. However:

- No `--role` CLI argument exists in the argument parser
- The `role` parameter in `CortexCLI.__init__` (line 40) is unused and never stored or passed to `CommandInterpreter`
- `CommandInterpreter` is instantiated at line 677 with only `api_key` and `provider`, always using the hardcoded default `role="default"`

Since the role parameter is not exposed to CLI input, no CLI-level validation is necessary. Triggering `RoleNotFoundError` would require direct code modification to pass a non-existent role, or the default role file itself going missing.

Likely an incorrect or invalid review comment.
| """Args: | ||
| api_key: API key for the LLM provider | ||
| provider: Provider name ("openai", "claude", or "ollama") | ||
| model: Optional model name override | ||
| cache: Optional SemanticCache instance for response caching | ||
| """ |
Document the role parameter in the docstring.
The role parameter was added to the constructor signature but is not documented in the Args section of the docstring. According to the coding guidelines, docstrings are required for all public APIs.
📝 Add role parameter documentation
"""Args:
api_key: API key for the LLM provider
provider: Provider name ("openai", "claude", or "ollama")
+ role: Role name for specialized prompts (default: "default")
model: Optional model name override
cache: Optional SemanticCache instance for response caching
"""📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| """Args: | |
| api_key: API key for the LLM provider | |
| provider: Provider name ("openai", "claude", or "ollama") | |
| model: Optional model name override | |
| cache: Optional SemanticCache instance for response caching | |
| """ | |
| """Args: | |
| api_key: API key for the LLM provider | |
| provider: Provider name ("openai", "claude", or "ollama") | |
| role: Role name for specialized prompts (default: "default") | |
| model: Optional model name override | |
| cache: Optional SemanticCache instance for response caching | |
| """ |
🤖 Prompt for AI Agents
In @cortex/llm/interpreter.py around lines 35 - 40, The constructor's docstring
for the class __init__ (in cortex/llm/interpreter.py) is missing documentation
for the new role parameter; update the Args section to include a short line for
role (e.g., "role: Optional[str] role name for messages or persona (e.g.,
'system', 'user', 'assistant')") describing its type and purpose and noting any
default behavior so the public API docstring matches the signature.
@pavanimanchala53 This PR does not meet our contribution standards (no demo video, tests, docs, or proper AI usage disclosure per contributing.md). Closing this one; feel free to create a new PR only after a proper self-review and cleanup.



Fixes cortexlinux/cortex-pro#1
This PR introduces support for custom AI roles/personas for specialized tasks in Cortex.
Key changes:
- Added a `--role` CLI flag (default: `default`)
- Built-in roles: `default`, `security`, `devops`
- Each role's `system_prompt` is merged with the existing safety rules

The role system is applied consistently across all supported providers (OpenAI, Claude, and Ollama) and is designed to be easily extended in the future.
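For illustration, invoking a persona might look like `cortex --role security "install nginx"`, and a custom role would be a small YAML file next to the built-ins. The exact CLI shape and YAML schema below are assumptions based on this description, not project documentation:

```yaml
# roles/dba.yaml (hypothetical custom role)
name: dba
description: Database administration persona
system_prompt: |
  You are a cautious database administrator. Prefer managed packages,
  and always back up data directories before destructive operations.
```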
Summary by CodeRabbit
New Features
Enhancements
Tests
✏️ Tip: You can customize this high-level summary in your review settings.