feat: implement NL parser for command execution #439
base: main
Conversation
📝 Walkthrough

Adds domain- and install-mode-aware intent extraction and robust multi-provider command parsing to the LLM CommandInterpreter; introduces NL Installer docs and tests; and extends CortexCLI with stdin-aware prompts, confidence-driven clarification flows, interactive confirm/edit/execute UX, and an expanded install(...) signature.

Changes
Sequence Diagram(s)

sequenceDiagram
autonumber
participant U as User
participant CLI as CortexCLI
participant CI as CommandInterpreter
participant Prov as Provider (OpenAI / Claude / Ollama / FAKE)
participant System as Local System
U->>CLI: "install <natural language>"
CLI->>CLI: _build_prompt_with_stdin(user_input)
CLI->>CI: extract_intent(user_input)
CI->>Prov: send intent prompt (domain/install_mode aware)
Prov-->>CI: response (text / JSON-like)
CI->>CI: _parse_intent_from_text / _parse_commands
CI-->>CLI: intent dict + commands
CLI->>U: show understanding + confidence
alt low/medium confidence
CLI->>U: ask clarifying question(s)
U-->>CLI: clarification
CLI->>CI: extract_intent(clarified_input)
else high confidence
CLI->>U: show commands (confirm / edit)
alt execute (--execute)
CLI->>System: run commands
else dry-run / cancel
CLI->>U: finish without executing
end
end
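For reference, a hedged sketch of the intent dict that the extract_intent step in the diagram returns. The field names are taken from the review comments below; the example values are invented for illustration:

```python
# Illustrative values only; the field set matches the intent dicts
# discussed throughout this review.
intent = {
    "action": "install",       # or "unknown" when extraction fails
    "domain": "containers",    # rough topical domain of the request
    "install_mode": "system",  # "system" (apt/sudo) vs "python" (pip/venv)
    "description": "Install Docker and start the daemon",
    "ambiguous": False,        # True triggers the clarification flow
    "confidence": 0.9,         # 0.0-1.0, drives low/medium/high branching
}
```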
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
📜 Recent review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (2)
🧰 Additional context used

📓 Path-based instructions (1)

**/*.py

📄 CodeRabbit inference engine (AGENTS.md)
Files:
🧠 Learnings (4)

📓 Common learnings

📚 Learning: 2026-01-05T20:26:18.626Z
Applied to files:
📚 Learning: 2026-01-12T20:51:13.828Z
Applied to files:

📚 Learning: 2026-01-12T20:51:13.828Z
Applied to files:
🔇 Additional comments (17)
✏️ Tip: You can disable this entire section by setting

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 7
🧹 Nitpick comments (5)
docs/docs/nl-installer.md (1)
1-18: Consider expanding documentation to cover additional NL installer capabilities.

The documentation covers basic usage well. However, per the linked issue #248 acceptance criteria, the NL installer should also support:
- AI reasoning display ("I understood you want...")
- Confidence scoring
- Typo tolerance
- Graceful ambiguity handling with clarifying questions
Consider adding sections documenting these features once implemented, or noting them as future enhancements.
tests/test_nl_installer.py (1)
5-21: Test coverage is insufficient per acceptance criteria.

Issue #248 requires 10+ natural language test cases. Currently there's only one test case. Consider adding tests for:
- Web server requests ("I need a web server")
- Python dev environment setup
- Docker/Kubernetes installation
- Ambiguous requests
- Typo tolerance
- Edge cases with unknown requests
Would you like me to generate additional test cases to meet the acceptance criteria?
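One possible shape for those cases, sketched as a parametrized pytest module. The fake_interpreter fixture and the specific inputs are assumptions for illustration, not code from this PR:

```python
import pytest

# Hypothetical natural-language cases covering the gaps listed above.
NL_CASES = [
    "I need a web server",
    "set up a python dev environment",
    "instal dockr",                   # typo tolerance
    "install docker and kubernetes",
    "make my computer do the thing",  # ambiguous / unknown request
]


@pytest.mark.parametrize("request_text", NL_CASES)
def test_nl_request_yields_commands(fake_interpreter, request_text):
    commands = fake_interpreter.parse(request_text)
    assert isinstance(commands, list)
    assert all(isinstance(c, str) and c.strip() for c in commands)
```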
cortex/cli.py (2)
34-36: Add return type hint per coding guidelines.

The function is missing a return type annotation.
🔎 Proposed fix
-def _is_interactive():
-    return sys.stdin.isatty()
+def _is_interactive() -> bool:
+    """Check if stdin is connected to an interactive terminal."""
+    return sys.stdin.isatty()
655-656: Remove duplicate comment line.

The comment # ---------- User confirmation ---------- appears twice consecutively.

🔎 Proposed fix

-    # ---------- User confirmation ----------
     # ---------- User confirmation ----------
     if execute:

cortex/llm/interpreter.py (1)
531-563: Remove unused _estimate_confidence method or integrate it into the intent extraction flow.

The _estimate_confidence method is defined but never called anywhere in the codebase. Either integrate it into the intent extraction logic or remove it to eliminate dead code.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/cli.py
- cortex/llm/interpreter.py
- docs/docs/nl-installer.md
- tests/test_nl_installer.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_nl_installer.py
- cortex/cli.py
- cortex/llm/interpreter.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_nl_installer.py
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
tests/test_nl_installer.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (2)
tests/test_nl_installer.py (1)
cortex/llm/interpreter.py (1)
parse(461-519)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
_print_error(746-748)

cortex/llm/interpreter.py (2)
parse(461-519)
extract_intent(565-576)
🪛 GitHub Actions: CI
tests/test_nl_installer.py
[error] 1-1: Ruff: I001 Import block is un-sorted or un-formatted.
🪛 GitHub Check: lint
tests/test_nl_installer.py
[failure] 21-21: Ruff (W292)
tests/test_nl_installer.py:21:40: W292 No newline at end of file
[failure] 1-2: Ruff (I001)
tests/test_nl_installer.py:1:1: I001 Import block is un-sorted or un-formatted
🪛 GitHub Check: Lint
tests/test_nl_installer.py
[failure] 21-21: Ruff (W292)
tests/test_nl_installer.py:21:40: W292 No newline at end of file
[failure] 1-2: Ruff (I001)
tests/test_nl_installer.py:1:1: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (6)
cortex/cli.py (3)
44-54: LGTM!

The _build_prompt_with_stdin method is well-implemented with a clear docstring and proper handling of optional stdin data.
620-633: Prompt construction for Python mode aligns with best practices.

The logic correctly guides Python/ML requests to use pip and virtualenvs while avoiding sudo, which aligns with the coding guideline about no silent sudo execution.
587-597: Fake provider bypasses audit logging.

Per coding guidelines, audit logging to ~/.cortex/history.db is required for all package operations. The fake provider path returns early without recording to InstallationHistory. Consider whether this is intentional for testing or if minimal logging should be added.

cortex/llm/interpreter.py (3)
42-44: LGTM!

Good defensive initialization of the Ollama base URL, ensuring it's available when needed by provider-specific methods.
377-440: Robust command parsing implementation looks good.

The _parse_commands method handles multiple JSON formats, markdown code blocks, and common LLM output quirks. The error handling with debug output to stderr is helpful for troubleshooting.
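As a rough illustration of that tolerant-parsing approach (not the project's actual _parse_commands), a minimal parser that survives markdown fences and loose text around the JSON might look like:

```python
import json


def parse_commands_sketch(raw: str) -> list[str]:
    """Tolerant command extraction from LLM output (illustrative only)."""
    # Isolate the outermost JSON object, ignoring fences or chatter around it
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        return []
    try:
        data = json.loads(raw[start : end + 1])
    except json.JSONDecodeError:
        return []
    commands = data.get("commands", [])
    if not isinstance(commands, list):
        return []
    # Keep only non-empty string commands
    return [c for c in commands if isinstance(c, str) and c.strip()]
```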
275-303: Good fallback handling in _parse_intent_from_text.

The method gracefully returns a safe default intent when parsing fails, avoiding exceptions propagating up. The minimal validation approach is appropriate.
Force-pushed from f99331b to 580ff14
Actionable comments posted: 0
♻️ Duplicate comments (5)
cortex/cli.py (2)
587-597: Fix hardcoded success message (duplicate issue).

The hardcoded "docker installed successfully!" message on line 595 is misleading when other software is requested. This was flagged in a previous review.

🔎 Proposed fix

 if execute:
-    print("\ndocker installed successfully!")
+    print(f"\n{software} installed successfully!")
657-702: Add logging for auto-approval in non-interactive mode (duplicate issue).

Auto-approval at line 660 occurs silently without warning or audit trail. Since generated commands may contain sudo operations, this violates the security requirement for explicit confirmation. This issue was flagged in a previous review.

Add a warning before auto-approving:

 if not _is_interactive():
     # Non-interactive mode (pytest / CI) → auto-approve
+    cx_print("⚠️ Auto-approving in non-interactive mode (CI/pytest)", "warning")
     choice = "y"

Based on learnings, no silent sudo execution is allowed - require explicit user confirmation.
cortex/llm/interpreter.py (3)
204-242: Fix JSON format typo (duplicate issue).

Line 237 has a typo in the JSON format specification: "install_mode" "..." should be "install_mode": "..." (missing colon). This was flagged in a previous review.

🔎 Proposed fix

 Format:
 {
   "action": "...",
   "domain": "...",
-  "install_mode" "..."
+  "install_mode": "...",
   "description": "...",
   "ambiguous": true/false,
   "confidence": 0.0
 }
261-273: Add error handling for JSON parsing (duplicate issue).

Line 273 calls json.loads(content) without try/except. If the LLM returns malformed JSON, this will raise an unhandled JSONDecodeError. This was flagged in a previous review.

🔎 Proposed fix

 def _extract_intent_openai(self, user_input: str) -> dict:
-    response = self.client.chat.completions.create(
-        model=self.model,
-        messages=[
-            {"role": "system", "content": self._get_intent_prompt()},
-            {"role": "user", "content": user_input},
-        ],
-        temperature=0.2,
-        max_tokens=300,
-    )
-
-    content = response.choices[0].message.content.strip()
-    return json.loads(content)
+    try:
+        response = self.client.chat.completions.create(
+            model=self.model,
+            messages=[
+                {"role": "system", "content": self._get_intent_prompt()},
+                {"role": "user", "content": user_input},
+            ],
+            temperature=0.2,
+            max_tokens=300,
+        )
+
+        content = response.choices[0].message.content.strip()
+        return self._parse_intent_from_text(content)
+    except (json.JSONDecodeError, Exception) as e:
+        return {
+            "action": "unknown",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": f"Failed to extract intent: {str(e)}",
+            "ambiguous": True,
+            "confidence": 0.0,
+        }
565-576: Add FAKE provider support (duplicate issue).

The extract_intent method raises ValueError for unsupported providers, but FAKE is a valid APIProvider used in tests. If extract_intent is called with a FAKE provider, it will fail. This was flagged in a previous review.

🔎 Proposed fix

 def extract_intent(self, user_input: str) -> dict:
     if not user_input or not user_input.strip():
         raise ValueError("User input cannot be empty")

     if self.provider == APIProvider.OPENAI:
         return self._extract_intent_openai(user_input)
     elif self.provider == APIProvider.CLAUDE:
         raise NotImplementedError("Intent extraction not yet implemented for Claude")
     elif self.provider == APIProvider.OLLAMA:
         return self._extract_intent_ollama(user_input)
+    elif self.provider == APIProvider.FAKE:
+        # Return a default intent for testing
+        return {
+            "action": "install",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": user_input,
+            "ambiguous": False,
+            "confidence": 1.0,
+        }
     else:
         raise ValueError(f"Unsupported provider: {self.provider}")
🧹 Nitpick comments (7)
cortex/cli.py (3)
34-36: Add return type hint.

The coding guidelines require type hints for Python code. Add -> bool to the function signature.

🔎 Proposed fix

-def _is_interactive():
+def _is_interactive() -> bool:
     return sys.stdin.isatty()

As per coding guidelines, type hints are required in Python code.
44-54: Clarify stdin_data source and add return type hint.

The method accesses self.stdin_data but it's not clear where this attribute is set. Consider adding a comment or expanding the docstring to explain when and how stdin_data is populated. Also, add the missing return type hint -> str.

🔎 Proposed fix

 def _build_prompt_with_stdin(self, user_prompt: str) -> str:
     """
     Combine optional stdin context with user prompt.
+
+    The stdin_data attribute should be set externally before calling this method
+    if stdin context is available (e.g., from piped input).
     """

As per coding guidelines, type hints are required and docstrings should be clear about attribute dependencies.
122-189: Add return type hint and consider public API for saving config.

Missing return type hint -> int. Also, while the comment at line 148 acknowledges using the private _save_config() method, consider adding a public save_config() method to NotificationManager for better API design.

🔎 Proposed fix for type hint

-    def notify(self, args):
+    def notify(self, args) -> int:
         """Handle notification commands"""

As per coding guidelines, type hints are required in Python code.
cortex/llm/interpreter.py (4)
42-44: Consider consolidating Ollama URL logic.

The Ollama base URL is read here and again at line 116. While this defensive initialization is good, consider storing it once in a class attribute to avoid duplication and ensure consistency.
🔎 Potential approach
Store the URL once during initialization and reuse it in _initialize_client instead of re-reading the environment variable. A minimal sketch follows.
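A hedged sketch of that consolidation, assuming only the attribute and method names quoted in this review; everything else here is abbreviated for illustration:

```python
import os


class CommandInterpreterSketch:
    """Abbreviated; only the URL-handling pieces are shown."""

    def __init__(self) -> None:
        # Read the environment exactly once...
        self.ollama_url = os.environ.get(
            "OLLAMA_BASE_URL", "http://localhost:11434"
        )
        self._initialize_client()

    def _initialize_client(self) -> None:
        # ...and reuse the stored value instead of re-reading os.environ
        base_url = self.ollama_url
        print(f"Ollama client would target {base_url}")
```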
162-202: Add return type hint and consider more specific exception handling.

Missing return type hint -> dict. Also, the bare except Exception: at line 194 is very broad. Consider catching specific exceptions like urllib.error.URLError, json.JSONDecodeError, or TimeoutError for better debugging and error reporting.

🔎 Proposed fix

 def _extract_intent_ollama(self, user_input: str) -> dict:
     import urllib.error
     import urllib.request
     # ... existing code ...
-    except Exception:
+    except (urllib.error.URLError, json.JSONDecodeError, TimeoutError) as e:
         # True failure → unknown intent
         return {
             "action": "unknown",
             "domain": "unknown",
-            "description": "Failed to extract intent",
+            "description": f"Failed to extract intent: {type(e).__name__}",
             "ambiguous": True,
             "confidence": 0.0,
         }

As per coding guidelines, type hints are required in Python code.
275-303: Add return type hint.

Missing return type hint -> dict. Otherwise, the implementation is robust with good fallback handling.

🔎 Proposed fix

 def _parse_intent_from_text(self, text: str) -> dict:
     """
     Extract intent JSON from loose LLM output.
     No semantic assumptions.
     """

As per coding guidelines, type hints are required in Python code.
531-563: Add return type hint.

Missing return type hint -> float. The confidence estimation logic is reasonable and avoids hardcoded package mappings, which aligns with the PR objectives.

🔎 Proposed fix

 def _estimate_confidence(self, user_input: str, domain: str) -> float:
     """
     Estimate confidence score without hardcoding meaning.
     Uses simple linguistic signals.
     """

As per coding guidelines, type hints are required in Python code.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/cli.py
- cortex/llm/interpreter.py
- docs/docs/nl-installer.md
- tests/test_nl_installer.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/docs/nl-installer.md
- tests/test_nl_installer.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- cortex/llm/interpreter.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (4)
cortex/cli.py (2)
567-570: LGTM: Early API key validation.

Good improvement! Validating the API key earlier prevents unnecessary processing and provides clearer error messages to users.
611-633: LGTM: Intent-based prompt construction.

Excellent flow! The code:
- Extracts intent to determine install mode
- Adapts prompts based on context (Python vs system installations)
- Correctly guides toward virtualenv for Python packages and avoids sudo
This aligns well with the PR's goal of standardizing prompt construction.
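A hedged sketch of that branching. Only the python/system split and the no-sudo rule come from the review; the function name and prompt wording are invented:

```python
def build_install_prompt(software: str, install_mode: str) -> str:
    """Illustrative mode-aware prompt selection."""
    if install_mode == "python":
        # Python/ML requests: pip inside a virtualenv, never sudo
        return (
            f"Generate shell commands that install {software} with pip "
            "inside a virtualenv. Do NOT use sudo."
        )
    # Default system mode: package-manager install
    return f"Generate shell commands that install {software} system-wide."
```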
cortex/llm/interpreter.py (2)
147-160: LGTM: Improved system prompt clarity.

The expanded prompt with explicit rules, format, and example is a good improvement. The explicit "Add sudo for system commands" guidance helps ensure proper privilege escalation.
378-440: LGTM: Robust command parsing.

Excellent improvements! The enhanced parser handles multiple output formats (strict JSON, loose output, markdown code blocks) and provides helpful debug information on failures. This addresses the PR's goal of ensuring deterministic command generation.
Anshgrover23
left a comment
@pavanimanchala53 How do we maintain these random IDs for rollback?

The ID is persisted in the installation history store and acts as a stable reference, not an ephemeral value.
This is handled by the existing InstallationHistory system. My changes only generate commands; persistence and rollback behavior were already implemented.
Anshgrover23
left a comment
@pavanimanchala53 Please demonstrate how OpenAI, Anthropic, and Ollama models respond to your prompt in three separate videos.
Always make sure videos are attached to the PR description.
Actionable comments posted: 0
♻️ Duplicate comments (3)
cortex/cli.py (1)
589-599: Hardcoded success message is misleading.

The fake provider prints "docker installed successfully!" regardless of the actual software requested. This was previously flagged and remains unresolved.

🔎 Proposed fix

 print("\nGenerated commands:")
 for i, cmd in enumerate(commands, 1):
     print(f"  {i}. {cmd}")

 if execute:
-    print("\ndocker installed successfully!")
+    print(f"\n{software} installed successfully!")

 return 0

cortex/llm/interpreter.py (2)
261-273: Add error handling for JSON parsing.

The method calls json.loads(content) without error handling (line 273). If the LLM returns malformed JSON, this will raise an unhandled JSONDecodeError. This was previously flagged and remains unresolved.

🔎 Proposed fix

 def _extract_intent_openai(self, user_input: str) -> dict:
-    response = self.client.chat.completions.create(
-        model=self.model,
-        messages=[
-            {"role": "system", "content": self._get_intent_prompt()},
-            {"role": "user", "content": user_input},
-        ],
-        temperature=0.2,
-        max_tokens=300,
-    )
-
-    content = response.choices[0].message.content.strip()
-    return json.loads(content)
+    try:
+        response = self.client.chat.completions.create(
+            model=self.model,
+            messages=[
+                {"role": "system", "content": self._get_intent_prompt()},
+                {"role": "user", "content": user_input},
+            ],
+            temperature=0.2,
+            max_tokens=300,
+        )
+
+        content = response.choices[0].message.content.strip()
+        return self._parse_intent_from_text(content)
+    except Exception as e:
+        return {
+            "action": "unknown",
+            "domain": "unknown",
+            "description": f"Failed to extract intent: {str(e)}",
+            "ambiguous": True,
+            "confidence": 0.0,
+            "install_mode": "system",
+        }
565-576: Add FAKE provider support in extract_intent.

The method raises ValueError for unsupported providers, but FAKE is a valid APIProvider (defined at line 15). If extract_intent is called with a FAKE provider (e.g., in tests), it will fail. This was previously flagged and remains unresolved.

🔎 Proposed fix

 def extract_intent(self, user_input: str) -> dict:
     if not user_input or not user_input.strip():
         raise ValueError("User input cannot be empty")

     if self.provider == APIProvider.OPENAI:
         return self._extract_intent_openai(user_input)
     elif self.provider == APIProvider.CLAUDE:
         raise NotImplementedError("Intent extraction not yet implemented for Claude")
     elif self.provider == APIProvider.OLLAMA:
         return self._extract_intent_ollama(user_input)
+    elif self.provider == APIProvider.FAKE:
+        # Return a default intent for testing
+        return {
+            "action": "install",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": user_input,
+            "ambiguous": False,
+            "confidence": 1.0,
+        }
     else:
         raise ValueError(f"Unsupported provider: {self.provider}")
🧹 Nitpick comments (7)
cortex/cli.py (3)
35-36: Add return type hint.

The function is missing a return type annotation. Per coding guidelines, type hints are required in Python code.
🔎 Proposed fix
-def _is_interactive():
+def _is_interactive() -> bool:
     return sys.stdin.isatty()
45-54: Add type hints to method signature.

The method signature lacks type annotations for the parameter and return value. Per coding guidelines, type hints are required.

🔎 Proposed fix

-    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
+    def _build_prompt_with_stdin(self, user_prompt: str) -> str:

Wait, the type hints are already present on line 45. Let me check again...

Actually, looking at line 45, the signature is def _build_prompt_with_stdin(self, user_prompt: str) -> str: and the type hints ARE present. This is correct. No issue here.
123-123: Add type hints to method signature.

The method signature lacks type annotations for the parameter and return value. Per coding guidelines, type hints are required.

🔎 Proposed fix

-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
         """Handle notification commands"""

cortex/llm/interpreter.py (4)
162-202: Add return type hint to method.

The method signature lacks a return type annotation. Per coding guidelines, type hints are required.

🔎 Proposed fix

-    def _extract_intent_ollama(self, user_input: str) -> dict:
+    def _extract_intent_ollama(self, user_input: str) -> dict[str, Any]:

Note: You'll need to import Any from typing if not already imported (it is on line 5).
204-242: Add return type hint to method.

The method signature lacks a return type annotation.

🔎 Proposed fix

-    def _get_intent_prompt(self) -> str:
+    def _get_intent_prompt(self) -> str:

Wait, checking line 204 again... it already has -> str. No issue here.
275-303: Add type hints to method signature.

The method signature lacks type annotations. Per coding guidelines, type hints are required.

🔎 Proposed fix

-    def _parse_intent_from_text(self, text: str) -> dict:
+    def _parse_intent_from_text(self, text: str) -> dict[str, Any]:
531-563: Remove minimum confidence floor for ambiguous inputs.

Line 561 enforces a minimum score of 0.2 (score = max(score, 0.2)). Previous feedback suggested removing this lower bound so that truly unclear inputs can receive very low confidence scores, avoiding false certainty.

Additionally, past feedback noted that the heuristics don't correlate with correctness. Consider documenting that this is a UI hint for ambiguity detection, not a true confidence measure.

🔎 Proposed fix

-    # Clamp to [0.0, 1.0]
-    # Ensure some minimal confidence for valid text
-    score = max(score, 0.2)
-
-    return round(min(1.0, score), 2)
+    # Clamp to [0.0, 1.0] - allow very low scores for unclear input
+    return round(max(0.0, min(1.0, score)), 2)

Based on past review comments, the artificial floor can mask unclear or ambiguous requests.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- cortex/llm/interpreter.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
cortex/validators.py (1)
validate_install_request(117-144)

cortex/first_run_wizard.py (1)
_print_error(746-748)

cortex/llm/interpreter.py (1)
parse(461-519)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/cli.py (2)
658-713: Auto-approval logging differs from previous feedback.

The code now logs auto-approval via logger.info (lines 666-668), which addresses the audit trail concern. However, previous feedback specifically requested a visible warning message (e.g., cx_print("⚠️ Auto-approving...", "warning")) to make the auto-approval transparent to users.

The current logger.info approach may not be visible unless verbose logging is enabled, whereas the previous suggestion would always display the warning.

Consider whether the current logging level is sufficient for user transparency, or if a visible warning is still needed as originally suggested.
Based on learnings, audit logging requirements should be met, but user-visible warnings for auto-approval enhance transparency.
613-614: Error handling for extract_intent is already properly managed by the outer try-except block.

The try block at line 609 includes exception handlers for all possible exceptions raised by extract_intent: ValueError (including json.JSONDecodeError, which is a subclass of ValueError), RuntimeError, OSError, and a final catch-all except Exception clause that handles NotImplementedError and any unexpected errors. No unhandled exception paths exist.

Likely an incorrect or invalid review comment.
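The subclass claim is easy to verify with the standard library alone; a quick check:

```python
import json

# json.JSONDecodeError derives from ValueError, so a handler written as
# `except ValueError` also catches malformed-JSON errors.
assert issubclass(json.JSONDecodeError, ValueError)

try:
    json.loads("{not valid json")
except ValueError as exc:
    print(type(exc).__name__)  # prints: JSONDecodeError
```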
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
426-444: Dead code: object array handling is unreachable.

The condition at line 426 if isinstance(commands, list) will always be true (since data.get("commands", []) defaults to a list), and the early return at line 427 makes lines 429-444 unreachable. The original intent to handle [{"command": "cmd1"}] format is broken.

🔎 Proposed fix - integrate both formats in one block

 # First attempt: strict JSON
 data = json.loads(json_blob)
 commands = data.get("commands", [])

-if isinstance(commands, list):
-    return [c for c in commands if isinstance(c, str) and c.strip()]
-
 # Handle both formats:
 # 1. ["cmd1", "cmd2"] - direct string array
 # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
+if not isinstance(commands, list):
+    raise ValueError("Commands must be a list")
+
 result = []
 for cmd in commands:
     if isinstance(cmd, str):
-        # Direct string
-        if cmd:
+        if cmd.strip():
             result.append(cmd)
     elif isinstance(cmd, dict):
         # Object with "command" key
         cmd_str = cmd.get("command", "")
-        if cmd_str:
+        if cmd_str and cmd_str.strip():
             result.append(cmd_str)

 return result
♻️ Duplicate comments (1)
cortex/llm/interpreter.py (1)
575-586: FAKE provider still not supported in extract_intent.

As flagged in a previous review, APIProvider.FAKE is valid but not handled here, causing ValueError in tests. Add a FAKE handler or document the limitation. Also, per coding guidelines, public methods require docstrings.

🔎 Proposed fix

 def extract_intent(self, user_input: str) -> dict:
+    """Extract structured intent from natural language input.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Dictionary containing action, domain, install_mode, etc.
+
+    Raises:
+        ValueError: If input is empty
+        NotImplementedError: If provider doesn't support intent extraction
+    """
     if not user_input or not user_input.strip():
         raise ValueError("User input cannot be empty")

     if self.provider == APIProvider.OPENAI:
         return self._extract_intent_openai(user_input)
     elif self.provider == APIProvider.CLAUDE:
         raise NotImplementedError("Intent extraction not yet implemented for Claude")
     elif self.provider == APIProvider.OLLAMA:
         return self._extract_intent_ollama(user_input)
+    elif self.provider == APIProvider.FAKE:
+        # Return a mock intent for testing
+        return {
+            "action": "install",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": user_input,
+            "ambiguous": False,
+            "confidence": 1.0,
+        }
     else:
         raise ValueError(f"Unsupported provider: {self.provider}")
🧹 Nitpick comments (2)
cortex/llm/interpreter.py (2)
42-44: Minor redundancy in Ollama URL handling.

self.ollama_url is set here but _initialize_client (line 116) and _call_ollama (line 353) independently read from the environment variable. Consider consolidating to use self.ollama_url consistently throughout the class.
261-283: Good error handling, add docstring.

The try/except wrapper properly handles API and JSON parsing failures. Per coding guidelines, add a docstring to document the method's purpose.

🔎 Proposed docstring

 def _extract_intent_openai(self, user_input: str) -> dict:
+    """Extract intent from user input using OpenAI API.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Dictionary containing extracted intent fields
+    """
     try:
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- cortex/llm/interpreter.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (1)
cortex/llm/interpreter.py (1)
147-161: LGTM!

The system prompt is well-structured with clear rules, format specification, and a helpful example.
Force-pushed from 6d607ea to 7f5340e
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/cli.py (1)
556-563: Unused parameter assume_yes.

The assume_yes parameter is declared but never used in the method body. Either remove it or implement the intended functionality (likely to skip confirmation prompts).

🔎 Suggested implementation or removal

If intended for auto-confirmation:

 if not _is_interactive():
+    if not assume_yes:
+        self._print_error("Non-interactive mode requires --yes flag")
+        return 1
     choice = "y"

Or remove if not needed:

 def install(
     self,
     software: str,
     execute: bool = False,
     dry_run: bool = False,
     parallel: bool = False,
-    assume_yes: bool = False,
 ):
♻️ Duplicate comments (5)
cortex/llm/interpreter.py (4)
162-202: Missing docstring and install_mode in fallback (unresolved).

Per prior review and coding guidelines:

- Docstring is required for this method
- The fallback dict (lines 196-202) is missing install_mode, which is validated as required in _parse_intent_from_text

🔎 Proposed fix

 def _extract_intent_ollama(self, user_input: str) -> dict:
+    """Extract intent from user input using Ollama API.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Dictionary containing extracted intent fields
+    """
     import urllib.error
     import urllib.request

 except Exception:
     # True failure → unknown intent
     return {
         "action": "unknown",
         "domain": "unknown",
         "description": "Failed to extract intent",
         "ambiguous": True,
         "confidence": 0.0,
+        "install_mode": "system",
     }
233-241: JSON format example still has syntax issues (unresolved).

Per prior review, the JSON example is missing commas between key-value pairs. This invalid JSON could affect LLM output quality.

🔎 Proposed fix

 Format:
 {
-  "action": "...",
-  "domain": "...",
-  "install_mode": "..."
-  "description": "...",
-  "ambiguous": true/false,
-  "confidence": 0.0
+  "action": "...",
+  "domain": "...",
+  "install_mode": "...",
+  "description": "...",
+  "ambiguous": true,
+  "confidence": 0.0
 }
306-313: Fallback dict missing install_mode field (unresolved).

Per prior review, the fallback return is missing install_mode, which is validated as required in the success path (line 298).

 # If parsing fails, do NOT guess meaning
 return {
     "action": "unknown",
     "domain": "unknown",
     "description": "Unstructured intent output",
     "ambiguous": True,
     "confidence": 0.0,
+    "install_mode": "system",
 }
569-573: Minimum score floor still causes false certainty (unresolved).

Per prior review, the max(score, 0.2) floor should be removed so unclear input can result in very low scores. The current implementation masks truly ambiguous requests.

-    # Clamp to [0.0, 1.0]
-    # Ensure some minimal confidence for valid text
-    score = max(score, 0.2)
-
-    return round(min(1.0, score), 2)
+    # Clamp to [0.0, 1.0]
+    return round(max(0.0, min(1.0, score)), 2)

cortex/cli.py (1)
657-661: Missing auto-approval warning/logging in non-interactive mode.

The previous review flagged silent auto-approval in non-interactive mode and was marked as addressed. However, the current code still auto-approves without any warning or audit trail. Per retrieved learnings, "No silent sudo execution - require explicit user confirmation."

Based on learnings, add a warning before auto-approving:

 if not _is_interactive():
+    cx_print("⚠️ Auto-approving in non-interactive mode (CI/pytest)", "warning")
     choice = "y"
🧹 Nitpick comments (1)
cortex/cli.py (1)
679-690: Add instructions for the command editing flow.

The editing loop expects an empty line to finish, but users aren't informed of this. Consider adding guidance:

 edited_commands = []
+print("Enter commands one per line. Press Enter on an empty line to finish:")
 while True:
     line = input("> ").strip()
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- cortex/cli.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (10)
cortex/cli.py (6)
35-37: LGTM!

Simple helper to detect interactive stdin. Correctly uses sys.stdin.isatty().
569-573: LGTM!

Good early validation - fails fast with a clear error message when API key is missing.
588-600: LGTM!

Fake provider flow correctly generates commands using CommandInterpreter and exits early. The hardcoded success message issue from prior review has been addressed.
613-635: Well-designed intent-aware prompt construction.

The logic correctly differentiates between Python and system installs:
- Python mode instructs pip/virtualenv usage and explicitly avoids sudo
- System mode uses the standard install prompt
This aligns with the learned guideline about avoiding silent sudo execution.
123-189: LGTM!

The notify method handles subcommands cleanly with proper validation. Time format validation was added per prior feedback. The use of _save_config() is acknowledged in comments.
45-55: No action needed for type hints; return type is already present.

The _build_prompt_with_stdin method already includes the required return type hint (-> str) on line 45. The code complies with PEP 8 and coding guidelines.

If desired, consider initializing self.stdin_data: str | None = None in __init__ for explicit clarity, though the current defensive use of getattr works correctly.

Likely an incorrect or invalid review comment.
cortex/llm/interpreter.py (4)
42-44: LGTM!

Defensive initialization of Ollama base URL. Correctly reads from environment with sensible default.
147-161: LGTM!

System prompt is well-structured with clear rules and a valid JSON format example.
261-283: LGTM!

Error handling was added per prior review. The method now catches all exceptions and returns a safe fallback dict with install_mode included.
387-450: LGTM!

Robust command parser with good handling of multiple formats (strict JSON, loose Ollama output). Helpful debug output on parse failures.
Force-pushed from 7f5340e to 79f2c3e
Actionable comments posted: 1
♻️ Duplicate comments (4)
cortex/llm/interpreter.py (4)
162-202: Missing docstring and install_mode in fallback return.

Per coding guidelines, docstrings are required for all public APIs. Additionally, the fallback dict (lines 196-202) is missing install_mode, which _parse_intent_from_text validates as a required field; this inconsistency can cause downstream issues.

🔎 Proposed fix

 def _extract_intent_ollama(self, user_input: str) -> dict:
+    """Extract intent from user input using Ollama API.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Dictionary containing extracted intent fields
+    """
     import urllib.error
     import urllib.request

 except Exception:
     # True failure → unknown intent
     return {
         "action": "unknown",
         "domain": "unknown",
         "description": "Failed to extract intent",
         "ambiguous": True,
         "confidence": 0.0,
+        "install_mode": "system",
     }
233-241: JSON format example still has syntax issues.

The JSON example is missing commas between key-value pairs (line 237 after "install_mode": "..." and line 238 after "description": "..."). Invalid JSON in the prompt can degrade LLM output quality.

🔎 Proposed fix

 Format:
 {
-  "action": "...",
-  "domain": "...",
-  "install_mode": "..."
-  "description": "...",
-  "ambiguous": true/false,
-  "confidence": 0.0
+  "action": "...",
+  "domain": "...",
+  "install_mode": "...",
+  "description": "...",
+  "ambiguous": true,
+  "confidence": 0.0
 }
306-313: Fallback dict still missing install_mode field.

The fallback return in _parse_intent_from_text is missing install_mode, which is validated as required in the success path (line 298). Callers expect this field to always be present.

🔎 Proposed fix

 # If parsing fails, do NOT guess meaning
 return {
     "action": "unknown",
     "domain": "unknown",
     "description": "Unstructured intent output",
     "ambiguous": True,
     "confidence": 0.0,
+    "install_mode": "system",
 }
569-573: Minimum score floor still causes false certainty.

Per prior feedback, the max(score, 0.2) floor masks truly ambiguous requests. For unclear input, the score should be allowed to fall to 0.0 so downstream logic can distinguish genuinely low-confidence cases.

🔎 Proposed fix

-    # Clamp to [0.0, 1.0]
-    # Ensure some minimal confidence for valid text
-    score = max(score, 0.2)
-
-    return round(min(1.0, score), 2)
+    # Clamp to [0.0, 1.0]
+    return round(max(0.0, min(1.0, score)), 2)
🧹 Nitpick comments (1)
cortex/cli.py (1)
657-660: Auto-approval in non-interactive mode lacks audit trail.

When _is_interactive() returns False, commands are auto-approved without any log or warning. Per learnings, sudo execution should have explicit confirmation. While this may be intentional for CI, consider adding at least a log message so auto-approval decisions are auditable:

🔎 Proposed improvement

 if not _is_interactive():
+    logger.info("Auto-approving in non-interactive mode (CI/pytest)")
     choice = "y"

Based on learnings: "No silent sudo execution - require explicit user confirmation" and "Implement audit logging to ~/.cortex/history.db for all package operations".
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- cortex/cli.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (9)
cortex/llm/interpreter.py (4)
575-596: FAKE provider support added - looks good.

The extract_intent method now correctly handles APIProvider.FAKE by returning a deterministic intent dict suitable for testing. This addresses the prior concern about FAKE provider raising ValueError.
541-568: Clarity estimation logic is reasonable for UI hints.

The heuristics (input length, presence of install-intent words, vague words, domain check) provide a useful signal. The rename to _estimate_clarity appropriately reflects that this is a rough linguistic signal rather than a true confidence measure.
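A toy version of such a signal, using only the cues named above; the thresholds and word lists are invented, and this is a UI hint rather than a real confidence measure:

```python
def estimate_clarity(user_input: str, domain: str) -> float:
    """Rough linguistic-signal score (illustrative only)."""
    text = user_input.lower()
    score = 0.0
    if len(text.split()) >= 3:
        score += 0.3  # longer requests tend to carry more signal
    if any(w in text for w in ("install", "set up", "setup")):
        score += 0.3  # explicit install-intent wording
    if any(w in text for w in ("something", "stuff", "thing")):
        score -= 0.2  # vague wording lowers the score
    if domain != "unknown":
        score += 0.4  # a recognized domain is the strongest cue
    # Clamp without an artificial floor, per the feedback above
    return round(max(0.0, min(1.0, score)), 2)
```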
414-427: Robust JSON extraction and command filtering.

The JSON isolation logic and filtering of empty/non-string commands improves resilience against malformed LLM output. Implementation looks correct.
261-283: Error handling added for OpenAI intent extraction.

The method now catches exceptions and returns a safe fallback dict including install_mode. This addresses the prior concern about unhandled JSONDecodeError.

cortex/cli.py (5)
35-36: LGTM - standard interactive detection.

Using sys.stdin.isatty() is the idiomatic approach for detecting interactive terminals.
569-572: Early API key validation is good practice.

Fail-fast approach ensures clear error messaging before attempting command generation.
588-599: Fake provider flow correctly uses dynamic software name.

The success message now uses {software} instead of hardcoded "docker". This addresses the prior feedback.
622-635: Install mode differentiation is well-designed.

Python mode explicitly instructs "Do NOT use sudo" which aligns with the principle of avoiding unnecessary privilege escalation. The mode-specific prompt construction improves command generation accuracy.
674-699: Edit command flow is well-implemented.

The interactive editing correctly (a sketch follows the list):
- Blocks editing in non-interactive mode (line 675-677)
- Collects commands until empty line (line 682-683)
- Validates non-empty command list (line 686-688)
- Requires explicit confirmation before execution (line 696-699)
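Read together, those four properties amount to a short loop. A hedged sketch; the function name and messages are assumptions, not the PR's actual code:

```python
def edit_commands(commands: list[str], interactive: bool) -> list[str]:
    """Illustrative confirm/edit loop matching the behavior listed above."""
    if not interactive:
        raise RuntimeError("Editing requires an interactive terminal")
    print("Enter commands one per line; an empty line finishes:")
    edited: list[str] = []
    while True:
        line = input("> ").strip()
        if not line:                   # empty line ends collection
            break
        edited.append(line)
    if not edited:
        raise ValueError("No commands entered")
    if input("Run these commands? [y/N] ").strip().lower() != "y":
        raise SystemExit("Cancelled")  # explicit confirmation required
    return edited
```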
Force-pushed from 5da41c2 to 54a355e
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
429-444: Remove unreachable dead code.

Lines 429-444 are unreachable because line 427 returns early when commands is a list. Since data.get("commands", []) at line 424 defaults to an empty list, the isinstance(commands, list) check at line 426 will always be True, making the code below it dead code.

🔎 Proposed fix

 if isinstance(commands, list):
     return [c for c in commands if isinstance(c, str) and c.strip()]

-# Handle both formats:
-# 1. ["cmd1", "cmd2"] - direct string array
-# 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
-result = []
-for cmd in commands:
-    if isinstance(cmd, str):
-        # Direct string
-        if cmd:
-            result.append(cmd)
-    elif isinstance(cmd, dict):
-        # Object with "command" key
-        cmd_str = cmd.get("command", "")
-        if cmd_str:
-            result.append(cmd_str)
-
-return result
 except (json.JSONDecodeError, ValueError) as e:

If you need to support the object format [{"command": "cmd1"}], incorporate that handling into the list comprehension at line 427 instead.
♻️ Duplicate comments (5)
cortex/llm/interpreter.py (3)
196-202: Fallback dict still missing install_mode field (unresolved).

Per a previous review comment, the fallback return at lines 196-202 is missing the required install_mode field. This field is validated in _parse_intent_from_text (line 298) and callers expect it to always be present.

🔎 Proposed fix

 except Exception:
     # True failure → unknown intent
     return {
         "action": "unknown",
         "domain": "unknown",
         "description": "Failed to extract intent",
         "ambiguous": True,
         "confidence": 0.0,
+        "install_mode": "system",
     }
233-241: JSON format example still has syntax issues (unresolved).

The JSON example is missing commas between key-value pairs, making it invalid JSON. This was flagged in a previous review but remains unfixed.

🔎 Proposed fix

 Format:
 {
-  "action": "...",
-  "domain": "...",
-  "install_mode": "..."
-  "description": "...",
-  "ambiguous": true/false,
-  "confidence": 0.0
+  "action": "...",
+  "domain": "...",
+  "install_mode": "...",
+  "description": "...",
+  "ambiguous": true,
+  "confidence": 0.0
 }
306-313: Fallback dict still missing install_mode field (unresolved).

The fallback return was marked as addressed in commit 7f5340e, but the install_mode field is still absent. Line 298 validates that this field must be present in the success path, so the fallback should include it for consistency.

🔎 Proposed fix

 # If parsing fails, do NOT guess meaning
 return {
     "action": "unknown",
     "domain": "unknown",
     "description": "Unstructured intent output",
     "ambiguous": True,
     "confidence": 0.0,
+    "install_mode": "system",
 }

cortex/cli.py (2)
45-54: stdin_data is never populated - stdin context feature remains non-functional (unresolved).

This issue was flagged in a previous review but remains unfixed. The stdin_data attribute is never initialized or assigned anywhere in the codebase. The getattr at line 49 will always return None, so the condition at line 50 never executes, effectively disabling stdin context augmentation.

To fix: initialize self.stdin_data = None in __init__ and populate it early in the install method by reading from sys.stdin when data is available (e.g., when not sys.stdin.isatty() and select.select([sys.stdin], [], [], 0)[0]). A minimal sketch follows.
658-660: Auto-approval in non-interactive mode still lacks logging (unresolved).

A previous review requested adding a warning message before auto-approving in non-interactive mode (line 660), marked as addressed in commit 6da228d. However, no warning is present in the current code. Per the retrieved learnings, silent sudo execution should be avoided, and auto-approval decisions should be auditable.

🔎 Proposed fix

 if execute:
     if not _is_interactive():
+        cx_print("⚠️ Auto-approving in non-interactive mode (CI/pytest)", "warning")
         choice = "y"

Based on learnings, ...
🧹 Nitpick comments (1)
cortex/llm/interpreter.py (1)
42-44: Remove redundant Ollama URL initialization.

The Ollama base URL is initialized here and again in _initialize_client at line 116. Since _initialize_client is called immediately after (line 70), this early assignment is redundant.

🔎 Proposed fix

-# ✅ Defensive Ollama base URL initialization
-if self.provider == APIProvider.OLLAMA:
-    self.ollama_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
-
 if cache is None:

Alternatively, if you need self.ollama_url before _initialize_client, move the assignment from line 116 here instead.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- cortex/cli.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (1)
cortex/llm/interpreter.py (1)
parse(471-529)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (5)
cortex/llm/interpreter.py (3)
147-160: LGTM! System prompt is clear and well-structured.

The updated prompt provides clear rules for command generation with appropriate constraints.
261-283: LGTM! Error handling is properly implemented.

The try-except wrapper correctly handles potential failures and returns a complete fallback dict with all required fields.
570-591: LGTM! FAKE provider support is properly implemented.

The method now handles all providers including FAKE, with appropriate input validation and fallback behavior.
cortex/cli.py (2)
35-36: LGTM! Standard interactive check implementation.

This helper correctly uses sys.stdin.isatty() to detect interactive terminals.
589-599: LGTM! Fake provider now correctly references the requested software.

The success message at line 597 properly uses the software variable instead of a hardcoded string.
Force-pushed from 54a355e to eb70535
Force-pushed from eb70535 to 3a3a0cd
Actionable comments posted: 2
Fix all issues with AI Agents 🤖
In @tests/test_nl_parser_cases.py:
- Line 62: Add a trailing newline at the end of tests/test_nl_parser_cases.py to
satisfy EOF newline requirement (W292); simply ensure the file ends with a
single '\n' after the final statement (the line containing "assert commands") so
the file terminates with a newline.
- Around line 1-3: The import block is unsorted and includes an unused standard
library import; remove the unused "os" import, then reorder the remaining
imports so third-party "pytest" comes before the application import "from
cortex.llm.interpreter import CommandInterpreter" (standard-library imports
should appear first, then third-party, then local/application imports).
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
426-444: Unreachable code after early return.

Lines 429-444 are unreachable. The code at lines 426-427 already returns if commands is a list, so the subsequent loop that processes the commands list can never execute.

🔎 Proposed fix - remove unreachable code

 if isinstance(commands, list):
     return [c for c in commands if isinstance(c, str) and c.strip()]

-# Handle both formats:
-# 1. ["cmd1", "cmd2"] - direct string array
-# 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
-result = []
-for cmd in commands:
-    if isinstance(cmd, str):
-        # Direct string
-        if cmd:
-            result.append(cmd)
-    elif isinstance(cmd, dict):
-        # Object with "command" key
-        cmd_str = cmd.get("command", "")
-        if cmd_str:
-            result.append(cmd_str)
-
-return result
+return []

Alternatively, if you intend to support both string arrays and object arrays, modify the check:

-if isinstance(commands, list):
-    return [c for c in commands if isinstance(c, str) and c.strip()]
-
-# Handle both formats...
+if isinstance(commands, list):
+    # Handle both formats:
+    # 1. ["cmd1", "cmd2"] - direct string array
+    # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
+    result = []
+    for cmd in commands:
+        if isinstance(cmd, str) and cmd.strip():
+            result.append(cmd)
+        elif isinstance(cmd, dict):
+            cmd_str = cmd.get("command", "")
+            if cmd_str:
+                result.append(cmd_str)
+    return result
♻️ Duplicate comments (4)
cortex/llm/interpreter.py (4)
162-202: Missing docstring and `install_mode` field in fallback return.

Per coding guidelines, docstrings are required for public APIs. Additionally, the fallback dict (lines 196-202) is still missing `install_mode`, which `_parse_intent_from_text` validates as a required field (line 298). This inconsistency could cause downstream issues.

🔎 Proposed fix

```diff
 def _extract_intent_ollama(self, user_input: str) -> dict:
+    """Extract intent from user input using Ollama API.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Dictionary containing extracted intent fields
+    """
     import urllib.error
     import urllib.request
```

```diff
     except Exception:
         # True failure → unknown intent
         return {
             "action": "unknown",
             "domain": "unknown",
             "description": "Failed to extract intent",
             "ambiguous": True,
             "confidence": 0.0,
+            "install_mode": "system",
         }
```
233-241: JSON format example still has syntax issues.

The JSON example is missing a comma after `"install_mode": "..."` (line 237). While the colon fix was applied, the example remains invalid JSON, which could affect LLM output quality.

🔎 Proposed fix

```diff
 Format:
 {
   "action": "...",
   "domain": "...",
-  "install_mode": "..."
+  "install_mode": "...",
   "description": "...",
-  "ambiguous": true/false,
+  "ambiguous": true,
   "confidence": 0.0
 }
```
568-568: Critical: misplaced parenthesis causes the method to always return 2.

Line 568 has incorrect parentheses: `round(max(0.0, min(1.0, score), 2))` calls `max()` with three arguments (0.0, the min result, and 2). Since the clamped score is at most 1.0, `max()` always returns 2, and `round(2)` then returns 2 regardless of input.

🔎 Proposed fix

```diff
-        return round(max(0.0, min(1.0, score), 2))
+        return round(max(0.0, min(1.0, score)), 2)
```
306-313: Fallback dict still missing `install_mode` field.

The fallback return at lines 307-313 is missing `install_mode`, though the success path (line 298) validates it as required. The past comment was marked as addressed, but the fix appears to be missing from this code.

🔎 Proposed fix

```diff
     # If parsing fails, do NOT guess meaning
     return {
         "action": "unknown",
         "domain": "unknown",
         "description": "Unstructured intent output",
         "ambiguous": True,
         "confidence": 0.0,
+        "install_mode": "system",
     }
```
🧹 Nitpick comments (6)
cortex/llm/interpreter.py (1)
580-589: Minor: contradictory `ambiguous=True` with `confidence=1.0` in FAKE provider.

The FAKE provider returns `ambiguous: True` alongside `confidence: 1.0`, which is semantically contradictory. For testing purposes, consider either setting `ambiguous: False` (since confidence is maximum) or lowering confidence to reflect ambiguity.

🔎 Proposed fix

```diff
         elif self.provider == APIProvider.FAKE:
             # Return a default intent for testing
             return {
                 "action": "install",
                 "domain": "unknown",
                 "install_mode": "system",
                 "description": user_input,
-                "ambiguous": True,
-                "confidence": 1.0,
+                "ambiguous": False,
+                "confidence": 1.0,
             }
```

tests/test_nl_parser_cases.py (1)
6-62: Tests don't validate NL parsing behavior.

All tests use the same hardcoded `CORTEX_FAKE_COMMANDS` and only assert that commands are returned (non-empty list). Since the fake provider always returns the same commands regardless of input, these tests don't validate the actual NL parsing logic (typo tolerance, ambiguity handling, domain detection, etc.).

Consider adding tests that:

- Use the real interpreter with mocked LLM responses to test parsing logic
- Verify `extract_intent()` returns expected intent fields for different inputs
- Test edge cases like malformed JSON responses

Example for testing intent extraction:

```python
def test_extract_intent_returns_expected_fields(fake_interpreter):
    intent = fake_interpreter.extract_intent("install docker")
    assert "action" in intent
    assert "domain" in intent
    assert "install_mode" in intent
    assert intent["action"] == "install"
```

cortex/cli.py (4)
54-70: Missing blank line between methods.

PEP 8 recommends two blank lines between top-level definitions and one blank line between method definitions. Lines 54-55 transition directly from `_build_prompt_with_stdin` to `_is_ambiguous_request` without the required blank line.

```diff
         return user_prompt
+
     def _is_ambiguous_request(self, user_input: str, intent: dict | None) -> bool:
```
687-688: Auto-approval in non-interactive mode lacks audit trail.

The non-interactive path silently sets `choice = "y"` without logging or warning. Per retrieved learnings, sudo execution should require explicit confirmation and audit logging. While past review suggested adding a warning message here, the current code doesn't include one.

Based on learnings, consider adding visibility:

```diff
         if not _is_interactive():
+            logger.info("Auto-approving commands in non-interactive mode (CI/pytest)")
+            cx_print("⚠️ Auto-approving in non-interactive mode", "warning")
             choice = "y"
```
708-712: Consider adding usage hint for edit mode.

The edit mode reads lines until an empty line, but this isn't communicated to the user. Consider adding a brief instruction.

```diff
         elif choice == "e":
             if not _is_interactive():
                 self._print_error("Cannot edit commands in non-interactive mode")
                 return 1
+            print("Enter commands (one per line, empty line to finish):")
             edited_commands = []
             while True:
                 line = input("> ").strip()
```
149-154: Add docstring for `notify` method.

Per coding guidelines, docstrings are required for public APIs. The `notify` method has a comment but not a proper docstring.

```diff
     def notify(self, args):
-        """Handle notification commands"""
+        """Handle notification commands.
+
+        Args:
+            args: Namespace containing notify_action and related arguments
+
+        Returns:
+            int: Exit code (0 for success, 1 for failure)
+        """
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
cortex/cli.py
cortex/llm/interpreter.py
tests/test_nl_parser_cases.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_nl_parser_cases.py
cortex/llm/interpreter.py
cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_nl_parser_cases.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🪛 GitHub Actions: CI
tests/test_nl_parser_cases.py
[error] 1-1: Ruff check failed: Import block is unsorted or un-formatted (I001).
🪛 GitHub Check: lint
tests/test_nl_parser_cases.py
[failure] 62-62: Ruff (W292)
tests/test_nl_parser_cases.py:62:20: W292 No newline at end of file
[failure] 1-3: Ruff (I001)
tests/test_nl_parser_cases.py:1:1: I001 Import block is un-sorted or un-formatted
🪛 GitHub Check: Lint
tests/test_nl_parser_cases.py
[failure] 62-62: Ruff (W292)
tests/test_nl_parser_cases.py:62:20: W292 No newline at end of file
[failure] 1-3: Ruff (I001)
tests/test_nl_parser_cases.py:1:1: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (3)
cortex/cli.py (3)
45-54: `stdin_data` initialization verified. The `_build_prompt_with_stdin` method correctly uses `getattr(self, "stdin_data", None)` with a default, which safely handles the case when `stdin_data` is not set. Past review noted this was addressed.
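A minimal sketch of that defensive pattern, with assumed attribute and prompt wording (not the PR's exact code); reading stdin only when it is not a TTY also avoids blocking interactive sessions:

```python
import sys


class CortexCLI:
    def __init__(self) -> None:
        # Read piped input once, and only when stdin is not a terminal,
        # so interactive sessions never block waiting for EOF.
        self.stdin_data = None if sys.stdin.isatty() else sys.stdin.read().strip()

    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
        # getattr with a default stays safe even if __init__ never set
        # the attribute (e.g. a test constructs the object differently).
        stdin_data = getattr(self, "stdin_data", None)
        if stdin_data:
            return f"{user_prompt}\n\nAdditional context from stdin:\n{stdin_data}"
        return user_prompt
```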
614-624: LGTM! Fake provider path correctly implemented. The fake provider flow generates and displays commands appropriately, and the success message now uses the `software` variable instead of hardcoded text.
638-663: Good intent-aware prompt construction. The flow appropriately extracts intent to determine `install_mode`, then tailors the prompt accordingly. Python mode explicitly avoids sudo/system package managers, which aligns with best practices for Python environment isolation.
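A rough sketch of such mode-dependent tailoring; the prompt wording and mode names here are assumptions, not the PR's exact strings:

```python
def build_install_prompt(software: str, install_mode: str) -> str:
    # A Python-library request should never produce sudo/apt commands;
    # anything else falls back to system packaging.
    if install_mode == "python":
        return (
            f"Generate commands to install the Python package '{software}' "
            "using pip in a virtual environment. Do not use sudo or system "
            "package managers."
        )
    return f"Generate commands to install '{software}' using the system package manager."
```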
bac538e to ee7567b (Compare)
ee7567b to 6bc5f87 (Compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
446-464: Unreachable code: object array handling is dead.

Lines 449-463 are unreachable. After the check at line 446, if `commands` is a list (which it always will be, since `data.get("commands", [])` returns a list), the function returns at line 447. The subsequent code handling the `{"command": "..."}` object format will never execute.

🔧 Proposed fix to handle both formats

```diff
-        if isinstance(commands, list):
-            return [c for c in commands if isinstance(c, str) and c.strip()]
-
-        # Handle both formats:
-        # 1. ["cmd1", "cmd2"] - direct string array
-        # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
+        if not isinstance(commands, list):
+            raise ValueError("Commands must be a list")
+
+        # Handle both formats:
+        # 1. ["cmd1", "cmd2"] - direct string array
+        # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
         result = []
         for cmd in commands:
             if isinstance(cmd, str):
-                # Direct string
-                if cmd:
+                if cmd.strip():
                     result.append(cmd)
             elif isinstance(cmd, dict):
-                # Object with "command" key
                 cmd_str = cmd.get("command", "")
-                if cmd_str:
+                if cmd_str and cmd_str.strip():
                     result.append(cmd_str)

         return result
```
🤖 Fix all issues with AI agents
In @cortex/cli.py:
- Around line 927-928: Wrap the input() call that assigns clarified (input("What
would you like to install? ").strip()) in a try/except that catches EOFError and
KeyboardInterrupt, and handle them the same way docker_permissions does (e.g.,
treat as empty input or bail out gracefully without raising), then proceed only
if clarified is non-empty; apply the same try/except pattern to the other
input() at the later block (lines 955-956) to avoid unhandled
EOF/KeyboardInterrupts.
- Around line 162-271: The helper methods (_generate_understanding_message,
_generate_clarifying_questions, _generate_clarification_request,
_generate_suggestions) always hit their fallback because _call_llm_for_text has
a provider comparison bug; fix _call_llm_for_text so it correctly detects
supported providers (e.g., compare same types/values—string vs enum—or normalize
to lower-case strings) and returns the actual LLM content when available instead
of immediately returning fallback; ensure _call_llm_for_text returns None or
empty only on real failures and that callers above continue to pass their
prompt/system_message/temperature/max_tokens/fallback arguments unchanged.
In @cortex/llm/interpreter.py:
- Around line 584-613: Add a clear docstring to the public method extract_intent
describing its purpose (extracts intent metadata from a user_input string),
parameters (user_input: str), return value (dict with keys like action, domain,
install_mode, description, ambiguous, confidence), and possible exceptions
(raises ValueError on empty input or unsupported provider); also mention
provider-specific behavior (OPENAI, CLAUDE, OLLAMA, FAKE) and that FAKE can use
CORTEX_FAKE_INTENT env var to override the returned dict. Ensure the docstring
is placed immediately under def extract_intent(...) and follows the project's
docstring style (short summary, args, returns, raises) so downstream callers and
automated docs can rely on it.
- Around line 302-330: The fallback returned by _parse_intent_from_text is
missing the install_mode key which can cause downstream KeyError; update the
fallback dict returned at the end of _parse_intent_from_text to include
"install_mode": "unknown" (and ensure the fallback contains the same keys
validated earlier: "action", "domain", "install_mode", "ambiguous",
"confidence", plus "description") so its structure matches the fallbacks used by
_extract_intent_openai and _extract_intent_claude.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (1)
187-227: Missing docstring for `_extract_intent_ollama`. As per coding guidelines, all public and private methods should have docstrings explaining their purpose. While this is an internal method, adding a brief docstring would improve maintainability.

📝 Suggested docstring

```diff
 def _extract_intent_ollama(self, user_input: str) -> dict:
+    """Extract user intent using local Ollama instance.
+
+    Args:
+        user_input: Natural language request from user
+
+    Returns:
+        Intent dictionary with action, domain, confidence, etc.
+    """
     import urllib.error
     import urllib.request
```

cortex/cli.py (2)
88-93: Comment-code mismatch in confidence level logic.

The comment at lines 91-92 says "Low: domain is known but confidence is very low (< 0.5)" but line 93 returns `"medium"`, not `"low"`. This is confusing. Based on retrieved learnings, this appears intentional (domain identification is sufficient for medium confidence), but the comment should be updated to match the actual behavior.

📝 Proposed fix

```diff
-        # Low: domain is known but confidence is very low (< 0.5)
-        # Ask for clarifying questions rather than complete re-spec
+        # Medium: domain is known but specifics are vague (confidence < 0.5)
+        # Still show understanding and ask clarifying questions
         return "medium"
```
971-971: Tautological condition: `if _is_interactive() or not _is_interactive()`.

This condition is always `True` (A or not A). Simplify by removing the condition entirely.

🔧 Proposed fix

```diff
         # High confidence: proceed directly
         # Generate natural understanding message
-        if _is_interactive() or not _is_interactive():  # Always show understanding
-            understanding_msg = self._generate_understanding_message(
-                interpreter, intent, software
-            )
-            print(f"\n{understanding_msg}\n")
+        understanding_msg = self._generate_understanding_message(
+            interpreter, intent, software
+        )
+        print(f"\n{understanding_msg}\n")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
cortex/cli.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
cortex/cli.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
cortex/cli.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Do not use silent sudo - require explicit user confirmation for privilege escalation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/llm/interpreter.py (1)
CommandInterpreter (20-613)
cortex/branding.py (1)
cx_print (49-69)
🔇 Additional comments (9)
cortex/llm/interpreter.py (4)
44-46: LGTM! Defensive Ollama URL initialization. Good addition to ensure `self.ollama_url` is always set when the Ollama provider is used, preventing a potential `AttributeError` in `_extract_intent_ollama`.
112-185: LGTM! Well-structured domain-aware system prompt. The prompt construction properly handles domain context, provides clear JSON formatting guidance, and includes sensible rules for pip vs apt package management. The separation between simplified (for local models) and full prompts is appropriate.
229-259: LGTM! Clear intent extraction prompt. The confidence scoring guidelines (HIGH 0.8-1.0, MEDIUM 0.5-0.8, LOW 0.0-0.5) align well with the CLI's `_get_confidence_level` thresholds. The prompt clearly specifies JSON-only output format.
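For illustration, a sketch of how those bands could map onto CLI behavior; the thresholds come from the guidelines above, while the function body is an assumption (the actual CLI also treats a known domain as medium confidence):

```python
def _get_confidence_level(confidence: float) -> str:
    # Bands mirror the prompt's scoring guidelines:
    # HIGH 0.8-1.0, MEDIUM 0.5-0.8, LOW 0.0-0.5.
    if confidence >= 0.8:
        return "high"    # proceed directly
    if confidence >= 0.5:
        return "medium"  # show understanding, ask clarifying questions
    return "low"         # ask the user to restate the request
```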
491-550: LGTM! Domain propagation is consistent. The `parse` method properly:

- Documents the new `domain` parameter in the docstring
- Includes domain context in the cache key via the system prompt (see the sketch below)
- Propagates domain to all provider-specific call methods
41-42: LGTM! Simple interactive detection.Clean helper function that checks if stdin is a TTY. Used appropriately throughout the install flow.
50-55: Potential blocking on stdin read during initialization.Reading stdin eagerly in
__init__could block indefinitely if the CLI is instantiated with a pipe that hasn't been closed by the writer. Consider adding a note or timeout, or deferring the read until actually needed.This is likely acceptable for CLI usage patterns where stdin is typically either fully available or not used. Verify that this doesn't cause issues in test harnesses or unusual invocation scenarios.
1024-1072: User confirmation flow is well-structured but lacks input validation. The confirmation flow with Yes/Edit/No options is user-friendly. However:
- Auto-approval in non-interactive mode (line 1029) could be risky - consider requiring explicit
--yesflag instead- Edited commands at line 1058 bypass the dangerous pattern validation that
_validate_commandsperformsConsider whether edited commands should also go through
_validate_commandsbefore execution for safety:commands = edited_commands + # Validate edited commands for dangerous patterns + commands = interpreter._validate_commands(commands)
2875-2880: LGTM! Main entry point correctly uses defaults. The
`main()` function calls `cli.install()` without the new internal parameters (`api_key`, `provider`, `skip_clarification`), allowing the method to use auto-detection. The new parameters are correctly designed for internal recursive calls during clarification flows.
998-999: Missing domain propagation to `parse()` call. The
`interpreter.parse()` call doesn't pass the `domain` parameter, even though domain information is available from the extracted intent (used at line 1018). This loses the domain context that was added to the interpreter.
🔧 Proposed fix
```diff
+            domain = intent.get("domain")
+
             # Parse commands from prompt
-            commands = interpreter.parse(prompt)
+            commands = interpreter.parse(prompt, domain=domain)
```

⛔ Skipped due to learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
```python
                clarified = input("What would you like to install? ").strip()
                if clarified:
```
Missing EOFError/KeyboardInterrupt handling on input() calls.
The input() calls at lines 927 and 955 lack exception handling for EOFError and KeyboardInterrupt. While _is_interactive() guards these paths, edge cases (e.g., closed pipe, Ctrl+D) could still trigger exceptions. Compare with the proper handling at lines 322-326 in docker_permissions.
🔧 Proposed fix for line 927
print(f"\n{clarification_msg}\n")
- clarified = input("What would you like to install? ").strip()
+ try:
+ clarified = input("What would you like to install? ").strip()
+ except (EOFError, KeyboardInterrupt):
+ console.print()
+ cx_print("Operation cancelled", "info")
+ return 0Also applies to: 955-956
🤖 Prompt for AI Agents
In @cortex/cli.py around lines 927 - 928, Wrap the input() call that assigns
clarified (input("What would you like to install? ").strip()) in a try/except
that catches EOFError and KeyboardInterrupt, and handle them the same way
docker_permissions does (e.g., treat as empty input or bail out gracefully
without raising), then proceed only if clarified is non-empty; apply the same
try/except pattern to the other input() at the later block (lines 955-956) to
avoid unhandled EOF/KeyboardInterrupts.
@pavanimanchala53 Address this one.
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
444-466: Unreachable code: object array format handling is never executed.

The early `return` on line 449 makes lines 451-466 unreachable. If the LLM returns commands in object format (`[{"command": "cmd1"}]`), this code would return an empty list instead of extracting the commands.

🐛 Proposed fix

```diff
-        # First attempt: strict JSON
-        data = json.loads(json_blob)
-        commands = data.get("commands", [])
-
-        if isinstance(commands, list):
-            return [c for c in commands if isinstance(c, str) and c.strip()]
-
         # Handle both formats:
         # 1. ["cmd1", "cmd2"] - direct string array
         # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
+        data = json.loads(json_blob)
+        commands = data.get("commands", [])
+        if not isinstance(commands, list):
+            raise ValueError("Commands must be a list")
+
         result = []
         for cmd in commands:
             if isinstance(cmd, str):
-                # Direct string
-                if cmd:
+                if cmd.strip():
                     result.append(cmd)
             elif isinstance(cmd, dict):
-                # Object with "command" key
                 cmd_str = cmd.get("command", "")
-                if cmd_str:
+                if cmd_str and cmd_str.strip():
                     result.append(cmd_str)

         return result
```
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 586-615: Add a docstring to the public method extract_intent
describing its purpose, parameters, return value, and raised exceptions;
specifically document that extract_intent(self, user_input: str) parses user
input into an intent dict (fields like action, domain, install_mode,
description, ambiguous, confidence), list supported providers (OPENAI, CLAUDE,
OLLAMA, FAKE) and behavior for the FAKE provider (reads CORTEX_FAKE_INTENT or
returns a default intent), and note that it raises ValueError for empty
user_input or unsupported provider; keep it concise and follow project docstring
style (short summary, Args, Returns, Raises).
- Around line 280-302: The fallback dict returned on errors must be consistent
across intent extraction/parsing functions; add the missing "install_mode" key
(use the same default value "system") and ensure all other keys match the
structure returned by _extract_intent_openai (action, domain, description,
ambiguous, confidence, install_mode). Update _extract_intent_ollama and the
fallback path in _parse_intent_from_text so their exception/edge-case returns
include install_mode="system" and the same keys and types as
_extract_intent_openai to avoid KeyError downstream.
🧹 Nitpick comments (2)
cortex/llm/interpreter.py (2)
44-46: Consider consolidating Ollama URL initialization.The
ollama_urlattribute is set here only whenprovider == OLLAMA, but_initialize_client(line 101) also readsOLLAMA_BASE_URLindependently. This creates two separate initialization paths for the same configuration. If the environment variable changes between these calls or if the logic diverges, they could become inconsistent.Consider initializing
ollama_urlunconditionally and reusing it in_initialize_client:♻️ Suggested consolidation
- # ✅ Defensive Ollama base URL initialization - if self.provider == APIProvider.OLLAMA: - self.ollama_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434") + # Initialize Ollama URL for any provider that might need it + self.ollama_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")Then in
_initialize_client:- ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434") + ollama_base_url = self.ollama_url self.client = OpenAI( api_key="ollama", base_url=f"{ollama_base_url}/v1", )
215-229: Add logging for failed intent extraction.Per coding guidelines, operations should be logged to
~/.cortex/history.dbfor audit purposes. The exception handler silently swallows errors, making debugging difficult. Consider logging the failure before returning the fallback.♻️ Suggested improvement
except Exception: - # True failure → unknown intent + # True failure → unknown intent; consider logging for audit + import logging + logging.warning("Ollama intent extraction failed", exc_info=True) return { "action": "unknown", "domain": "unknown",Also, the bare
except Exceptionis broad. Consider catching more specific exceptions (urllib.error.URLError,json.JSONDecodeError,TimeoutError) to avoid masking unexpected errors.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (6)
cortex/llm/interpreter.py (6)
231-261: LGTM! The intent extraction prompt is well-structured with clear confidence scoring guidelines and a well-defined JSON response format. The separation of domain identification from package specificity is a good design choice for handling ambiguous requests.
304-332: LGTM! The JSON extraction logic with fallback handling is appropriate. The validation of required structural keys (action, domain, install_mode, ambiguous, confidence) ensures consistent downstream processing. The fallback to a safe unknown state on parse failure is the right approach.
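A sketch of that extract-then-fall-back pattern (the regex and field list are assumptions; the module's real parser may differ):

```python
import json
import re


def parse_intent_text(content: str) -> dict:
    fallback = {
        "action": "unknown",
        "domain": "unknown",
        "install_mode": "system",
        "description": "Unstructured intent output",
        "ambiguous": True,
        "confidence": 0.0,
    }
    # Grab the first {...} blob in case the model wrapped JSON in prose.
    match = re.search(r"\{.*\}", content, re.DOTALL)
    if not match:
        return fallback
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return fallback
    # Structure-only validation: never guess missing fields.
    required = ("action", "domain", "install_mode", "ambiguous", "confidence")
    if any(key not in parsed for key in required):
        return fallback
    return parsed
```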
493-552: LGTM! The `parse` method is well-documented with comprehensive Args/Returns/Raises sections. The domain parameter propagation is consistent across all provider calls, and the cache integration properly isolates entries by domain context. Silent cache write failures are appropriate for resilience.
564-584: LGTM! The Claude intent extraction implementation mirrors the OpenAI variant with consistent error handling and proper API usage.
263-278: LGTM! The domain parameter propagation across `_call_openai`, `_call_claude`, and `_call_ollama` is consistent. The enhanced error message in `_call_ollama` (including URL and model info) improves debuggability.
Also applies to: 334-347, 349-380
142-148: Sudo confirmation properly enforced in execution layer. The system prompt instructs the LLM to generate sudo commands for system-level operations. This is acceptable since it's command generation, not execution. The execution layer correctly implements the safety requirement through the points below (a sketch follows this list):

- Dry-run by default—commands show as "Would run" without executing
- Explicit `--execute` flag required to enable actual execution
- Interactive confirmation prompt (lines 316-319) that requires user acknowledgment before running sudo commands, unless the `--yes` flag bypasses confirmation for automation

The code follows the documented safety standard "No silent sudo execution" and complies with the coding guideline requiring explicit user confirmation for privilege escalation.
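A compressed sketch of that gate; the flag names come from the review, while the function shape is an assumption:

```python
import subprocess


def run_commands(commands: list[str], execute: bool = False, yes: bool = False) -> int:
    if not execute:
        # Dry-run is the default: show what would happen, touch nothing.
        for cmd in commands:
            print(f"Would run: {cmd}")
        return 0
    if any(cmd.startswith("sudo ") for cmd in commands) and not yes:
        # No silent sudo: privilege escalation needs explicit acknowledgment.
        answer = input("These commands use sudo. Continue? [y/N] ").strip().lower()
        if answer != "y":
            print("Cancelled")
            return 1
    for cmd in commands:
        subprocess.run(cmd, shell=True, check=True)
    return 0
```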
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
449-467: Critical: Early return makes dict-format handling unreachable.The early return at lines 449-450 filters
commandsto only string elements. If the LLM returns the object array format[{"command": "apt update"}], this returns an empty list and the handling code at lines 455-467 is never reached.Proposed fix: Remove the early return and let the existing loop handle both formats
- if isinstance(commands, list): - return [c for c in commands if isinstance(c, str) and c.strip()] - # Handle both formats: # 1. ["cmd1", "cmd2"] - direct string array # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array + if not isinstance(commands, list): + raise ValueError("Commands must be a list") + result = [] for cmd in commands: if isinstance(cmd, str): # Direct string - if cmd: + if cmd and cmd.strip(): result.append(cmd) elif isinstance(cmd, dict): # Object with "command" key cmd_str = cmd.get("command", "") - if cmd_str: + if cmd_str and cmd_str.strip(): result.append(cmd_str) return result
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 326-333: The fallback return in the intent parsing failure (inside
_extract_intent) is missing the install_mode key; update the returned dict to
include "install_mode": "unknown" (matching other fallback shapes like
_extract_intent_ollama) so callers receive a consistent schema with keys action,
domain, description, ambiguous, confidence and install_mode.
- Around line 222-230: The fallback return inside the generic exception handler
currently omits the install_mode key, causing downstream KeyError; update the
except Exception fallback return (the dict returned in that except block) to
include "install_mode" with the same default placeholder used by
_extract_intent_openai and _extract_intent_claude (e.g., "unknown" or None) so
the structure matches those functions.
🧹 Nitpick comments (1)
cortex/llm/interpreter.py (1)
44-46: Consider consolidating Ollama URL handling.The
self.ollama_urlinstance variable is set here but_initialize_client(line 101) and_call_ollamaerror handling (line 376) define localollama_base_urlvariables. While functionally correct (all read from the same env var with same default), consolidating to useself.ollama_urlthroughout would improve consistency.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (8)
cortex/llm/interpreter.py (8)
112-187: LGTM! The domain-aware prompt construction is well-designed. The shared `common_rules` block reduces duplication, and the domain context integration cleanly constrains command generation to relevant packages.
232-262: LGTM! The intent extraction prompt is well-structured with clear confidence scoring criteria. The guidance distinguishing domain identification from package specificity is a good design choice for handling ambiguous requests.
264-279: LGTM! Domain parameter cleanly propagated to system prompt generation.
281-303: LGTM! Appropriate configuration for intent extraction with lower temperature for more deterministic outputs.
335-348: LGTM! Domain parameter cleanly integrated.
350-381: LGTM! Domain parameter correctly integrated with simplified prompt for local model efficiency.
494-553: LGTM! Domain parameter properly threaded through the parse flow with updated docstring. Cache key implicitly incorporates domain via the system prompt, ensuring domain-specific caching.
565-585: LGTM! Consistent implementation with proper fallback structure.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
448-469: Unreachable code path for dict-format commands.The early return at lines 451-452 prevents the dict-format handling logic at lines 454-468 from ever executing. If
commandsis a list (which it always will be from valid JSON), the function returns immediately, making the "Handle both formats" section dead code.If dict-format support is intended (e.g.,
[{"command": "cmd1"}]), the logic needs restructuring:🐛 Proposed fix
data = json.loads(json_blob) commands = data.get("commands", []) - if isinstance(commands, list): - return [c for c in commands if isinstance(c, str) and c.strip()] - # Handle both formats: # 1. ["cmd1", "cmd2"] - direct string array # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array + if not isinstance(commands, list): + raise ValueError("commands must be a list") + result = [] for cmd in commands: if isinstance(cmd, str): - # Direct string - if cmd: - result.append(cmd) + if cmd.strip(): + result.append(cmd.strip()) elif isinstance(cmd, dict): - # Object with "command" key cmd_str = cmd.get("command", "") - if cmd_str: - result.append(cmd_str) + if cmd_str and cmd_str.strip(): + result.append(cmd_str.strip()) return result
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 328-335: The fallback return in the parse failure branch of the
interpreter (the dict with action/domain/description/ambiguous/confidence) is
missing the install_mode key required by the validation at
validate_parsed_intent (and present in other fallbacks); update that returned
dict (in the parse failure branch inside the interpreter function) to include
"install_mode" with the same default used elsewhere (e.g., None or "unknown") so
downstream code does not raise a KeyError.
- Around line 589-618: Add a docstring to the public method extract_intent
describing the parameter (user_input: str), the return value (dict with keys
like "action", "domain", "install_mode", "description", "ambiguous",
"confidence" representing the parsed intent), and exceptions (raises ValueError
for empty input and for unsupported providers). Also document behavior: it
delegates to provider-specific helpers (_extract_intent_openai,
_extract_intent_claude, _extract_intent_ollama), handles APIProvider.FAKE by
optionally loading JSON from the CORTEX_FAKE_INTENT env var and falling back to
a realistic fake intent, and include any relevant side effects or expectations
(e.g., JSON decoding may be attempted for the env var).
- Around line 224-232: The Ollama intent extraction fallback return value is
missing the install_mode key; update the except block in the Ollama
intent-extraction function (the except that currently returns
{"action":"unknown","domain":"unknown","description":"Failed to extract
intent","ambiguous":True,"confidence":0.0}) to include "install_mode": "unknown"
so it matches other provider fallbacks (e.g., the _parse_intent_from_text
fallback) and keeps the response shape consistent.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
44-46: Inconsistent Ollama base URL handling.
self.ollama_urlis set at line 46 but_initialize_client(line 101) and_call_ollama(line 378) readOLLAMA_BASE_URLagain instead of reusing the stored attribute. This creates inconsistency—if the env var were to change, different parts of the code would use different values.Consider consolidating to always use
self.ollama_url:♻️ Suggested consolidation
ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434") self.client = OpenAI( api_key="ollama", - base_url=f"{ollama_base_url}/v1", # Dummy key, not used + base_url=f"{self.ollama_url}/v1", # Dummy key, not used )And similarly at line 378:
- ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434") + ollama_base_url = self.ollama_urlAlso applies to: 101-104
218-232: Overly broad exception handling loses diagnostic information.The bare
except Exceptionat line 224 catches all exceptions, includingKeyboardInterruptand programming errors likeAttributeError. The actual error is silently discarded, making debugging difficult.Consider catching more specific exceptions and logging the error:
♻️ Suggested improvement
- except Exception: - # True failure → unknown intent + except (urllib.error.URLError, json.JSONDecodeError, KeyError) as e: + # Log for debugging, return unknown intent + import sys + print(f"Debug: Intent extraction failed: {e}", file=sys.stderr) return { "action": "unknown", "domain": "unknown", - "description": "Failed to extract intent", + "description": f"Failed to extract intent: {str(e)}", "ambiguous": True, "confidence": 0.0, }
557-565:parse_with_contextdoesn't support the newdomainparameter.The
parsemethod now accepts an optionaldomainparameter, butparse_with_contextdoesn't expose this capability. If domain-aware parsing is needed with system context, callers would need to useparsedirectly.Consider whether this method should also support domain:
♻️ Suggested enhancement
def parse_with_context( - self, user_input: str, system_info: dict[str, Any] | None = None, validate: bool = True + self, user_input: str, system_info: dict[str, Any] | None = None, validate: bool = True, domain: str | None = None ) -> list[str]: context = "" if system_info: context = f"\n\nSystem context: {json.dumps(system_info)}" enriched_input = user_input + context - return self.parse(enriched_input, validate=validate) + return self.parse(enriched_input, validate=validate, domain=domain)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (3)
cortex/llm/interpreter.py (3)
112-187: LGTM on the domain-aware system prompt structure.The expanded
_get_system_promptmethod cleanly integrates domain context and provides comprehensive guidance for command generation. The Python package handling rules (lines 143-148) are particularly valuable for ensuring correct package manager selection.
142-143: The original concern is already addressed in the codebase. The cli.py implementation includes explicit user confirmation prompts before executing any sudo commands, even though the interpreter.py system prompt may instruct the LLM to add sudo for system-level operations.Specifically, cli.py enforces user acknowledgment at multiple checkpoints:
- Ownership reclamation operations prompt: "Reclaim ownership using sudo? (y/n)" (line 316-317)
- Command execution requires user confirmation: "Execute edited commands? [y/n]" (line 1064)
- The code explicitly implements the safety standard: "No silent sudo execution" (line 306)
The architecture correctly separates concerns: the interpreter generates candidate commands, and the CLI layer ensures user approval before any privileged operations run. This satisfies the coding guideline requirement for explicit privilege escalation confirmation.
Likely an incorrect or invalid review comment.
496-555: Clarify audit logging responsibility for the `parse()` method. The `parse()` method generates commands but doesn't log to ~/.cortex/history.db. Audit logging currently happens in the CLI layer via `history.record_installation()`, but only when `execute` or `dry_run` is `True`. Per the coding guidelines ("Log all operations to ~/.cortex/history.db for audit purposes"), clarify whether pure parsing operations should have their own audit trail, or if logging only executed operations is the intended design. Since `parse()` is called in multiple contexts (demo code in `packages.py`, internal calls), a centralized approach at the method level may be needed to ensure all parsing operations are auditable.
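If method-level auditing were adopted, it could look roughly like this; the table name and columns are hypothetical, and only the ~/.cortex/history.db path comes from the guidelines:

```python
import sqlite3
import time
from pathlib import Path

DB_PATH = Path.home() / ".cortex" / "history.db"


def record_parse(user_input: str, commands: list[str]) -> None:
    # Append-only audit row for every parse, executed or not.
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS parse_audit "
            "(ts REAL, user_input TEXT, commands TEXT)"
        )
        conn.execute(
            "INSERT INTO parse_audit VALUES (?, ?, ?)",
            (time.time(), user_input, "\n".join(commands)),
        )
```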
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cortex/llm/interpreter.py (3)
294-296: Missing defensive null check for API response content.

`response.choices[0].message.content` can be `None` in edge cases, which would cause `.strip()` to raise an `AttributeError`. The pattern in `cortex/ask.py` (lines 232-235) uses defensive handling:

```python
try:
    content = response.choices[0].message.content or ""
except (IndexError, AttributeError):
    content = ""
```

Consider applying the same pattern here for consistency and robustness.
365-367: Same defensive handling gap as in `_call_openai`. Line 366 accesses `response.content[0].text` without defensive handling. Apply the same pattern used in `cortex/ask.py` (lines 249-252) for consistency.
474-492: Logic error: dict-format command handling is unreachable.

The early return on lines 474-475 filters `commands` to only include strings. If the LLM returns object-format commands like `[{"command": "cmd1"}]`, this block returns an empty list `[]` instead of falling through to the dict handling logic on lines 480-490, which becomes dead code.

🐛 Proposed fix

```diff
     commands = data.get("commands", [])

-    if isinstance(commands, list):
-        return [c for c in commands if isinstance(c, str) and c.strip()]
-
-    # Handle both formats:
-    # 1. ["cmd1", "cmd2"] - direct string array
-    # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
+    if not isinstance(commands, list):
+        raise ValueError("commands must be a list")
+
+    # Handle both formats:
+    # 1. ["cmd1", "cmd2"] - direct string array
+    # 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
     result = []
     for cmd in commands:
         if isinstance(cmd, str):
-            # Direct string
-            if cmd:
+            if cmd.strip():
                 result.append(cmd)
         elif isinstance(cmd, dict):
-            # Object with "command" key
             cmd_str = cmd.get("command", "")
             if cmd_str:
                 result.append(cmd_str)

     return result
```
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 624-653: The public method extract_intent lacks a docstring; add a
clear docstring above the extract_intent definition that briefly describes its
purpose (extracting intent from user_input), lists parameters (user_input: str),
describes the return type/dictionary structure (keys like action, domain,
install_mode, description, ambiguous, confidence or provider-specific behavior),
and documents possible exceptions (ValueError for empty input or unsupported
provider, JSONDecodeError handling for CORTEX_FAKE_INTENT if applicable).
Mention provider-specific branches (OPENAI, CLAUDE, OLLAMA, FAKE) and any
environment behavior (CORTEX_FAKE_INTENT), and keep it concise following project
docstring style.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
44-46: Inconsistent usage of `self.ollama_url`. The Ollama URL is stored in
self.ollama_urlhere, but_initialize_client(line 101) and_call_ollama(line 401) read from the environment variable again instead of using this stored value. Consider usingself.ollama_urlconsistently throughout the class.♻️ Suggested fix in _initialize_client
- ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434") self.client = OpenAI( api_key="ollama", - base_url=f"{ollama_base_url}/v1", # Dummy key, not used + base_url=f"{self.ollama_url}/v1", # Dummy key, not used )
346-352: Fallback dict missing `install_mode` key. The fallback return is missing the
install_modefield, which other intent extraction fallbacks (e.g., lines 315-322) include. Whileparse()uses.get("install_mode", "system")as a fallback, adding it here ensures consistent structure.♻️ Suggested fix
return { "action": "unknown", "domain": "unknown", "description": "Unstructured intent output", "ambiguous": True, "confidence": 0.0, + "install_mode": "system", }
228-242: Consider logging the exception before returning unknown intent. The broad
except Exceptioncatch silently swallows errors, making debugging difficult. Per coding guidelines ("Log all operations to ~/.cortex/history.db for audit purposes"), consider logging the error before returning the fallback.♻️ Suggested improvement
+ except urllib.error.URLError as e: + # Network/connection error - log for debugging + import logging + logging.debug(f"Ollama intent extraction failed: {e}") + return { + "action": "unknown", + ... + } except Exception: - # True failure → unknown intent + # Unexpected failure - log for debugging + import logging + logging.debug(f"Ollama intent extraction failed unexpectedly") return {
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/ask.py (3)
_get_system_prompt(209-221)_call_openai(223-238)_call_claude(240-253)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
cortex/llm/interpreter.py (3)
112-197: Well-structured prompt construction with domain-aware context. The updated `_get_system_prompt` method cleanly handles domain context and install mode constraints. The `common_rules` block effectively enforces pip-over-apt for Python packages, which aligns with the PR objectives for deterministic NL→command generation.
244-274: Clear intent extraction prompt with well-defined confidence scoring. The prompt provides clear guidelines for confidence scoring based on domain identifiability rather than package specificity, which should improve handling of ambiguous requests per the PR objectives.
537-570: Solid intent-aware command parsing with proper fallback. The try-except wrapper around intent extraction ensures graceful degradation to `install_mode="system"` if extraction fails. Domain and install_mode are correctly propagated to all provider calls, maintaining consistency across OpenAI, Claude, and Ollama paths.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
399-400: Add defensive `None` check for content.

Same issue as `_extract_intent_openai`: `content` could be `None`, causing `AttributeError` on `.strip()`.

🐛 Proposed fix

```diff
-        content = response.choices[0].message.content.strip()
+        content = (response.choices[0].message.content or "").strip()
```
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 313-314: The code risks AttributeError by calling .strip() on
response.choices[0].message.content if that content is None; update the
assignment before calling _parse_intent_from_text to use a safe fallback like
(response.choices[0].message.content or "").strip() so content is always a
string, then pass that string into self._parse_intent_from_text; locate this
change where content = response.choices[0].message.content.strip() occurs (and
adjust any equivalent occurrences in the same method).
- Around line 614-615: The code assumes response.content[0].text is a str and
calls .strip(), but it can be None; update the caller (the block that sets
content and calls self._parse_intent_from_text) to defensively handle None by
checking response.content[0].text before stripping—e.g., assign raw_text =
response.content[0].text or "" (or return/raise if None is invalid), then do
content = raw_text.strip() and pass that to self._parse_intent_from_text; update
any relevant variable names in this method to use raw_text/content accordingly.
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
44-46: Consolidate Ollama base URL handling to avoid inconsistency.
`self.ollama_url` is set here but `_call_ollama` (line 403) reads from `os.environ.get()` again, and `_initialize_client` (line 101) also reads it independently. This creates three separate reads that could diverge if the environment changes. Consider using `self.ollama_url` consistently throughout the class.
♻️ Suggested consolidation
```diff
 def _call_ollama(
     self, user_input: str, domain: str | None = None, install_mode: str | None = None
 ) -> list[str]:
     ...
     except Exception as e:
         # Provide helpful error message
-        ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+        ollama_base_url = self.ollama_url
         raise RuntimeError(
```
And in `_initialize_client`:
```diff
 elif self.provider == APIProvider.OLLAMA:
     ...
-    ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+    self.client = OpenAI(
```
337-342: Consider validating the `description` field for consistency. The validation checks for 5 keys but omits `description`, which is present in the fallback return (line 350) and in the intent prompt's response format. This could lead to parsed intents missing the description field.
♻️ Suggested fix
```diff
 # Minimal validation (structure only)
-for key in ["action", "domain", "install_mode", "ambiguous", "confidence"]:
+for key in ["action", "domain", "install_mode", "ambiguous", "confidence", "description"]:
     if key not in parsed:
         raise ValueError("Missing intent field")
```
539-548: Consider logging intent extraction failures. The bare `except Exception` silently falls back to `install_mode = "system"`. While graceful degradation is good, logging the failure would aid debugging without changing behavior. Per coding guidelines, operations should be logged for audit purposes.
♻️ Suggested improvement
```diff
 try:
     intent = self.extract_intent(user_input)
     install_mode = intent.get("install_mode", "system")
     if not domain:
         domain = intent.get("domain", "unknown")
         if domain == "unknown":
             domain = None
-except Exception:
+except Exception as e:
+    import logging
+    logging.debug(f"Intent extraction failed, defaulting to system mode: {e}")
     install_mode = "system"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/ask.py (3)
_get_system_prompt(209-221)_call_openai(223-238)_call_claude(240-253)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (7)
cortex/llm/interpreter.py (7)
150-153: Sudo usage may conflict with explicit confirmation requirement. The prompt instructs the LLM to "Add sudo only for system-level commands" (line 152), but per the coding guidelines, silent sudo should not be used; explicit user confirmation is required for privilege escalation.
Ensure downstream execution flow (in CLI or executor) prompts the user before running any sudo commands. If that's already handled, consider adding a note here for clarity.
199-243: LGTM with minor note on API inconsistency. The method uses Ollama's native `/api/generate` endpoint with `urllib.request`, while `_call_ollama` uses the OpenAI-compatible `/v1/chat/completions` via `self.client`. This works but is an architectural inconsistency.
The 60-second timeout and fallback-on-error pattern are reasonable for intent extraction.
245-275: LGTM! The intent prompt is well-structured with clear confidence scoring guidelines and JSON format specification.
277-299: LGTM! Clean propagation of domain/install_mode context to the system prompt with defensive content handling.
356-371: LGTM! Clean parameter propagation with consistent defensive content handling.
437-500: LGTM! The enhanced parsing with JSON isolation, whitespace stripping, and multiple format handling is robust. The debug output to stderr is helpful for troubleshooting malformed responses.
626-673: LGTM! The public `extract_intent` method has comprehensive documentation per coding guidelines, proper input validation, and clean provider routing. The FAKE provider's configurable intent via environment variable is good for testing flexibility.
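To make the shape of this API concrete, here is a minimal usage sketch. The constructor arguments and provider selection are assumptions; only the method name and the return fields are confirmed by this review.

```python
from cortex.llm.interpreter import CommandInterpreter

interpreter = CommandInterpreter()  # assumed default construction
intent = interpreter.extract_intent("set up something for machine learning")

# Ambiguity is handled at the interpreter/CLI layer per the design notes above.
if intent.get("domain") == "unknown" or intent.get("ambiguous"):
    print("Needs clarification:", intent.get("description"))
else:
    print(f"{intent['action']} via {intent['install_mode']} "
          f"(confidence={intent['confidence']})")
```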
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
399-400: Missing null check on `content` before calling `.strip()`. Unlike line 296, which uses `(response.choices[0].message.content or "").strip()`, this line directly calls `.strip()` without handling `None`. If `content` is `None`, this will raise `AttributeError`.
🔧 Suggested fix
```diff
- content = response.choices[0].message.content.strip()
+ content = (response.choices[0].message.content or "").strip()
```
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 604-624: The _extract_intent_claude method lacks a docstring and a
defensive check for response.content indexing which can raise IndexError; add a
short docstring explaining the method, then guard access to
response.content[0].text by checking that response.content exists and has at
least one element (or using getattr/len) and defaulting content to an empty
string when absent before calling _parse_intent_from_text; keep the existing
exception fallback intact and reference the method name _extract_intent_claude
and the response.content[0].text access when making the change.
- Around line 296-297: The code in Interpreter._parse_commands call site assumes
response.choices[0] exists and can raise IndexError when choices is empty; add a
defensive check on response.choices (e.g., if not response.choices or not
response.choices[0].message) before accessing it, and handle the case by logging
or raising a clear error or returning an empty command list instead of indexing
into an empty list; update the block that currently does content =
(response.choices[0].message.content or "").strip() and the subsequent return
self._parse_commands(content) to perform this validation and a safe fallback.
- Around line 368-369: Add a defensive check for an empty response.content
before indexing into it: if response.content is empty, set content to an empty
string (or other appropriate default) and then call
self._parse_commands(content); otherwise keep the existing behavior of using
(response.content[0].text or "").strip(). Ensure you reference response.content
and _parse_commands in the fix so the code never does response.content[0] when
the list is empty.
🧹 Nitpick comments (2)
cortex/llm/interpreter.py (2)
44-46: Inconsistent usage of `self.ollama_url`. The instance variable is set here, but `_initialize_client` (line 101) and the error message in `_call_ollama` (line 403) read from `os.environ` again instead of using `self.ollama_url`. This creates inconsistency and duplicated logic.
♻️ Suggested fix
In `_initialize_client` (around line 101):
```diff
-ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
-self.client = OpenAI(
-    api_key="ollama",
-    base_url=f"{ollama_base_url}/v1",
-)
+self.client = OpenAI(
+    api_key="ollama",
+    base_url=f"{self.ollama_url}/v1",
+)
```
In `_call_ollama` error handling (around line 403):
```diff
-ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
 raise RuntimeError(
     f"Ollama API call failed. Is Ollama running? (ollama serve)\n"
-    f"URL: {ollama_base_url}, Model: {self.model}\n"
+    f"URL: {self.ollama_url}, Model: {self.model}\n"
     f"Error: {str(e)}"
 )
```
Exception(line 234) swallows all errors including programming errors; consider catching more specific exceptions likeurllib.error.URLError,json.JSONDecodeError, andTimeoutError.📝 Suggested improvements
def _extract_intent_ollama(self, user_input: str) -> dict: + """Extract intent using local Ollama instance. + + Args: + user_input: Natural language description of desired action + + Returns: + Dict with intent fields or fallback unknown intent on failure + """ import urllib.error import urllib.request- except Exception: + except (urllib.error.URLError, urllib.error.HTTPError, json.JSONDecodeError, TimeoutError): # True failure → unknown intent
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (6)
cortex/llm/interpreter.py (6)
112-197: Well-structured domain-aware prompt generation.The enhanced prompt system correctly incorporates domain context and install mode constraints. The explicit Python package handling rules (using pip instead of apt for Python packages) align well with the PR objectives for ML-style installations.
245-275: LGTM - well-designed intent extraction prompt.The prompt clearly defines confidence scoring tiers, rules, and response format. Consider adding a brief docstring for consistency.
325-354: LGTM - robust intent parsing with sensible fallback.The method correctly attempts JSON extraction and validates required fields, falling back to an unknown intent structure on failure. This is appropriate for handling varied LLM output formats.
437-500: Good robustness improvements to command parsing.The enhanced parsing logic correctly handles multiple JSON formats, strips empty commands, and includes helpful debug output on failure. The dual format support (string array and object array) is well implemented.
539-548: LGTM - graceful fallback on intent extraction failure.The try/except block appropriately falls back to
"system"install mode if intent extraction fails, ensuring the parse method remains functional even when intent extraction encounters issues. This aligns with the retrieved learning that ambiguity handling occurs at the interpreter layer.
626-673: Well-documented public API for intent extraction.The
extract_intentmethod has a comprehensive docstring per coding guidelines, properly routes to provider-specific implementations, and includes useful FAKE provider support for testing viaCORTEX_FAKE_INTENTenvironment variable.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
405-406: Potential `AttributeError` if content is `None`. Line 405 calls `.strip()` directly on `response.choices[0].message.content` without a null check. Compare with `_call_openai` (line 298), which safely handles this: `(response.choices[0].message.content or "").strip()`.
🐛 Suggested fix
```diff
- content = response.choices[0].message.content.strip()
+ content = (response.choices[0].message.content or "").strip()
+ if not content:
+     raise RuntimeError("Ollama returned empty response")
```
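Since this same null-guard recurs across the provider paths, one option is a tiny shared helper. The name `_safe_choice_content` is hypothetical, not part of the PR; the sketch only centralizes the pattern recommended above.

```python
def _safe_choice_content(response) -> str:
    """Return the first choice's message content, or "" when missing/None."""
    if not getattr(response, "choices", None):
        return ""
    return (response.choices[0].message.content or "").strip()
```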
🤖 Fix all issues with AI agents
In @cortex/llm/interpreter.py:
- Around line 329-358: In _parse_intent_from_text, the validation loop checks
for required keys but omits "description", so a parsed JSON missing that key
will be returned and break downstream consumers; update the validation to
include "description" (or after json.loads ensure
parsed.setdefault("description", "Unstructured intent output")) so that parsed
contains a description key with a sensible default consistent with the fallback
dict before returning parsed.
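Putting these agent instructions together, a rough sketch of the suggested parsing shape might look like this. It is a reconstruction under the assumptions stated in the review (JSON-boundary isolation, required-key validation, `setdefault` for the description), not the PR's exact code.

```python
import json

REQUIRED_KEYS = ("action", "domain", "install_mode", "ambiguous", "confidence")

def parse_intent_text(text: str) -> dict:
    """Isolate the JSON object, validate required keys, default description."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in model output")
    parsed = json.loads(text[start : end + 1])
    for key in REQUIRED_KEYS:
        if key not in parsed:
            raise ValueError(f"Missing intent field: {key}")
    parsed.setdefault("description", "Unstructured intent output")
    return parsed
```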
🧹 Nitpick comments (3)
cortex/llm/interpreter.py (3)
44-46: Inconsistent `ollama_url` usage across methods. The `ollama_url` is defensively initialized here in `__init__`, but `_initialize_client` (line 101) and `_call_ollama` (line 409) re-read from `os.environ.get("OLLAMA_BASE_URL", ...)` instead of using `self.ollama_url`. This creates potential inconsistency if the environment variable changes after initialization. Consider using `self.ollama_url` consistently:
♻️ Suggested fix
```diff
 def _initialize_client(self):
     if self.provider == APIProvider.OPENAI:
         # ... existing code ...
     elif self.provider == APIProvider.OLLAMA:
         # Ollama uses OpenAI-compatible API
         try:
             from openai import OpenAI

-            ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
             self.client = OpenAI(
                 api_key="ollama",
-                base_url=f"{ollama_base_url}/v1",  # Dummy key, not used
+                base_url=f"{self.ollama_url}/v1",  # Dummy key, not used
             )
```
Also update `_call_ollama` error handling (line 409) to use `self.ollama_url`.
Also applies to: 101-104
except Exceptionfallback is acceptable for graceful degradation, but silently swallowing the error makes debugging difficult when Ollama connectivity issues occur.♻️ Suggested improvement
- except Exception: + except Exception as e: # True failure → unknown intent + import sys + print(f"Debug: Intent extraction failed: {e}", file=sys.stderr) return { "action": "unknown",
545-554: Consider logging intent extraction failures.The exception at line 553 silently falls back to
install_mode = "system". While the fallback is safe, logging the failure would help diagnose issues when intent extraction unexpectedly fails.♻️ Suggested improvement
except Exception: + import sys + print("Debug: Intent extraction failed, defaulting to system install mode", file=sys.stderr) install_mode = "system"
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (7)
cortex/llm/interpreter.py (7)
112-197: LGTM! The enhanced system prompt with domain context and install mode constraints is well-structured. The explicit rules for Python package handling via pip (lines 153-158) and the common rules block provide clear guidance to the LLM for generating appropriate commands.
245-275: LGTM! The intent extraction prompt is well-designed with clear confidence scoring guidelines (HIGH/MEDIUM/LOW ranges) and explicit JSON response format requirements.
277-327: LGTM! Good defensive checks added for empty responses (lines 296-298) and consistent error handling in `_extract_intent_openai` with graceful fallback to unknown intent.
360-377: LGTM! Consistent defensive check for empty response and proper propagation of `domain` and `install_mode` to the system prompt.
443-506: LGTM! The enhanced `_parse_commands` method handles multiple output formats robustly, including strict JSON and loose Ollama outputs. The added `.strip()` calls (lines 492-493, 497-498) ensure clean command strings.
610-640: LGTM! Well-documented method with proper docstring, defensive empty response handling, and consistent error fallback pattern.
642-689: LGTM! Comprehensive public API with thorough docstring documenting all return fields. The FAKE provider's configurable intent via the `CORTEX_FAKE_INTENT` environment variable is a nice touch for testing flexibility.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
407-410: Potential `AttributeError` if content is `None`. Line 409 calls `.strip()` directly on `response.choices[0].message.content` without a null check. If the LLM returns `None` content, this will raise an `AttributeError`. The OpenAI method (line 298) handles this with `(content or "").strip()`.
🐛 Suggested fix
```diff
- content = response.choices[0].message.content.strip()
+ content = (response.choices[0].message.content or "").strip()
```
🧹 Nitpick comments (4)
cortex/llm/interpreter.py (4)
44-46: Inconsistent Ollama base URL access pattern.
`self.ollama_url` is set here but only used in `_extract_intent_ollama` (line 223). Meanwhile, `_initialize_client` (line 101) and `_call_ollama` (line 413) read `OLLAMA_BASE_URL` from the environment directly. Consider consolidating to always use `self.ollama_url` for consistency.
♻️ Suggested consolidation
```diff
 def _call_ollama(
     self, user_input: str, domain: str | None = None, install_mode: str | None = None
 ) -> list[str]:
     """Call local Ollama instance using OpenAI-compatible API."""
     try:
         # ...existing code...
     except Exception as e:
-        ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+        ollama_base_url = self.ollama_url
         raise RuntimeError(
             f"Ollama API call failed. Is Ollama running? (ollama serve)\n"
             f"URL: {ollama_base_url}, Model: {self.model}\n"
             f"Error: {str(e)}"
         )
```
199-243: Add return type hint and consider logging errors. The method lacks a return type annotation. Additionally, the broad `except Exception` (line 234) silently swallows all errors, including network failures. Consider logging the exception for debugging purposes, especially since Ollama reliability is called out in the PR comments.
♻️ Suggested improvement
```diff
-    def _extract_intent_ollama(self, user_input: str) -> dict:
+    def _extract_intent_ollama(self, user_input: str) -> dict[str, Any]:
         import urllib.error
         import urllib.request
+        import sys

         # ... existing code ...
         try:
             with urllib.request.urlopen(req, timeout=60) as response:
                 raw = json.loads(response.read().decode("utf-8"))
             text = raw.get("response", "")
             return self._parse_intent_from_text(text)
-        except Exception:
+        except Exception as e:
             # True failure → unknown intent
+            print(f"Debug: Ollama intent extraction failed: {e}", file=sys.stderr)
             return {
                 "action": "unknown",
```
245-275: Add return type hint for consistency. Minor: the method should include a return type annotation for consistency with other methods in the class.
```diff
- def _get_intent_prompt(self) -> str:
+ def _get_intent_prompt(self) -> str:
```
Actually, the return type `-> str` is already present. LGTM!
549-558: Silent fallback on intent extraction failure. The broad `except Exception` at line 557 silently falls back to `install_mode="system"`. While this provides resilience, consider logging a warning so users are aware when intent extraction fails and defaults are used.
♻️ Optional: Add debug logging
```diff
 try:
     intent = self.extract_intent(user_input)
     install_mode = intent.get("install_mode", "system")
     if not domain:
         domain = intent.get("domain", "unknown")
         if domain == "unknown":
             domain = None
-except Exception:
+except Exception as e:
+    import sys
+    print(f"Debug: Intent extraction failed, using defaults: {e}", file=sys.stderr)
     install_mode = "system"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/llm/interpreter.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.817Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.817Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/llm/interpreter.py
🧬 Code graph analysis (1)
cortex/llm/interpreter.py (1)
cortex/ask.py (3)
_get_system_prompt (209-221), _call_openai (223-238), _call_claude (240-253)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (8)
cortex/llm/interpreter.py (8)
137-159: Well-structured system prompt with clear rules. The common rules block effectively guides the LLM on POSIX compliance, Python package handling, and safety constraints. The emphasis on using `pip` for Python packages (lines 153-158) directly addresses the PR objective for proper install-mode awareness.
Regarding line 152 ("Add sudo only for system-level commands"): per coding guidelines, sudo should require explicit user confirmation. Verify that the execution layer prompts for confirmation before running sudo commands; this prompt instructs generation, not execution.
277-301: Good defensive handling for empty responses. The empty response check at lines 296-297 aligns with the defensive pattern used in `cortex/ask.py` for handling edge cases in LLM responses.
303-327: Consistent error handling pattern. Good implementation that mirrors the Ollama intent extraction pattern. Including the error message in the fallback description (line 323) aids debugging.
329-362: Robust JSON extraction with appropriate fallback. The parsing logic handles loose LLM output well by locating JSON boundaries and validating required keys. The fallback at lines 354-362 ensures the method never raises, providing a safe default for downstream code.
364-381: Consistent updates for domain/install_mode support. The changes mirror the OpenAI method pattern with proper empty response handling.
447-510: Improved robustness for diverse LLM output formats. The enhanced parsing logic with JSON isolation (lines 474-480) and support for both string and dict command formats addresses the PR objective of handling Ollama's less reliable structured output. The debug output to stderr aids troubleshooting; a sketch of this dual-format handling follows this list.
614-644: Well-documented intent extraction method. Good implementation with proper docstring following coding guidelines for public API documentation. The pattern is consistent with other provider implementations.
646-693: Comprehensive public API with proper documentation. The `extract_intent` method has thorough docstring documentation covering all return fields, which satisfies coding guidelines for public APIs. The FAKE provider's support for the `CORTEX_FAKE_INTENT` environment variable enables flexible test scenarios.
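As a rough illustration of the dual-format handling praised above: the real `_parse_commands` is more elaborate, and the `"command"` key used for dict entries here is an assumption, but the overall shape (isolate the JSON array, accept both string and object entries, drop empties) matches what the review describes.

```python
import json

def parse_commands_loose(content: str) -> list[str]:
    """Isolate a JSON array and accept both string and object entries."""
    start, end = content.find("["), content.rfind("]")
    if start == -1 or end == -1:
        raise ValueError("No JSON array in model output")
    commands: list[str] = []
    for item in json.loads(content[start : end + 1]):
        if isinstance(item, str):
            cmd = item.strip()
        elif isinstance(item, dict):
            cmd = str(item.get("command", "")).strip()  # assumed key name
        else:
            continue
        if cmd:
            commands.append(cmd)
    return commands
```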
Can you review this?
Anshgrover23
left a comment
@pavanimanchala53 added some comments.
cortex/cli.py
Outdated
```python
# High confidence: proceed directly
# Generate natural understanding message
if _is_interactive() or not _is_interactive():  # Always show understanding
```
@pavanimanchala53 Please address this one as well.
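The flagged condition is a tautology: `_is_interactive() or not _is_interactive()` is always `True`, so the guard does nothing. If the intent is to always show the understanding message, the check can simply be dropped. The stubs below are stand-ins for the real CLI helpers, used only to make the before/after runnable:

```python
def _is_interactive() -> bool:  # stub standing in for the CLI helper
    return True

def show_understanding_message() -> None:  # hypothetical display call
    print("I understood you want...")

# Before: the condition can never be False
if _is_interactive() or not _is_interactive():  # always True
    show_understanding_message()

# After: state the intent directly
show_understanding_message()  # always shown, interactive or not
```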
```python
clarified = input("What would you like to install? ").strip()
if clarified:
```
@pavanimanchala53 Address this one.
```python
self.stdin_data = None
if not sys.stdin.isatty():
    try:
        self.stdin_data = sys.stdin.read()
    except OSError:
        pass
```
@pavanimanchala53 Address this one as well.
```python
self.stdin_data = None
if not sys.stdin.isatty():
    try:
        self.stdin_data = sys.stdin.read()
    except OSError:
        pass
```
@pavanimanchala53 Address this one as well.
…53/cortex into nl-parser-feature
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
383-410: Guard against empty choices from the OpenAI-compatible Ollama client.
`response.choices[0]` can raise `IndexError` if Ollama returns an empty list. Add defensive checks consistent with the `_call_openai()` and `_call_claude()` methods.
🛡️ Proposed fix
```diff
 response = self.client.chat.completions.create(
     model=self.model,
     messages=[
         {
             "role": "system",
             "content": self._get_system_prompt(
                 simplified=True, domain=domain, install_mode=install_mode
             ),
         },
         {"role": "user", "content": enhanced_input},
     ],
     temperature=0.1,  # Lower temperature for more focused responses
     max_tokens=300,  # Reduced tokens for faster response
 )
+if not response.choices:
+    raise RuntimeError("Ollama returned empty response")
-content = response.choices[0].message.content.strip()
+content = (response.choices[0].message.content or "").strip()
 return self._parse_commands(content)
```
♻️ Duplicate comments (7)
cortex/llm/interpreter.py (1)
44-46: Use the stored Ollama URL consistently. Now that `self.ollama_url` is initialized, `_initialize_client()` should reuse it to avoid divergence if the environment changes after init.
♻️ Proposed fix
```diff
-ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
 self.client = OpenAI(
     api_key="ollama",
-    base_url=f"{ollama_base_url}/v1",  # Dummy key, not used
+    base_url=f"{self.ollama_url}/v1",
 )
```
cortex/cli.py (6)
924-1025: Avoid double LLM calls by reusing extracted intent.
`extract_intent()` is called here, then `parse()` calls it again internally. Consider passing `domain` (and adding an `install_mode` param) to `parse()` to skip the second LLM call.
♻️ Suggested direction
```diff
- commands = interpreter.parse(prompt)
+ commands = interpreter.parse(prompt, domain=domain, install_mode=install_mode)  # after adding param
```
This likely requires extending `CommandInterpreter.parse()` to accept `install_mode` and skip `extract_intent()` when supplied.
55-62: Bound stdin reads to avoid blocking/large memory spikes.
`sys.stdin.read()` is unbounded and can hang or consume large input. Add a size cap (and optionally note truncation).
🛡️ Proposed fix
```diff
 self.stdin_data = None
 if not sys.stdin.isatty():
     try:
-        self.stdin_data = sys.stdin.read()
+        # Limit stdin to 100KB to avoid blocking or large memory usage
+        MAX_STDIN_SIZE = 100 * 1024
+        self.stdin_data = sys.stdin.read(MAX_STDIN_SIZE)
     except Exception:
         # Silently ignore any stdin reading errors (OSError, UnicodeDecodeError, etc.)
         # stdin_data remains None and will be handled gracefully downstream
         pass
```
102-171: Provider comparison bug causes all LLM helper calls to fall back.
`interpreter.provider.name` is `"OPENAI" | "CLAUDE" | ...`, so comparing to lowercase strings never matches. Also, exceptions are swallowed silently, which masks real failures.
🐛 Proposed fix
```diff
-from cortex.llm.interpreter import CommandInterpreter
+from cortex.llm.interpreter import CommandInterpreter, APIProvider
@@
-        if interpreter.provider.name == "openai":
+        if interpreter.provider == APIProvider.OPENAI:
             response = interpreter.client.chat.completions.create(
                 model=interpreter.model,
                 messages=[
                     {"role": "system", "content": system_message},
                     {"role": "user", "content": prompt},
                 ],
                 temperature=temperature,
                 max_tokens=max_tokens,
             )
             return response.choices[0].message.content.strip()
-        elif interpreter.provider.name == "claude":
+        elif interpreter.provider == APIProvider.CLAUDE:
             response = interpreter.client.messages.create(
                 model=interpreter.model,
                 max_tokens=max_tokens,
                 temperature=temperature,
                 system=system_message,
                 messages=[{"role": "user", "content": prompt}],
             )
             return response.content[0].text.strip()
-        elif interpreter.provider.name == "ollama":
+        elif interpreter.provider == APIProvider.OLLAMA:
             # Defensive check: ollama_url only exists if provider is OLLAMA
             if not hasattr(interpreter, "ollama_url"):
                 return fallback
@@
-        elif interpreter.provider.name == "fake":
+        elif interpreter.provider == APIProvider.FAKE:
             return fallback
-    except Exception:
-        pass
+    except Exception as e:
+        if self.verbose:
+            import sys
+            print(f"[DEBUG] LLM helper failed: {e}", file=sys.stderr)
+        pass
```
237-282: Suggestion parsing fails for multi‑digit list items and short lines.
`line[1:3]` breaks on “10. …” and short lines can raise `IndexError`.
🛠️ Proposed fix
```diff
 suggestions = []
 lines = content.split("\n")
+import re
 for line in lines:
     line = line.strip()
-    if line and line[0].isdigit() and line[1:3] in [". ", ") "]:
-        suggestion = line.split(". ", 1)[-1].split(") ", 1)[-1].strip()
-        if suggestion:
-            suggestions.append(suggestion)
+    match = re.match(r"^\d+[.)]\s*(.+)$", line)
+    if match:
+        suggestion = match.group(1).strip()
+        if suggestion:
+            suggestions.append(suggestion)
```
945-951: Wrap interactive `input()` calls for EOF/interrupt safety. These inputs can raise `EOFError`/`KeyboardInterrupt` and should exit gracefully like other CLI prompts.
🛡️ Proposed fix (pattern)
```diff
-clarified = input("What would you like to install? ").strip()
+try:
+    clarified = input("What would you like to install? ").strip()
+except (EOFError, KeyboardInterrupt):
+    console.print()
+    cx_print("Operation cancelled", "warning")
+    return 1
```
Apply the same guard to the other `input()` calls in this block.
Also applies to: 980-981, 1060-1075, 1089-1091
1049-1055: Non-interactive auto-approval can bypass explicit sudo confirmation. Auto-approving in non-interactive mode means privileged commands can run without explicit confirmation. Consider requiring a dedicated `--yes` (or similar) flag before executing with sudo in non-interactive contexts. As per coding guidelines, avoid silent sudo execution.
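A minimal sketch of such a gate, assuming a hypothetical `--yes` flag and a simple prefix check for sudo (neither detail is in the PR):

```python
def may_execute(commands: list[str], interactive: bool, assume_yes: bool) -> bool:
    """Refuse privileged commands unless confirmed interactively or via --yes."""
    needs_sudo = any(cmd.strip().startswith("sudo ") for cmd in commands)
    if needs_sudo and not interactive and not assume_yes:
        print("Refusing to run sudo commands non-interactively; pass --yes to approve.")
        return False
    return True
```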
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.pycortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/cli.pycortex/llm/interpreter.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
📚 Learning: 2026-01-05T20:26:18.626Z
Learnt from: pavanimanchala53
Repo: cortexlinux/cortex PR: 439
File: cortex/cli.py:62-73
Timestamp: 2026-01-05T20:26:18.626Z
Learning: In cortex/cli.py, the _is_ambiguous_request method intentionally only checks for domain=="unknown" and does not recheck the ambiguous field from the intent dict. The design assumes that ambiguity handling occurs at the interpreter layer through LLM prompting, and the domain check in CLI is sufficient as a safety mechanism.
Applied to files:
cortex/cli.pycortex/llm/interpreter.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Do not use silent sudo - require explicit user confirmation for privilege escalation
Applied to files:
cortex/cli.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Add docstrings for all public APIs in Python code
Applied to files:
cortex/cli.pycortex/llm/interpreter.py
🔇 Additional comments (17)
cortex/llm/interpreter.py (11)
112-197: Prompt constraints are clear and well-scoped. Domain and install-mode constraints look tight and should reduce cross-domain command leakage.
199-243: Ollama intent extraction path looks solid. Timeouts and fallback shape are consistent with other providers.
245-275: Intent prompt formatting is consistent. The JSON schema guidance and confidence tiers are clear.
277-299: OpenAI call wiring looks good. System prompt now carries domain/install_mode correctly.
303-327: OpenAI intent extraction flow is tidy. Fallback shape matches the schema and is consistent.
329-362: Intent parsing safeguards look good. Defaulting `description` avoids schema drift downstream.
364-381: Claude call uses the updated prompt correctly. Nice to see consistent empty-response handling.
447-512: Command parsing improvements are robust. The JSON repair + strict parsing path is a good hardening step.
532-604: Intent-driven domain/install_mode propagation is sensible. Nice to see system prompt caching incorporate those constraints.
615-645: Claude intent extraction looks consistent. Schema-aligned fallback is a good safety net.
647-694: Public intent API is clear and well-documented. The docstring and provider routing read cleanly.
cortex/cli.py (6)
41-47: Interactive check helper is clean. Type hint + docstring improve readability.
64-73: Stdin context composition is straightforward. Nice, minimal prompt merge.
75-100: No additional issues to flag here.
173-198: Understanding-message helper reads well. Fallbacks are sensible and aligned with domain hints.
199-221: Clarifying-question helper looks good. Clear prompt and safe fallback.
222-235: Clarification request helper is fine. Consistent tone and defaults.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.




Related Issue
Closes #248
Summary
This PR improves the natural-language install flow by standardizing prompt
construction and ensuring deterministic command generation without hardcoded
package mappings.
What changed
Testing
Added tests/test_nl_installer.py
Verified command generation without execution, sudo, or network access (see the illustrative sketch below)
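A minimal test in that spirit might look like this. The `CORTEX_FAKE_INTENT` payload shape and the way the FAKE provider is selected are assumptions based on the review comments above, not confirmed details of the test suite.

```python
import json
from cortex.llm.interpreter import CommandInterpreter

def test_extract_intent_with_fake_provider(monkeypatch):
    # Assumed payload; the real FAKE-provider contract may differ.
    monkeypatch.setenv("CORTEX_FAKE_INTENT", json.dumps({
        "action": "install",
        "domain": "ml",
        "install_mode": "python",
        "ambiguous": False,
        "confidence": 0.9,
        "description": "Install an ML stack via pip",
    }))
    interpreter = CommandInterpreter()  # assumes FAKE provider via env/config
    intent = interpreter.extract_intent("install pytorch")
    assert intent["action"] == "install"
```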
All tests pass via pytest
Notes
Commands are executed only when the --execute flag is passed
AI Used
working video
with --execute flag
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests
✏️ Tip: You can customize this high-level summary in your review settings.