fix: skip ollama connection when not configured #316 #437
Conversation
📝 Walkthrough
Doctor, LLM router, and CLI now gate Ollama usage: Ollama checks and client init run only when Ollama is explicitly configured (via CORTEX_PROVIDER=ollama or OLLAMA_BASE_URL).
Sequence Diagram(s)
sequenceDiagram
autonumber
participant CLI as CLI
participant Env as Env/Config
participant Doctor as doctor.py
participant Router as llm_router.py
participant Ollama as Ollama Service
participant Cloud as Cloud Provider
CLI->>Env: read CORTEX_PROVIDER / OLLAMA_BASE_URL / API keys
CLI->>Doctor: run health checks
Doctor->>Env: check CORTEX_PROVIDER / OLLAMA_BASE_URL
alt Ollama configured
Doctor->>Ollama: GET /api/tags (base URL from param/env or fallback)
Ollama-->>Doctor: tags / OK
else Ollama not configured
Doctor-->>CLI: INFO: skip Ollama check
end
CLI->>Router: request completion (provider from Env/args)
Router->>Env: determine ollama_enabled and base URL
alt provider == "ollama" and ollama_enabled
Router->>Ollama: send completion request
Ollama-->>Router: response
Router-->>CLI: return Ollama response
else provider == "ollama" and not enabled
Router-->>CLI: raise ValueError("Unsupported provider: ...")
else provider != "ollama"
Router->>Cloud: call external provider (OpenAI/Anthropic)
Cloud-->>Router: response
Router-->>CLI: return response
end
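The gating the diagram shows can be summarized in a few lines of Python. This is only an illustrative sketch of the control flow, not the repository's actual router code; the function name and return values are invented for the example.

```python
def route_completion(provider: str, ollama_enabled: bool) -> str:
    """Route a request the way the sequence diagram describes."""
    if provider == "ollama":
        if not ollama_enabled:
            # Ollama requested but never configured: fail loudly instead of connecting.
            raise ValueError(f"Unsupported provider: {provider}")
        return "response from the local Ollama server"
    # Every other provider goes to a cloud backend (OpenAI/Anthropic).
    return "response from a cloud provider"
```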
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
cortex/llm_router.py (1)
185-202: Conditional initialization achieves PR objective. The guard correctly prevents Ollama client initialization when not configured, replacing the previous warning with an informational message. This improves the user experience for cloud-only users.
Minor suggestions for improved diagnostics:
- Line 196: Consider including the URL in the success message for debugging:
logger.info("✅ Ollama client initialized (%s at %s)", self.ollama_model, self.ollama_base_url)- Line 199: The message "connection failed" is slightly misleading since OpenAI client initialization doesn't immediately connect to the server. Consider:
logger.warning("⚠️ Ollama enabled but client initialization failed: %s", e)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/doctor.py, cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/doctor.py
🧬 Code graph analysis (2)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
cortex/doctor.py (1)
cortex/validators.py (1)
validate_api_key(58-86)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (3)
cortex/doctor.py (3)
294-303: Guard logic correctly prevents unnecessary checks. The conditional logic appropriately skips Ollama diagnostics when neither CORTEX_PROVIDER=ollama nor OLLAMA_BASE_URL is configured. The use of INFO-level logging for the skip case is better than the previous warning, improving UX for cloud-only users.
Note: Empty strings are handled correctly here since not ollama_url evaluates to True for "", causing the check to be skipped.
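A minimal sketch of the kind of guard this comment describes for doctor.py, with illustrative names; the real check reads the same two environment variables quoted above.

```python
import os


def should_check_ollama() -> bool:
    """Run the Ollama health check only when it is explicitly configured."""
    provider = os.environ.get("CORTEX_PROVIDER", "").strip().lower()
    ollama_url = os.environ.get("OLLAMA_BASE_URL", "").strip()
    # An unset or blank URL is falsy, so the diagnostic is skipped for cloud-only users.
    return provider == "ollama" or bool(ollama_url)
```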
318-320: URL fallback logic is sound. The use of ollama_url or "http://localhost:11434" correctly falls back to localhost when the configured URL is empty or None.
334-334: Idiomatic use of underscore for unused value. Replacing the unused error_message variable with _ follows Python conventions for explicitly ignoring tuple unpacking values.
@Anshgrover23 Can you review my PR?
Anshgrover23
left a comment
@Kesavaraja67
Please test your implementation locally before requesting a review.
You also need to update the default / fallback provider logic here:
Line 64 in f18c7dd
def _get_provider(self) -> str:
Currently, Ollama is treated as the fallback provider even when no API key is configured, which needs to be addressed.
After making these changes, please record a short demo video using the cortex CLI commands. Avoid running the Python files directly.
Actionable comments posted: 4
🧹 Nitpick comments (1)
cortex/cli.py (1)
76-85: Good: Prevents unwanted Ollama connection attempts. The logic correctly requires explicit Ollama configuration (OLLAMA_BASE_URL or CORTEX_PROVIDER=ollama) before returning "ollama". The fallthrough to sys.exit(1) with a clear error message addresses the PR objective to fail gracefully when no provider is configured.
However, calling sys.exit(1) from within a method that declares return type str is unconventional. Consider raising an exception instead for better testability.
🔎 Optional: Raise exception instead of sys.exit for testability
- # 4. If nothing is found, don't just guess "ollama".
- # Print a clear message and exit gracefully
- self._print_error("No AI provider configured.")
- cx_print("Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.", "info")
- sys.exit(1)  # Stop execution here
+ # 4. If nothing is found, don't just guess "ollama".
+ raise RuntimeError(
+     "No AI provider configured. "
+     "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
+ )
The exception can be caught in calling code (like main()) which already handles RuntimeError.
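A small sketch of what catching the error at the entry point could look like, under the assumption of a dispatch function like the run_cli() stub below (the real CLI wiring differs):

```python
import sys


def run_cli() -> None:
    """Stand-in for the real command dispatch; raises when no provider is configured."""
    raise ValueError("No AI provider configured")


def main() -> int:
    """Convert configuration errors into a non-zero exit code at the top level."""
    try:
        run_cli()
    except (ValueError, RuntimeError) as exc:
        print(f"Error: {exc}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```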
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py, cortex/llm_router.py
🧬 Code graph analysis (2)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print (49-69)
cortex/llm/interpreter.py (1)
parse(319-377)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 81-81: ruff check failed: Trailing whitespace (W291) detected in cortex/cli.py:81. Command: 'ruff check . --output-format=github'.
🪛 GitHub Check: lint
cortex/cli.py
[failure] 81-81: Ruff (W291)
cortex/cli.py:81:61: W291 Trailing whitespace
[failure] 601-601: Ruff (W291)
cortex/cli.py:601:61: W291 Trailing whitespace
cortex/llm_router.py
[failure] 176-176: Ruff (W291)
cortex/llm_router.py:176:43: W291 Trailing whitespace
[failure] 174-174: Ruff (W293)
cortex/llm_router.py:174:1: W293 Blank line contains whitespace
🪛 GitHub Check: Lint
cortex/cli.py
[failure] 81-81: Ruff (W291)
cortex/cli.py:81:61: W291 Trailing whitespace
[failure] 601-601: Ruff (W291)
cortex/cli.py:601:61: W291 Trailing whitespace
cortex/llm_router.py
[failure] 176-176: Ruff (W291)
cortex/llm_router.py:176:43: W291 Trailing whitespace
[failure] 174-174: Ruff (W293)
cortex/llm_router.py:174:1: W293 Blank line contains whitespace
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/llm_router.py (2)
171-204: Ollama guardrail and initialization logic looks correct. The implementation correctly:
- Checks for explicit configuration via CORTEX_PROVIDER == "ollama", a non-empty OLLAMA_BASE_URL, or the constructor argument
- Handles empty strings properly with bool(ollama_url_env) and ollama_url_env.strip() != ""
- Falls back to the default URL only when Ollama is enabled
- Logs at INFO level (not WARNING) when Ollama is skipped, avoiding terminal noise
This addresses the PR objective to skip Ollama connection when not configured.
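Putting the three checks quoted above together, the enablement test amounts to something like the following sketch (parameter names follow the review's quotes; the wrapper function is illustrative):

```python
import os


def ollama_enabled(cortex_provider: str, ollama_base_url: str | None) -> bool:
    """Enable Ollama only when it is explicitly requested or a base URL is supplied."""
    ollama_url_env = os.environ.get("OLLAMA_BASE_URL", "")
    return (
        cortex_provider == "ollama"
        or bool(ollama_url_env and ollama_url_env.strip())  # blank env values stay disabled
        or bool(ollama_base_url)                            # constructor argument also counts
    )
```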
319-320: Good addition of explicit error path for unsupported providers. Adding raise ValueError for unknown providers improves error handling and makes the code more defensive.
Force-pushed from fd3ebc2 to 4ffbad6
Actionable comments posted: 0
♻️ Duplicate comments (4)
cortex/cli.py (2)
598-603: Duplicate: Hardcoded fake provider command and trailing whitespace. Line 601 has trailing whitespace causing lint failures. Additionally, the hardcoded command ["sudo apt-get install -y nginx"] duplicates logic that should be shared with llm_router.py (lines 311-318). Both issues were already flagged in previous reviews.
81-85: Critical: sys.exit(1) breaks tests and violates separation of concerns. Line 85 calls sys.exit(1), which terminates the entire process including test runners. The pipeline failure confirms this is causing CI failures. Methods should raise exceptions instead of calling sys.exit() directly; let the caller (e.g., main()) decide whether to exit.
🔎 Proposed fix
  # 4. If nothing is found, don't just guess "ollama".
  # Print a clear message and exit gracefully
  self._print_error("No AI provider configured.")
  cx_print("Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.", "info")
- sys.exit(1)  # Stop execution here
+ raise ValueError("No AI provider configured. Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.")
Then ensure callers handle the exception appropriately (e.g., in main() catch ValueError and exit with code 1).
cortex/llm_router.py (2)
171-179: Duplicate: Trailing whitespace on lines 174 and 176. Static analysis reports trailing whitespace on lines 174 and 176, causing pipeline failures. This was already flagged in previous reviews.
308-318: Critical (Duplicate): Hardcoded response overwrites actual Ollama completion. Lines 311-318 overwrite the real Ollama response from line 309 with a hardcoded synthetic LLMResponse containing "sudo apt-get install -y nginx" and provider set to CLAUDE (not OLLAMA). This breaks all Ollama completions and appears to be accidentally committed debug/demo code. This critical issue was already flagged in previous reviews.
🔎 Proposed fix - remove hardcoded override
  elif routing.provider == LLMProvider.OLLAMA:
      response = self._complete_ollama(messages, temperature, max_tokens, tools)
-     # Explicitly return an LLMResponse object so .get() and other attributes work
-     response = LLMResponse(
-         content="sudo apt-get install -y nginx",
-         provider=LLMProvider.CLAUDE,
-         model="fake-model",
-         tokens_used=0,
-         cost_usd=0.0,
-         latency_seconds=0.1,
-     )
  else:
      raise ValueError(f"Unsupported provider: {routing.provider}")
🧹 Nitpick comments (1)
cortex/llm_router.py (1)
175-179: Minor: Redundant empty string check. Line 177 checks ollama_url_env.strip() != "" after it already checks bool(ollama_url_env). For a non-None string, bool("") is False, so the strip check is redundant. Consider simplifying to just the truthy check for cleaner code.
🔎 Suggested simplification
  self.ollama_enabled = (
      cortex_provider == "ollama" or
-     (bool(ollama_url_env) and ollama_url_env.strip() != "") or
+     bool(ollama_url_env and ollama_url_env.strip()) or
      bool(ollama_base_url)
  )
This checks truthiness after stripping, combining both checks cleanly.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/cli.py
🧬 Code graph analysis (2)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print (49-69)
cortex/llm/interpreter.py (1)
parse(319-377)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 85-85: _get_provider() called SystemExit(1) because no AI provider is configured (No AI provider configured).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (2)
cortex/llm_router.py (2)
187-204: LGTM: Ollama initialization properly guarded. The conditional initialization of Ollama clients (lines 187-204) correctly gates access behind the ollama_enabled flag and handles connection failures gracefully with appropriate logging levels (warning when enabled but failing, info when not configured).
324-324: LGTM: Clarifying comment improves code documentation. The added comment on line 324 helpfully documents that stats tracking requires a valid response object, improving code readability.
Force-pushed from 4ffbad6 to 6128711
Actionable comments posted: 1
🧹 Nitpick comments (3)
cortex/llm_router.py (3)
180-182: Handle whitespace-only strings defensively. If the OLLAMA_BASE_URL env var or the ollama_base_url parameter contains only whitespace (e.g., " "), line 182 will store that whitespace string in self.ollama_base_url because whitespace is truthy in Python. While this won't cause immediate issues (since ollama_enabled correctly treats whitespace-only strings as disabled on line 177), it's poor data hygiene.
🔎 Proposed fix to normalize whitespace-only strings
  # Use the parameter, or the env var, or the default—guaranteeing no empty strings
- self.ollama_base_url = ollama_base_url or ollama_url_env or "http://localhost:11434"
+ base_url_candidate = ollama_base_url or ollama_url_env
+ self.ollama_base_url = (base_url_candidate.strip() if base_url_candidate and base_url_candidate.strip() else None) or "http://localhost:11434"
Or more concisely:
  # Use the parameter, or the env var, or the default—guaranteeing no empty strings
- self.ollama_base_url = ollama_base_url or ollama_url_env or "http://localhost:11434"
+ self.ollama_base_url = (ollama_base_url or ollama_url_env or "").strip() or "http://localhost:11434"
316-318: Misleading comment about response validation. The comment "Track stats only if response is a valid object" (line 316) is misleading. The code only checks the track_costs flag, not whether response is a valid object. If validation of the response was intended, it should be implemented; otherwise, the comment should be corrected.
🔎 Suggested fix
Either remove the misleading comment:
- # Track stats only if response is a valid object
  if self.track_costs:
      self._update_stats(response)
Or add actual validation if it's needed:
- # Track stats only if response is a valid object
- if self.track_costs:
+ if self.track_costs and response:
      self._update_stats(response)
171-179: Trailing whitespace has been properly cleaned up. No trailing whitespace detected in the Ollama guardrail section, so verification passes.
Optional: Simplify redundant bool() wrappers for consistency. The bool() wrappers on lines 177-178 are redundant since the or operator evaluates truthiness directly. Consider removing them for clarity, and apply the same pattern to ollama_base_url:
Suggested simplification
  self.ollama_enabled = (
      cortex_provider == "ollama"
-     or bool(ollama_url_env and ollama_url_env.strip())
-     or bool(ollama_base_url)
+     or (ollama_url_env and ollama_url_env.strip())
+     or ollama_base_url
  )
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/llm_router.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py
🧬 Code graph analysis (1)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info(200-202)warning(204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (1)
cortex/llm_router.py (1)
187-204: LGTM! Conditional initialization aligns with PR objectives. The conditional Ollama initialization correctly implements the guardrail to skip Ollama when not configured. The info-level logging (line 204) appropriately reduces noise for users who don't use local LLMs.
Note on exception handling: The try-except block (lines 199-201) may not catch actual Ollama connection failures, since OpenAI client instantiation typically doesn't connect until the first API call. Actual connection errors will surface later in _complete_ollama() (line 488), which is acceptable behavior.
If you want to verify that connection failures are properly handled, confirm that _complete_ollama() (lines 444-491) includes appropriate error handling for when Ollama is unreachable.
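For context, the kind of error handling this verification request has in mind might look like the sketch below. It assumes the OpenAI-compatible client and openai.APIConnectionError; the actual _complete_ollama() implementation may differ.

```python
import openai


def complete_ollama(client: "openai.OpenAI", messages: list[dict], model: str):
    """Send a completion request and surface unreachable-server errors clearly."""
    try:
        return client.chat.completions.create(model=model, messages=messages)
    except openai.APIConnectionError as exc:
        # Raised on the first real request when nothing listens at OLLAMA_BASE_URL.
        raise RuntimeError("Ollama is configured but unreachable; is the server running?") from exc
```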
Force-pushed from 6128711 to a74e72d
Actionable comments posted: 0
♻️ Duplicate comments (1)
cortex/llm_router.py (1)
308-312: Remove leftover development comment. Line 309 contains a development note referencing past review feedback that should be removed.
🔎 Proposed fix
  elif routing.provider == LLMProvider.OLLAMA:
-     # FIX: Remove the hardcoded LLMResponse block that followed this
      response = self._complete_ollama(messages, temperature, max_tokens, tools)
  else:
      raise ValueError(f"Unsupported provider: {routing.provider}")
🧹 Nitpick comments (5)
cortex/llm_router.py (1)
316-318: Misleading comment on stats tracking. The comment "Track stats only if response is a valid object" suggests a validity check, but the code unconditionally calls _update_stats(response). The comment is misleading since there's no explicit check; the implicit guarantee is that exceptions would prevent reaching this line. Consider removing or clarifying the comment.
🔎 Proposed fix
- # Track stats only if response is a valid object
  if self.track_costs:
      self._update_stats(response)
tests/test_cli.py (2)
29-36: Move pytest import to module level and consider using assertRaises. Importing pytest inside each test method is inefficient. Since this is a unittest.TestCase, you could use self.assertRaises for consistency, or move the import to the module level if you prefer pytest.raises. Also, mock_stderr is declared but unused in the test body.
🔎 Option 1: Use unittest's assertRaises
  @patch.dict(os.environ, {}, clear=True)
  @patch("sys.stderr")
- def test_get_api_key_not_found(self, mock_stderr):  # Add 'mock_stderr' back here
-     """When no API key or Ollama is set, it should raise ValueError"""
-     import pytest
-
-     with pytest.raises(ValueError, match="No AI provider configured"):
+ def test_get_api_key_not_found(self, mock_stderr):
+     """When no API key or Ollama is set, it should raise ValueError."""
+     with self.assertRaises(ValueError) as context:
          self.cli._get_api_key()
+     self.assertIn("No AI provider configured", str(context.exception))
🔎 Option 2: Move import to module level
Add at the top of the file:
import pytest
Then remove the inline imports from lines 33 and 67.
63-70: Same refactor opportunity as test_get_api_key_not_found. This test has the same pattern of importing pytest inside the method and an unused mock_stderr parameter. Apply the same fix as suggested above.
🔎 Proposed fix using assertRaises
  @patch.dict(os.environ, {}, clear=True)
  @patch("sys.stderr")
- def test_install_no_api_key(self, mock_stderr):  # Add 'mock_stderr' back here
-     """Test that install() raises ValueError when no provider is configured."""
-     import pytest
-
-     with pytest.raises(ValueError, match="No AI provider configured"):
+ def test_install_no_api_key(self, mock_stderr):
+     """Test that install() raises ValueError when no provider is configured."""
+     with self.assertRaises(ValueError) as context:
          self.cli.install("docker")
+     self.assertIn("No AI provider configured", str(context.exception))
cortex/cli.py (2)
76-78: Redundant condition in Ollama fallback check. The condition explicit_provider == "ollama" on line 77 is redundant because if explicit_provider were "ollama", the function would have already returned at line 68. This branch can only be reached if explicit_provider is not in ["ollama", "openai", "claude", "fake"].
🔎 Proposed fix
  # 3. SMART FALLBACK: Only return "ollama" if it's explicitly set
- if os.environ.get("OLLAMA_BASE_URL") or explicit_provider == "ollama":
+ if os.environ.get("OLLAMA_BASE_URL"):
      return "ollama"
80-87: Redundant error messaging. The error is communicated twice:
- Line 85: self._print_error("No AI provider configured.") outputs with "error" styling
- Line 86: cx_print(error_msg, "info") outputs the same message (plus guidance) with "info" styling
Consider consolidating into a single, clear message.
🔎 Proposed fix
  # 4. If nothing is found, raise an error instead of sys.exit
- error_msg = (
-     "No AI provider configured. "
-     "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
- )
- self._print_error("No AI provider configured.")
- cx_print(error_msg, "info")
- raise ValueError(error_msg)
+ self._print_error("No AI provider configured.")
+ cx_print(
+     "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.",
+     "info",
+ )
+ raise ValueError("No AI provider configured")
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
cortex/cli.py, cortex/llm_router.py, tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/cli.py, tests/test_cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (3)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print(49-69)
tests/test_cli.py (1)
cortex/cli.py (2)
_get_api_key (45-62), install (550-796)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/llm_router.py (1)
171-204: Ollama guardrail logic looks good. The implementation correctly:
- Checks for explicit CORTEX_PROVIDER=ollama or a non-empty OLLAMA_BASE_URL before enabling Ollama
- Uses bool(ollama_url_env and ollama_url_env.strip()) to handle empty strings properly
- Falls back to the default URL only when neither parameter nor env var provides a value
- Logs at INFO level when Ollama is skipped (avoiding terminal clutter)
This aligns well with the PR objective to skip unnecessary Ollama connection attempts.
cortex/cli.py (1)
76-78: Minor inconsistency in whitespace handling with llm_router.py.
llm_router.py uses bool(ollama_url_env and ollama_url_env.strip()) to handle whitespace-only values, but cli.py uses a simple os.environ.get("OLLAMA_BASE_URL") check. If a user sets OLLAMA_BASE_URL=" " (whitespace only), cli.py would return "ollama" but llm_router.py would set ollama_enabled = False, causing a mismatch.
Consider aligning the checks:
  # 3. SMART FALLBACK: Only return "ollama" if it's explicitly set
- if os.environ.get("OLLAMA_BASE_URL") or explicit_provider == "ollama":
+ ollama_url = os.environ.get("OLLAMA_BASE_URL", "").strip()
+ if ollama_url:
      return "ollama"
Force-pushed from a74e72d to e871844
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/test_cli.py (1)
63-70: Same improvements as test_get_api_key_not_found. This test has the same pattern as test_get_api_key_not_found:
- pytest is imported inside the function (line 67) instead of at module level
- the mock_stderr parameter is unused
Please apply the same fixes suggested for the previous test.
🧹 Nitpick comments (3)
cortex/llm_router.py (1)
315-317: Comment could be clearer.The comment on line 315 states "Track stats only if response is a valid object", but the code checks
self.track_costs, not response validity. At this point in the code flow,responseshould always be a validLLMResponseobject (otherwise an exception would have been raised earlier).Consider updating the comment to reflect the actual logic:
🔎 Proposed fix
- # Track stats only if response is a valid object + # Track stats if cost tracking is enabled if self.track_costs: self._update_stats(response)tests/test_cli.py (1)
29-36: Import pytest at module level, and clarify unused mock_stderr.Two minor improvements:
Import location:
pytestis imported inside the test function (line 33), but it's more conventional to import at the module level alongside other test dependencies.Unused parameter: The
mock_stderrparameter is accepted (line 31) but never used in the test body. If it's needed to suppress stderr output during the test, a comment explaining this would be helpful. Otherwise, consider removing the@patch("sys.stderr")decorator if stderr suppression isn't required.🔎 Proposed fix
Add pytest import at the top of the file:
import os import sys import unittest from unittest.mock import Mock, patch +import pytest sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))Then remove the local import:
@patch.dict(os.environ, {}, clear=True) @patch("sys.stderr") def test_get_api_key_not_found(self, mock_stderr): # Add 'mock_stderr' back here """When no API key or Ollama is set, it should raise ValueError""" - import pytest - with pytest.raises(ValueError, match="No AI provider configured"): self.cli._get_api_key()If stderr patching is needed, add a comment:
@patch.dict(os.environ, {}, clear=True) - @patch("sys.stderr") + @patch("sys.stderr") # Suppress error output during test def test_get_api_key_not_found(self, mock_stderr):cortex/cli.py (1)
64-87: Provider detection logic is correct, but consider consolidating error messages.The updated
_get_provider()method correctly:
- Checks for explicit provider configuration (lines 65-68)
- Auto-detects from API keys (lines 70-74)
- Only returns "ollama" when explicitly configured (lines 76-78)
- Raises
ValueErrorinstead of defaulting to Ollama (line 87)However, lines 85-86 may result in duplicate error output:
self._print_error("No AI provider configured.") # Line 85 - prints error cx_print(error_msg, "info") # Line 86 - prints same infoBoth methods call
cx_printinternally (based on_print_errorcallingcx_printper relevant_code_snippets). Consider removing line 85 or line 86 to avoid duplicate messages.🔎 Proposed fix
Option 1: Remove the redundant
_print_errorcall:# 4. If nothing is found, raise an error instead of sys.exit error_msg = ( "No AI provider configured. " "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama." ) - self._print_error("No AI provider configured.") cx_print(error_msg, "info") raise ValueError(error_msg)Option 2: Use
_print_errorfor the full message and removecx_print:error_msg = ( "No AI provider configured. " "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama." ) self._print_error(error_msg) - cx_print(error_msg, "info") raise ValueError(error_msg)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
cortex/cli.py, cortex/llm_router.py, tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py, tests/test_cli.py, cortex/llm_router.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (3)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print(49-69)
tests/test_cli.py (1)
cortex/cli.py (2)
_get_api_key (45-62), install (550-796)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
cortex/llm_router.py (3)
171-179: LGTM: Explicit Ollama enablement check is well-designed. The three-condition check correctly ensures Ollama is only enabled when:
- Provider is explicitly set to "ollama"
- OLLAMA_BASE_URL is set and non-empty (after stripping whitespace)
- An ollama_base_url parameter is provided
The use of strip() on line 177 prevents empty strings from being treated as configured, which aligns with the behavior in doctor.py (line 301 per AI summary).
188-204: LGTM: Conditional initialization with appropriate logging. The conditional initialization correctly:
- Only attempts to connect when ollama_enabled is True
- Logs success concisely (line 198)
- Shows a warning only if the user explicitly tried to use Ollama but connection failed (line 201)
- Uses INFO level when Ollama is not configured (line 204), avoiding unnecessary warnings in the terminal
This aligns well with the PR objective to prevent unwanted Ollama connection attempts and improve user experience.
308-311: Good: Explicit error handling for unsupported providers. The addition of the else clause with ValueError (lines 310-311) provides clear error messages for unsupported providers, improving debuggability.
Force-pushed from e871844 to 78e5125
Actionable comments posted: 1
🧹 Nitpick comments (3)
cortex/llm_router.py (1)
205-207: Simplify comment for production code. The comment on line 206 explains the reasoning behind using logger.info instead of logger.warning, but this level of detail is more suited for a code review discussion than production code.
🔎 Suggested simplification
  else:
-     # Change logger.warning to logger.info so it doesn't clutter the user's terminal
      logger.info("ℹ️ Ollama not configured, skipping local LLM initialization")
6-6: Remove unused import. The pytest module is imported but never used in this file. All tests use unittest.TestCase conventions.
🔎 Proposed fix
  import os
  import sys
  import unittest
  from unittest.mock import Mock, patch
-
- import pytest
32-37: Inconsistent assertion method with line 70. Line 37 uses self.assertIn to check if the exception message contains the text, while line 70 uses self.assertEqual for an exact match. Both tests verify the same exception message from similar code paths, so they should use the same assertion method for consistency.
🔎 Proposed fix for consistency
  @patch.dict(os.environ, {}, clear=True)
  @patch("sys.stderr")  # Suppress error output during test
  def test_get_api_key_not_found(self, mock_stderr):
      """When no AI provider is set, it should raise ValueError."""
      with self.assertRaises(ValueError) as context:
          self.cli._get_api_key()
-     self.assertIn("No AI provider configured", str(context.exception))
+     self.assertEqual("No AI provider configured", str(context.exception))
This matches the exact-match approach used in test_install_no_api_key at line 70.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
cortex/cli.py, cortex/llm_router.py, tests/test_cli.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_cli.py, cortex/llm_router.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (1)
cortex/cli.py (2)
_get_api_key (45-62), install (550-796)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
cortex/llm_router.py (2)
311-314: LGTM: Clear error handling for unsupported providers. The explicit else clause with ValueError provides clear feedback when an unsupported provider is encountered, improving debuggability.
318-320: Good clarification of conditional tracking. The comment update makes it explicit that stat tracking is conditional on the track_costs flag, improving code readability.
tests/test_cli.py (1)
65-70: LGTM: Test correctly validates provider requirement. The test properly verifies that install() raises a ValueError with the expected message when no AI provider is configured. The sys.stderr patch appropriately suppresses error output during testing.
Force-pushed from 78e5125 to 364b070
Hi @Anshgrover23, I have addressed the changes you mentioned and also updated the PR description accordingly.
Kesavaraja67
left a comment
Addressed the changes suggested by the Maintainer and Coderabbit.
@Anshgrover23 Sorry about that, I forgot to update the demo video. It's updated now, and I have used the cortex CLI commands.
Anshgrover23
left a comment
@Kesavaraja67 Sorry, I merged another PR before yours and it introduced conflicts.
Could you please resolve them? Will review then.
Actionable comments posted: 1
🧹 Nitpick comments (1)
cortex/cli.py (1)
89-92: Simplify the comment to focus on business logic. The comment "SMART FALLBACK: Alignment with llm_router whitespace handling" includes unnecessary implementation details. Consider simplifying to focus on the business logic.
🔎 Suggested comment improvement
- # 3. SMART FALLBACK: Alignment with llm_router whitespace handling
+ # 3. Fallback to Ollama only if explicitly configured
  ollama_url = os.environ.get("OLLAMA_BASE_URL", "").strip()
  if ollama_url:
      return "ollama"
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, tests/test_cli.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print(49-69)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/cli.py (2)
71-74: LGTM! Clear explicit provider check. The comment and logic correctly handle explicit provider selection via the CORTEX_PROVIDER environment variable.
70-100: Excellent fix - prevents unwanted Ollama connections. This change correctly implements the PR objective to skip Ollama connection attempts when it's not explicitly configured. The four-step decision path is logical:
- Explicit CORTEX_PROVIDER (lines 71-74) ✓
- Auto-detected provider from API keys (lines 76-87) ✓
- Fallback to Ollama only if OLLAMA_BASE_URL is set (lines 89-92) ✓
- Clear error instead of silent fallback (lines 94-100) ✓
The strip() on line 90 properly handles whitespace edge cases, and the explicit error path with ValueError is much better than the old silent fallback behavior.
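The four-step path reads roughly like this sketch (illustrative only; the allowed provider names are the ones listed earlier in the thread):

```python
import os


def get_provider(detected_provider: str | None = None) -> str:
    """Resolve a provider via the four steps above; never silently default to Ollama."""
    explicit = os.environ.get("CORTEX_PROVIDER", "").strip().lower()
    if explicit in {"ollama", "openai", "claude", "fake"}:        # 1. explicit choice
        return explicit
    if detected_provider:                                         # 2. auto-detected from API keys
        return detected_provider
    if os.environ.get("OLLAMA_BASE_URL", "").strip():             # 3. Ollama only if explicitly set
        return "ollama"
    raise ValueError(                                             # 4. clear error, no silent fallback
        "No AI provider configured. "
        "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or OLLAMA_BASE_URL."
    )
```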
Actionable comments posted: 1
♻️ Duplicate comments (1)
cortex/cli.py (1)
90-101: Ollama fallback aligns with llm_router, but duplicate messaging issue persists. The whitespace handling on line 91 correctly aligns with llm_router.py (line 182). However, the duplicate error messaging issue flagged in a past review remains: lines 96-100 print error messages, then line 101 raises ValueError, which callers like ask() (line 557) and install() (line 787) will catch and print again.
🔎 Proposed fix to eliminate duplicate messages
- # 4. Consolidated Error Message
- self._print_error("No AI provider configured.")
- cx_print(
-     "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.",
-     "info",
- )
- raise ValueError("No AI provider configured")
+ # 4. If no provider is found, raise an error with helpful guidance
+ raise ValueError(
+     "No AI provider configured. "
+     "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or OLLAMA_BASE_URL."
+ )
🧹 Nitpick comments (1)
tests/test_cli.py (1)
6-6: Unused import: MagicMock.
MagicMock is imported but only Mock appears to be used in the test file. Consider removing the unused import unless it's needed for future tests.
🔎 Proposed fix
- from unittest.mock import MagicMock, Mock, patch
+ from unittest.mock import Mock, patch
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
cortex/cli.py, cortex/llm_router.py, tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/llm_router.py, cortex/cli.py, tests/test_cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (3)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
info (200-202), warning (204-206)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print (49-69)
cortex/api_key_detector.py (1)
setup_api_key(520-546)
tests/test_cli.py (1)
cortex/cli.py (3)
_get_api_key (46-69), _get_provider (71-101), install (564-810)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 61-61: ValueError: No AI provider configured
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (11)
cortex/llm_router.py (5)
171-188: Well-implemented Ollama guardrail with proper whitespace handling. The logic correctly:
- Strips whitespace from both parameter and environment variable
- Uses truthiness checks to exclude empty strings
- Only enables Ollama when explicitly configured via CORTEX_PROVIDER, OLLAMA_BASE_URL, or constructor parameter
This addresses the previous review feedback about inconsistent empty string handling.
190-197: Good defensive URL handling. The trailing slash stripping on lines 193-195 prevents malformed URLs like http://localhost:11434//v1 when constructing the full endpoint on line 208. Setting ollama_base_url = None when disabled is a safe default.
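The normalization being praised here can be condensed to a one-liner; this is a hedged sketch with an invented helper name, shown only to make the "//v1" point concrete.

```python
def normalize_base_url(url: str | None, default: str = "http://localhost:11434") -> str:
    """Strip trailing slashes so appending /v1 never yields '...//v1'."""
    cleaned = (url or "").strip()
    return (cleaned or default).rstrip("/")


print(normalize_base_url("http://localhost:11434/"))  # -> http://localhost:11434
```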
203-218: Properly guarded Ollama initialization with appropriate logging. The conditional initialization prevents connection attempts when Ollama is not configured. Using logger.info (line 218) instead of a warning aligns with the PR objective to avoid unnecessary yellow warnings for cloud-only users.
322-325: Clean provider routing with explicit error handling. The explicit else branch raising ValueError for unsupported providers improves debugging and prevents silent failures.
329-331: Minor documentation improvement. The clarified comment accurately describes the conditional stats tracking behavior.
tests/test_cli.py (4)
20-27: Good test isolation pattern. The global Path.home patch prevents filesystem side effects and "Could not determine home directory" errors across all tests in the class.
55-68: Tests correctly follow the production call sequence. Calling _get_api_key() before _get_provider() mirrors production behavior, where _get_api_key sets self._detected_provider that _get_provider subsequently uses (lines 78-82 in cli.py).
85-92: Test correctly verifies install fails without provider. The test confirms that install() properly propagates the ValueError("No AI provider configured") from _get_api_key() when no provider is configured.
45-53: Test correctly verifies the no-provider error case. The test properly mocks setup_api_key to return failure and asserts that ValueError("No AI provider configured") is raised. The decorator order and parameter mapping are correct: decorators apply bottom-up, so mock_stderr corresponds to the innermost @patch("sys.stderr") and mock_setup corresponds to @patch("cortex.cli.setup_api_key"). The exception message matches the one raised in cortex/cli.py at lines 61 and 69.
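Since decorator ordering trips people up, here is a self-contained illustration of the bottom-up rule the comment relies on. The patched targets are stand-ins, not the project's.

```python
import os
import unittest
from unittest.mock import patch


class TestDecoratorOrder(unittest.TestCase):
    # Decorators apply bottom-up: the innermost @patch supplies the first mock argument.
    @patch.dict(os.environ, {}, clear=True)
    @patch("os.getcwd")      # outer patch -> second positional mock
    @patch("sys.stderr")     # inner patch -> first positional mock
    def test_parameter_mapping(self, mock_stderr, mock_getcwd):
        mock_getcwd.return_value = "/tmp"
        self.assertEqual(os.getcwd(), "/tmp")
        self.assertIsNone(os.environ.get("OLLAMA_BASE_URL"))


if __name__ == "__main__":
    unittest.main()
```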
cortex/cli.py (2)
46-69: Non-interactive guard prevents test crashes but has a subtle issue. The guard at lines 57-61 correctly prevents setup_api_key() from showing interactive menus during tests. However, there's a potential issue: the code path after the guard (lines 63-67) will never be reached if both API keys are unset, because we already raised on line 61.
The comment on line 63 says "Only if keys exist or we are interactive, try setup" but the logic actually requires at least one key to exist (interactive mode isn't detected). This is acceptable for the PR's goals but the comment is slightly misleading.
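A hedged sketch of the guard idea described here, written as a free function with assumed names rather than the project's method:

```python
import os


def get_api_key() -> str:
    """Return a cloud API key, refusing to launch interactive setup when none is present."""
    for env_var in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
        key = os.environ.get(env_var, "").strip()
        if key:
            return key
    # Guard: with no keys at all, raise instead of opening an interactive menu,
    # so tests and scripted runs fail fast with a clear message.
    raise ValueError("No AI provider configured")
```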
131-197: New notify() method is well-structured with proper validation. Good additions:
- Graceful handling of missing subcommand (line 134)
- Time format validation for DND (lines 174-179)
- Clear error messages for each failure case
The comment on lines 156-157 acknowledges the use of the private _save_config() method. Consider adding a public save() method to NotificationManager in a follow-up to improve the API.
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/test_cli_extended.py (1)
43-55: Test correctly validates the error case. The test properly verifies that _get_api_key raises ValueError("No AI provider configured") when no keys are available, which aligns with the PR's goal to fail gracefully when no provider is configured.
Optional cleanup: Unused mock parameters.
Both mock_stderr (line 46) and the mock_setup.return_value configuration (line 49) are unused in the actual test execution. Given that the environment is cleared and no API keys exist, _get_api_key raises the ValueError before calling setup_api_key, making the mock_setup configuration unreachable.
🔎 Optional simplification
- @patch.dict(os.environ, {}, clear=True)
- @patch("cortex.cli.setup_api_key")
- @patch("sys.stderr")
- def test_get_api_key_not_found(self, mock_stderr, mock_setup):
+ @patch.dict(os.environ, {}, clear=True)
+ def test_get_api_key_not_found(self):
      """Test that _get_api_key raises ValueError when no keys are provided."""
-     # Force setup_api_key to report no key found
-     mock_setup.return_value = (False, None, None)
-
-     # FIX: Wrap in assertRaises to catch the new guardrail error
      with self.assertRaises(ValueError) as context:
          self.cli._get_api_key()
      self.assertEqual("No AI provider configured", str(context.exception))
tests/integration/test_end_to_end.py (1)
119-127: Consider extending the _run helper to reduce duplication. Lines 121-122 duplicate the effective_env creation logic from _run (lines 29-31), and lines 125-127 duplicate the bootstrap command modification logic (lines 34-36). This test bypasses the _run helper because it needs dev dependencies, but the helper could be extended to support both use cases.
🔎 Proposed refactor to use the helper method
Extend the _run method to accept an optional bootstrap command parameter:
- def _run(self, command: str, env: dict | None = None) -> DockerRunResult:
+ def _run(
+     self,
+     command: str,
+     env: dict | None = None,
+     bootstrap: str = PIP_BOOTSTRAP
+ ) -> DockerRunResult:
      effective_env = dict(BASE_ENV)
      if env:
          effective_env.update(env)
-     # Apply the fix globally to the helper function so all tests pass
-     safe_bootstrap = PIP_BOOTSTRAP.replace(
-         "pip install", "pip install --root-user-action=ignore"
-     )
+     safe_bootstrap = _make_safe_bootstrap(bootstrap)
      return run_in_docker(
          DEFAULT_IMAGE,
          f"{safe_bootstrap} && {command}",
          env=effective_env,
          mounts=[MOUNT],
          workdir="/workspace",
      )
Then simplify the test:
  def test_project_tests_run_inside_container(self):
      """The unified test runner should pass within the container."""
      env = {
          "CORTEX_PROVIDER": "fake",
          "CORTEX_FAKE_COMMANDS": json.dumps({"commands": ["echo plan"]}),
      }
-     # FIX 1: Define effective_env to fix the NameError
-     effective_env = dict(BASE_ENV)
-     effective_env.update(env)
-
-     # FIX 2: Suppress the pip root warning to prevent result.succeeded() from failing
-     bootstrap_cmd = PIP_BOOTSTRAP_DEV.replace(
-         "pip install", "pip install --root-user-action=ignore"
-     )
-
-     result = run_in_docker(
-         DEFAULT_IMAGE,
-         f"{bootstrap_cmd} && pytest tests/ -v --ignore=tests/integration",
-         env=effective_env,
-         mounts=[MOUNT],
-         workdir="/workspace",
-     )
+     result = self._run(
+         "pytest tests/ -v --ignore=tests/integration",
+         env=env,
+         bootstrap=PIP_BOOTSTRAP_DEV
+     )
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
tests/integration/test_end_to_end.py, tests/test_cli_extended.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_cli_extended.py, tests/integration/test_end_to_end.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli_extended.py, tests/integration/test_end_to_end.py
🧬 Code graph analysis (2)
tests/test_cli_extended.py (2)
tests/test_cli.py (1)
test_get_api_key_not_found (48-53)
cortex/cli.py (1)
_get_api_key(46-69)
tests/integration/test_end_to_end.py (1)
tests/integration/docker_utils.py (1)
run_in_docker(52-108)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/integration/test_end_to_end.py (1)
122-127: Consider rephrasing the comments for a more professional tone. The "FIX 1" and "FIX 2" prefixes are informal. Consider rephrasing them as regular explanatory comments that describe the purpose rather than framing them as fixes.
🔎 Suggested rephrasing
- # FIX 1: Define effective_env to fix the NameError
+ # Create environment with BASE_ENV and test-specific overrides
  effective_env = dict(BASE_ENV)
  effective_env.update(env)
- # FIX 2: Suppress the pip root warning to prevent result.succeeded() from failing
+ # Use safe bootstrap to suppress pip root-user warnings in Docker
  bootstrap_cmd = _make_safe_bootstrap(PIP_BOOTSTRAP_DEV)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/integration/test_end_to_end.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/integration/test_end_to_end.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/integration/test_end_to_end.py
🧬 Code graph analysis (1)
tests/integration/test_end_to_end.py (2)
tests/integration/docker_utils.py (1)
run_in_docker (52-108)
cortex/cli.py (1)
env(955-1001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (2)
tests/integration/test_end_to_end.py (2)
37-42: LGTM! The refactoring successfully eliminates duplication by using the extracted helper function. The implementation is clean and addresses the previous review feedback.
24-26: Add return type hint. The function signature is missing a return type annotation. Per the coding guidelines, type hints are required in Python code.
🔎 Proposed fix
- def _make_safe_bootstrap(bootstrap_cmd: str) -> str:
+ def _make_safe_bootstrap(bootstrap_cmd: str) -> str:
      """Add --root-user-action=ignore to pip install commands for Docker root contexts."""
      return bootstrap_cmd.replace("pip install", "pip install --root-user-action=ignore")
Likely an incorrect or invalid review comment.
Kesavaraja67
left a comment
Addressed the requested changes.
@Anshgrover23 I have resolved the conflict and addressed the changes. Can you now review my PR and merge it?
Anshgrover23
left a comment
@Kesavaraja67 Please follow the updated contribution guidelines. We’ve added new rules around AI usage in CONTRIBUTING.md. 2e68b29
Actionable comments posted: 1
♻️ Duplicate comments (3)
cortex/cli.py (3)
71-100: Add missing docstring for the _get_provider() method. This method lacks a docstring, which is required per PEP 257 and the coding guidelines.
🔎 Proposed fix
  def _get_provider(self) -> str:
+     """
+     Determine the AI provider from environment or auto-detection.
+
+     Returns:
+         Provider name ("ollama", "openai", "claude", or "fake").
+
+     Raises:
+         ValueError: If no provider is configured.
+     """
      # 1. Check environment variable for explicit provider choice
As per coding guidelines, docstrings are required for all public APIs. Based on past review comments, this was previously flagged.
113-127: Add missing docstrings for utility methods. The utility methods _print_error(), _print_success(), _animate_spinner(), and _clear_line() lack docstrings, which are required per PEP 257 and the coding guidelines.
🔎 Proposed fix
  def _print_error(self, message: str):
+     """Print an error message using cx_print."""
      cx_print(f"Error: {message}", "error")

  def _print_success(self, message: str):
+     """Print a success message using cx_print."""
      cx_print(message, "success")

  def _animate_spinner(self, message: str):
+     """Display an animated spinner with a message."""
      sys.stdout.write(f"\r{self.spinner_chars[self.spinner_idx]} {message}")
      sys.stdout.flush()
      self.spinner_idx = (self.spinner_idx + 1) % len(self.spinner_chars)
      time.sleep(0.1)

  def _clear_line(self):
+     """Clear the current terminal line."""
      sys.stdout.write("\r\033[K")
      sys.stdout.flush()
As per coding guidelines, docstrings are required for all public APIs. Based on past review comments, this was previously flagged.
46-69: Address past review feedback: Fix return type annotation and add docstring. The return type annotation -> str | None is incorrect since this method never returns None; it raises ValueError in all error paths. The annotation should be -> str. Additionally, a docstring is required per PEP 257 and the coding guidelines for this public method.
🔎 Proposed fix
- def _get_api_key(self) -> str | None:
+ def _get_api_key(self) -> str:
+     """
+     Retrieve API key from environment or provider setup.
+
+     Returns:
+         API key string for the configured provider.
+
+     Raises:
+         ValueError: If no API provider is configured.
+     """
      # 1. Check explicit provider override first
As per coding guidelines, type hints and docstrings are required for all public APIs. Based on past review comments, this issue was previously flagged and remains unaddressed.
🧹 Nitpick comments (1)
cortex/cli.py (1)
130-196: Add docstring for the notify() method. The implementation of the notification handling is well-structured with proper input validation and error handling. However, the method lacks a docstring as required by PEP 257 and the coding guidelines.
Note: The use of mgr._save_config() (lines 157, 163, 182) is acceptable as acknowledged in the past review comments, given that NotificationManager doesn't expose a public save method.
🔎 Proposed addition
  def notify(self, args):
-     """Handle notification commands"""
+     """
+     Handle notification management commands.
+
+     Args:
+         args: Namespace with notify_action and related parameters.
+
+     Returns:
+         Exit code (0 for success, 1 for error).
+     """
      # Addressing CodeRabbit feedback: Handle missing subcommand gracefully
As per coding guidelines, docstrings are required for all public APIs.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, tests/test_cli_extended.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_cli_extended.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print (49-69)
cortex/api_key_detector.py (1)
setup_api_key(520-546)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (2)
cortex/cli.py (2)
90-93: LGTM: Smart Ollama fallback logic. The implementation correctly gates Ollama usage by only returning "ollama" as a provider when OLLAMA_BASE_URL is explicitly set in the environment. This aligns with the PR objective to skip Ollama connection attempts when not configured.
The .strip() call ensures whitespace-only values are treated as unset, which is a good defensive practice.
54-61: LGTM: API key guard prevents unexpected prompts. The guard logic correctly prevents setup_api_key() from showing interactive prompts when no API keys are configured. This addresses the PR objective to fail gracefully with clear messaging, directing users to either run cortex wizard or use Ollama offline mode.
cortex wizardcommand) while preventing unexpected prompts during install/ask operations.
Actionable comments posted: 1
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/branding.py (1)
cx_print (49-69)
cortex/api_key_detector.py (1)
setup_api_key(520-546)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (2)
cortex/cli.py (2)
114-132: LGTM: Docstrings added per PEP 257. All utility methods now have concise single-line docstrings that clearly describe their purpose. Type hints are properly maintained (e.g., message: str on line 122).
72-101: Provider resolution logic is sound, but depends on fixing _get_api_key. The method correctly implements the fallback chain: explicit provider → detected provider → environment variables → OLLAMA_BASE_URL → error. The "SMART FALLBACK" at lines 92-95 properly checks for Ollama configuration.
However, this logic is only reachable if _get_api_key() succeeds. For Ollama-only users (those with only OLLAMA_BASE_URL configured), _get_api_key() raises a ValueError at lines 58-62 because it checks for OPENAI_API_KEY or ANTHROPIC_API_KEY before checking for Ollama configuration. Users will never reach this provider detection code until _get_api_key() is fixed to handle Ollama-only setups.
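One possible shape for handling Ollama-only setups, sketched here purely as an illustration of the ordering fix the comment asks for (names and placement are assumptions, not the project's code):

```python
import os


def get_api_key_or_none() -> str | None:
    """Treat an explicit Ollama configuration as 'no cloud key needed'."""
    ollama_configured = (
        os.environ.get("OLLAMA_BASE_URL", "").strip()
        or os.environ.get("CORTEX_PROVIDER", "").strip().lower() == "ollama"
    )
    if ollama_configured:
        return None  # Ollama runs locally and needs no API key
    for env_var in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
        key = os.environ.get(env_var, "").strip()
        if key:
            return key
    raise ValueError("No AI provider configured")
```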
Kesavaraja67
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Addressed the changes mentioned.
@Anshgrover23 I have addressed all the latest critical and minor feedback. Will you review my PR and merge it?
Anshgrover23
left a comment
@Kesavaraja67 Thanks for the PR. This is already handled by a previously merged change that detects configured LLM providers and required keys/models before execution. With that in place, Ollama is skipped automatically when not configured or not running, so this logic is redundant now. Closing as not needed.




This PR fixes the issue where Cortex attempts to connect to a local Ollama instance even when not configured. The system now verifies user intent before initializing the Ollama client or running health diagnostics.
Key Changes
Logic Guardrail: Added ollama_enabled check in LLMRouter to verify environment variables (CORTEX_PROVIDER, OLLAMA_BASE_URL) before attempting a connection.
Graceful Skip: Replaced connection warnings with a clean [INFO] log when Ollama is not configured.
Health Check Update: Modified doctor.py to skip Ollama diagnostics if it is not the active provider, avoiding unnecessary yellow warnings for cloud-only users.
Testing
Environment: Verified on a system with port 11434 closed (TcpTestSucceeded : False).
Result: python cortex/llm_router.py now bypasses Ollama and proceeds directly to initializing cloud providers (Claude/Anthropic).
Regression: All existing tests in tests/test_llm_router.py and tests/test_doctor.py passed.
Demo Video:
cli.py.-.cortex.-.Visual.Studio.Code.2026-01-03.13-20-13.mp4
Fixes #316
AI Disclosure
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation