
Conversation

Kesavaraja67 (Contributor) commented Jan 2, 2026

This PR fixes the issue where Cortex attempts to connect to a local Ollama instance even when not configured. The system now verifies user intent before initializing the Ollama client or running health diagnostics.

Key Changes

  1. Logic Guardrail: Added an ollama_enabled check in LLMRouter that verifies the environment variables (CORTEX_PROVIDER, OLLAMA_BASE_URL) before attempting a connection; a sketch follows this list.

  2. Graceful Skip: Replaced connection warnings with a clean [INFO] log when Ollama is not configured.

  3. Health Check Update: Modified doctor.py to skip Ollama diagnostics if it is not the active provider, avoiding unnecessary yellow warnings for cloud-only users.
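
For reference, here is a minimal sketch of the guardrail from item 1. The constructor shape and logging setup are assumptions; only the enablement conditions and the ollama_enabled / ollama_base_url names are taken from this PR.

import logging
import os

logger = logging.getLogger(__name__)


class LLMRouter:
    def __init__(self, ollama_base_url: str | None = None) -> None:
        # Sketch: only the enablement conditions below mirror the PR.
        cortex_provider = os.environ.get("CORTEX_PROVIDER", "").lower()
        ollama_url_env = os.environ.get("OLLAMA_BASE_URL", "")

        # Enable Ollama only when the user asked for it explicitly.
        self.ollama_enabled = (
            cortex_provider == "ollama"
            or bool(ollama_url_env and ollama_url_env.strip())
            or bool(ollama_base_url)
        )

        if self.ollama_enabled:
            # Parameter wins over env var; the default applies only when enabled.
            self.ollama_base_url = (
                ollama_base_url or ollama_url_env or "http://localhost:11434"
            )
            logger.info("Ollama enabled at %s", self.ollama_base_url)
        else:
            self.ollama_base_url = None
            logger.info("Ollama not configured, skipping local LLM initialization")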

Testing

  1. Environment: Verified on a system with port 11434 closed (PowerShell Test-NetConnection reported TcpTestSucceeded : False); a portable check is sketched after this list.

  2. Result: python cortex/llm_router.py now bypasses Ollama and proceeds directly to initializing cloud providers (Claude/Anthropic).

  3. Regression: All existing tests in tests/test_llm_router.py and tests/test_doctor.py passed.
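
The closed-port condition in item 1 was verified with PowerShell's Test-NetConnection. A portable Python equivalent (a hypothetical helper, not part of this PR) could look like:

import socket


def ollama_port_open(host: str = "localhost", port: int = 11434) -> bool:
    """Hypothetical helper: True if something is listening on the Ollama port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        return sock.connect_ex((host, port)) == 0


if __name__ == "__main__":
    # On the test machine described above this prints False (port 11434 closed).
    print(ollama_port_open())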

Demo Video:

cli.py.-.cortex.-.Visual.Studio.Code.2026-01-03.13-20-13.mp4

Fixes #316

AI Disclosure

  • AI/IDE/Agents used: Gemini (Google), to assist in debugging the CLI logic, resolving unit test path errors, and refactoring the Docker integration test bootstrap command to follow DRY principles. I have manually reviewed and tested all AI-suggested changes to ensure they meet project standards and pass all local tests.

Summary by CodeRabbit

  • New Features

    • CLI now fails fast with a clear error when no AI provider is configured.
    • New CLI "notify" command to manage notifications.
    • Router exposes an indicator when Ollama is enabled.
  • Bug Fixes

    • Ollama health checks and client setup only run when configured; URL handling and info/warning messages improved.
    • Routing now raises an explicit error for unsupported providers.
  • Tests

    • Adjusted tests for no-provider behavior, stable home-path handling, and safer bootstrap in end-to-end tests.
  • Documentation

    • Clarified health-check behavior and default URL handling.


coderabbitai bot (Contributor) commented Jan 2, 2026

📝 Walkthrough

Doctor, LLM router, and CLI now gate Ollama usage: Ollama checks and client initialization run only when Ollama is explicitly configured (via CORTEX_PROVIDER == "ollama", OLLAMA_BASE_URL, or a constructor argument). The CLI raises when no provider is configured; a new notify(...) command was added.

Changes

  • Ollama Health Check (cortex/doctor.py): Skip Ollama validation unless explicitly configured (CORTEX_PROVIDER == "ollama" or a non-empty OLLAMA_BASE_URL). When enabled, use OLLAMA_BASE_URL or fall back to http://localhost:11434 and query /api/tags. _check_api_keys now discards the third return value (assigned to _). A sketch of this check follows the list.
  • Ollama Initialization Guard & Routing (cortex/llm_router.py): Adds the public attribute ollama_enabled; the Ollama client is instantiated only when enabled. Determines ollama_base_url from the constructor parameter, environment variable, or default when enabled, and logs info/warning for connection attempts. complete() now raises ValueError("Unsupported provider: ...") for unknown or unsupported providers.
  • CLI Provider Selection, Errors & Notify (cortex/cli.py): _get_api_key now raises when no provider is configured; provider detection considers the environment, API keys, and OLLAMA_BASE_URL (no unconditional Ollama fallback). Adds notify(self, args) to CortexCLI and explicit user guidance when no AI provider is found.
  • Tests & Integration (tests/test_cli.py, tests/test_cli_extended.py, tests/integration/test_end_to_end.py): Tests updated to use a global Path.home patch and to expect ValueError("No AI provider configured") when no provider is configured; mocks and decorator orders adjusted. End-to-end bootstrap uses a safe helper that appends --root-user-action=ignore.
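
A minimal sketch of the gated health check from the doctor.py row above; the function name and the (level, message) return convention are assumptions, while the gating conditions and the /api/tags query follow the summary:

import os
import urllib.request


def check_ollama() -> tuple[str, str]:
    """Sketch only: name and return convention are assumed."""
    provider = os.environ.get("CORTEX_PROVIDER", "").lower()
    ollama_url = os.environ.get("OLLAMA_BASE_URL", "")

    if provider != "ollama" and not ollama_url:
        # Not configured: skip quietly instead of emitting a warning.
        return ("info", "Ollama not configured; skipping check")

    base = ollama_url or "http://localhost:11434"
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=3) as resp:
            return ("ok", f"Ollama reachable at {base} (HTTP {resp.status})")
    except OSError as exc:
        return ("warn", f"Ollama configured but unreachable: {exc}")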

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant CLI as CLI
  participant Env as Env/Config
  participant Doctor as doctor.py
  participant Router as llm_router.py
  participant Ollama as Ollama Service
  participant Cloud as Cloud Provider

  CLI->>Env: read CORTEX_PROVIDER / OLLAMA_BASE_URL / API keys
  CLI->>Doctor: run health checks
  Doctor->>Env: check CORTEX_PROVIDER / OLLAMA_BASE_URL
  alt Ollama configured
    Doctor->>Ollama: GET /api/tags (base URL from param/env or fallback)
    Ollama-->>Doctor: tags / OK
  else Ollama not configured
    Doctor-->>CLI: INFO: skip Ollama check
  end

  CLI->>Router: request completion (provider from Env/args)
  Router->>Env: determine ollama_enabled and base URL
  alt provider == "ollama" and ollama_enabled
    Router->>Ollama: send completion request
    Ollama-->>Router: response
    Router-->>CLI: return Ollama response
  else provider == "ollama" and not enabled
    Router-->>CLI: raise ValueError("Unsupported provider: ...")
  else provider != "ollama"
    Router->>Cloud: call external provider (OpenAI/Anthropic)
    Cloud-->>Router: response
    Router-->>CLI: return response
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • mikejmorgan-ai
  • Anshgrover23

Poem

🐇 I sniffed the env and twitched my ear tonight,
I'll only wake Ollama if you grant the light.
If no URL's given, I skip the knock on the door,
Cloud keys hum on—no sleepy localhost roar.
🥕✨ — a cheerful hop from your rabbit reviewer.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Out of Scope Changes check (❓ Inconclusive): Changes are primarily focused on Ollama conditional initialization. However, cli.py includes substantial refactoring of the _get_api_key() and _get_provider() methods plus a new notify() method that extends beyond the core Ollama fix, potentially introducing unrelated functionality. Clarify whether the CLI method refactoring and new notify() functionality are necessary for the Ollama fix or constitute separate feature additions that should be in a separate PR.
✅ Passed checks (3 passed)
  • Title check (✅ Passed): The title clearly summarizes the main change: skipping Ollama connection attempts when Ollama is not configured, which is the core objective of this PR.
  • Linked Issues check (✅ Passed): The code changes directly address all objectives from #316: an ollama_enabled check to skip connection when not configured [doctor.py, llm_router.py], graceful handling without warnings [doctor.py, llm_router.py], and proceeding to cloud providers when available [cli.py, llm_router.py].
  • Description check (✅ Passed): The PR description provides clear context, key changes, testing evidence, and AI disclosure, but lacks the required template structure with checklist items.
✨ Finishing touches
  • 📝 Generate docstrings


coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
cortex/llm_router.py (1)

185-202: Conditional initialization achieves PR objective.

The guard correctly prevents Ollama client initialization when not configured, replacing the previous warning with an informational message. This improves the user experience for cloud-only users.

Minor suggestions for improved diagnostics:

  1. Line 196: Consider including the URL in the success message for debugging: logger.info("✅ Ollama client initialized (%s at %s)", self.ollama_model, self.ollama_base_url)
  2. Line 199: The message "connection failed" is slightly misleading since OpenAI client initialization doesn't immediately connect to the server. Consider: logger.warning("⚠️ Ollama enabled but client initialization failed: %s", e)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f18c7dd and bc99f4f.

📒 Files selected for processing (2)
  • cortex/doctor.py
  • cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm_router.py
  • cortex/doctor.py
🧬 Code graph analysis (2)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
cortex/doctor.py (1)
cortex/validators.py (1)
  • validate_api_key (58-86)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
🔇 Additional comments (3)
cortex/doctor.py (3)

294-303: Guard logic correctly prevents unnecessary checks.

The conditional logic appropriately skips Ollama diagnostics when neither CORTEX_PROVIDER=ollama nor OLLAMA_BASE_URL is configured. The use of INFO-level logging for the skip case is better than the previous warning, improving UX for cloud-only users.

Note: Empty strings are handled correctly here since not ollama_url evaluates to True for "", causing the check to be skipped.


318-320: URL fallback logic is sound.

The use of ollama_url or "http://localhost:11434" correctly falls back to localhost when the configured URL is empty or None.
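
A quick illustration of why the or fallback covers both None and the empty string:

# Falsy configured values fall through to the default URL.
for configured in (None, "", "http://my-host:11434"):
    base = configured or "http://localhost:11434"
    print(repr(configured), "->", base)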


334-334: Idiomatic use of underscore for unused value.

Replacing the unused error_message variable with _ follows Python conventions for explicitly ignoring tuple unpacking values.

Kesavaraja67 (Contributor Author)

@Anshgrover23 Can you review my PR?

Anshgrover23 self-assigned this on Jan 2, 2026
Anshgrover23 (Collaborator) left a comment


(screenshot) @Kesavaraja67 Please test your implementation locally before requesting a review.

You also need to update the default / fallback provider logic here:

def _get_provider(self) -> str:

Currently, Ollama is treated as the fallback provider even when no API key is configured, which needs to be addressed.

After making these changes, please record a short demo video using the cortex CLI commands. Avoid running the Python files directly.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
cortex/cli.py (1)

76-85: Good: Prevents unwanted Ollama connection attempts.

The logic correctly requires explicit Ollama configuration (OLLAMA_BASE_URL or CORTEX_PROVIDER=ollama) before returning "ollama". The fallthrough to sys.exit(1) with a clear error message addresses the PR objective to fail gracefully when no provider is configured.

However, calling sys.exit(1) from within a method that declares return type str is unconventional. Consider raising an exception instead for better testability.

🔎 Optional: Raise exception instead of sys.exit for testability
-        # 4. If nothing is found, don't just guess "ollama".
-        # Print a clear message and exit gracefully
-        self._print_error("No AI provider configured.")
-        cx_print("Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.", "info")
-        sys.exit(1) # Stop execution here
+        # 4. If nothing is found, don't just guess "ollama".
+        raise RuntimeError(
+            "No AI provider configured. "
+            "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
+        )

The exception can be caught in calling code (like main()) which already handles RuntimeError.
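
A minimal sketch of that caller-side handling; the run() dispatch method is an assumption and the real entry point may differ:

import sys

from cortex.cli import CortexCLI


def main() -> int:
    try:
        CortexCLI().run()  # hypothetical dispatch method
        return 0
    except (RuntimeError, ValueError) as exc:
        print(f"Error: {exc}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())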

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c42db0b and fd3ebc2.

📒 Files selected for processing (2)
  • cortex/cli.py
  • cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/cli.py
  • cortex/llm_router.py
🧬 Code graph analysis (2)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
cortex/llm/interpreter.py (1)
  • parse (319-377)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
🪛 GitHub Actions: CI
cortex/cli.py

[error] 81-81: ruff check failed: Trailing whitespace (W291) detected in cortex/cli.py:81. Command: 'ruff check . --output-format=github'.

🪛 GitHub Check: lint
cortex/cli.py

[failure] 81-81: Ruff (W291)
cortex/cli.py:81:61: W291 Trailing whitespace


[failure] 601-601: Ruff (W291)
cortex/cli.py:601:61: W291 Trailing whitespace

cortex/llm_router.py

[failure] 176-176: Ruff (W291)
cortex/llm_router.py:176:43: W291 Trailing whitespace


[failure] 174-174: Ruff (W293)
cortex/llm_router.py:174:1: W293 Blank line contains whitespace

🪛 GitHub Check: Lint
cortex/cli.py

[failure] 81-81: Ruff (W291)
cortex/cli.py:81:61: W291 Trailing whitespace


[failure] 601-601: Ruff (W291)
cortex/cli.py:601:61: W291 Trailing whitespace

cortex/llm_router.py

[failure] 176-176: Ruff (W291)
cortex/llm_router.py:176:43: W291 Trailing whitespace


[failure] 174-174: Ruff (W293)
cortex/llm_router.py:174:1: W293 Blank line contains whitespace

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/llm_router.py (2)

171-204: Ollama guardrail and initialization logic looks correct.

The implementation correctly:

  • Checks for explicit configuration via CORTEX_PROVIDER == "ollama", non-empty OLLAMA_BASE_URL, or constructor argument
  • Handles empty strings properly with bool(ollama_url_env) and ollama_url_env.strip() != ""
  • Falls back to default URL only when Ollama is enabled
  • Logs at INFO level (not WARNING) when Ollama is skipped, avoiding terminal noise

This addresses the PR objective to skip Ollama connection when not configured.


319-320: Good addition of explicit error path for unsupported providers.

Adding raise ValueError for unknown providers improves error handling and makes the code more defensive.

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from fd3ebc2 to 4ffbad6 on January 3, 2026, 08:44
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

♻️ Duplicate comments (4)
cortex/cli.py (2)

598-603: Duplicate: Hardcoded fake provider command and trailing whitespace.

Line 601 has trailing whitespace causing lint failures. Additionally, the hardcoded command ["sudo apt-get install -y nginx"] duplicates logic that should be shared with llm_router.py (lines 311-318). Both issues were already flagged in previous reviews.


81-85: Critical: sys.exit(1) breaks tests and violates separation of concerns.

Line 85 calls sys.exit(1), which terminates the entire process including test runners. The pipeline failure confirms this is causing CI failures. Methods should raise exceptions instead of calling sys.exit() directly—let the caller (e.g., main()) decide whether to exit.

🔎 Proposed fix
         # 4. If nothing is found, don't just guess "ollama".
         # Print a clear message and exit gracefully
         self._print_error("No AI provider configured.")
         cx_print("Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.", "info")
-        sys.exit(1)  # Stop execution here
+        raise ValueError("No AI provider configured. Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.")

Then ensure callers handle the exception appropriately (e.g., in main() catch ValueError and exit with code 1).

cortex/llm_router.py (2)

171-179: Duplicate: Trailing whitespace on lines 174 and 176.

Static analysis reports trailing whitespace on lines 174 and 176, causing pipeline failures. This was already flagged in previous reviews.


308-318: Critical (Duplicate): Hardcoded response overwrites actual Ollama completion.

Lines 311-318 overwrite the real Ollama response from line 309 with a hardcoded synthetic LLMResponse containing "sudo apt-get install -y nginx" and provider set to CLAUDE (not OLLAMA). This breaks all Ollama completions and appears to be accidentally committed debug/demo code. This critical issue was already flagged in previous reviews.

🔎 Proposed fix - remove hardcoded override
             elif routing.provider == LLMProvider.OLLAMA:
                 response = self._complete_ollama(messages, temperature, max_tokens, tools)
-                # Explicitly return an LLMResponse object so .get() and other attributes work
-                response = LLMResponse(
-                    content="sudo apt-get install -y nginx",
-                    provider=LLMProvider.CLAUDE,
-                    model="fake-model",
-                    tokens_used=0,
-                    cost_usd=0.0,
-                    latency_seconds=0.1,
-                )
             else:
                 raise ValueError(f"Unsupported provider: {routing.provider}")
🧹 Nitpick comments (1)
cortex/llm_router.py (1)

175-179: Minor: Redundant empty string check.

Line 177 checks ollama_url_env.strip() != "" after already checking bool(ollama_url_env) in the same expression. For a non-None string, bool("") is False, so the strip check is redundant. Consider simplifying to just the truthy check for cleaner code.

🔎 Suggested simplification
         self.ollama_enabled = (
             cortex_provider == "ollama" or
-            (bool(ollama_url_env) and ollama_url_env.strip() != "") or
+            bool(ollama_url_env and ollama_url_env.strip()) or
             bool(ollama_base_url)
         )

This checks truthiness after stripping, combining both checks cleanly.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd3ebc2 and 4ffbad6.

📒 Files selected for processing (2)
  • cortex/cli.py
  • cortex/llm_router.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm_router.py
  • cortex/cli.py
🧬 Code graph analysis (2)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
cortex/llm/interpreter.py (1)
  • parse (319-377)
🪛 GitHub Actions: CI
cortex/cli.py

[error] 85-85: _get_provider() called SystemExit(1) because no AI provider is configured (No AI provider configured).

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
🔇 Additional comments (2)
cortex/llm_router.py (2)

187-204: LGTM: Ollama initialization properly guarded.

The conditional initialization of Ollama clients (lines 187-204) correctly gates access behind the ollama_enabled flag and handles connection failures gracefully with appropriate logging levels (warning when enabled but failing, info when not configured).


324-324: LGTM: Clarifying comment improves code documentation.

The added comment on line 324 helpfully documents that stats tracking requires a valid response object, improving code readability.

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from 4ffbad6 to 6128711 on January 3, 2026, 09:00
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
cortex/llm_router.py (3)

180-182: Handle whitespace-only strings defensively.

If OLLAMA_BASE_URL or ollama_base_url parameter contains only whitespace (e.g., " "), line 182 will store that whitespace string in self.ollama_base_url because whitespace is truthy in Python. While this won't cause immediate issues (since ollama_enabled correctly treats whitespace-only strings as disabled on line 177), it's poor data hygiene.

🔎 Proposed fix to normalize whitespace-only strings
 # Use the parameter, or the env var, or the default—guaranteeing no empty strings
-self.ollama_base_url = ollama_base_url or ollama_url_env or "http://localhost:11434"
+base_url_candidate = ollama_base_url or ollama_url_env
+self.ollama_base_url = (base_url_candidate.strip() if base_url_candidate and base_url_candidate.strip() else None) or "http://localhost:11434"

Or more concisely:

 # Use the parameter, or the env var, or the default—guaranteeing no empty strings
-self.ollama_base_url = ollama_base_url or ollama_url_env or "http://localhost:11434"
+self.ollama_base_url = (ollama_base_url or ollama_url_env or "").strip() or "http://localhost:11434"

316-318: Misleading comment about response validation.

The comment "Track stats only if response is a valid object" (line 316) is misleading. The code only checks the track_costs flag, not whether response is a valid object. If validation of the response was intended, it should be implemented; otherwise, the comment should be corrected.

🔎 Suggested fix

Either remove the misleading comment:

-# Track stats only if response is a valid object
 if self.track_costs:
     self._update_stats(response)

Or add actual validation if it's needed:

-# Track stats only if response is a valid object
-if self.track_costs:
+if self.track_costs and response:
     self._update_stats(response)

171-179: Trailing whitespace has been properly cleaned up.

No trailing whitespace detected in the Ollama guardrail section—verification passes.

Optional: Simplify redundant bool() wrappers for consistency.

The bool() wrappers on lines 177-178 are redundant since the or operator evaluates truthiness directly. Consider removing them for clarity, and apply the same pattern to ollama_base_url:

Suggested simplification
 self.ollama_enabled = (
     cortex_provider == "ollama"
-    or bool(ollama_url_env and ollama_url_env.strip())
-    or bool(ollama_base_url)
+    or (ollama_url_env and ollama_url_env.strip())
+    or ollama_base_url
 )
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4ffbad6 and 6128711.

📒 Files selected for processing (2)
  • cortex/cli.py
  • cortex/llm_router.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm_router.py
🧬 Code graph analysis (1)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
🔇 Additional comments (1)
cortex/llm_router.py (1)

187-204: LGTM! Conditional initialization aligns with PR objectives.

The conditional Ollama initialization correctly implements the guardrail to skip Ollama when not configured. The info-level logging (line 204) appropriately reduces noise for users who don't use local LLMs.

Note on exception handling: The try-except block (lines 199-201) may not catch actual Ollama connection failures, since OpenAI client instantiation typically doesn't connect until the first API call. Actual connection errors will surface later in _complete_ollama() (line 488), which is acceptable behavior.

If you want to verify that connection failures are properly handled, confirm that _complete_ollama() (lines 444-491) includes appropriate error handling for when Ollama is unreachable.

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from 6128711 to a74e72d on January 3, 2026, 11:06
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
cortex/llm_router.py (1)

308-312: Remove leftover development comment.

Line 309 contains a development note referencing past review feedback that should be removed.

🔎 Proposed fix
             elif routing.provider == LLMProvider.OLLAMA:
-                # FIX: Remove the hardcoded LLMResponse block that followed this
                 response = self._complete_ollama(messages, temperature, max_tokens, tools)
             else:
                 raise ValueError(f"Unsupported provider: {routing.provider}")
🧹 Nitpick comments (5)
cortex/llm_router.py (1)

316-318: Misleading comment on stats tracking.

The comment "Track stats only if response is a valid object" suggests a validity check, but the code unconditionally calls _update_stats(response). The comment is misleading since there's no explicit check—the implicit guarantee is that exceptions would prevent reaching this line. Consider removing or clarifying the comment.

🔎 Proposed fix
-            # Track stats only if response is a valid object
             if self.track_costs:
                 self._update_stats(response)
tests/test_cli.py (2)

29-36: Move pytest import to module level and consider using assertRaises.

Importing pytest inside each test method is inefficient. Since this is a unittest.TestCase, you could use self.assertRaises for consistency, or move the import to the module level if you prefer pytest.raises.

Also, mock_stderr is declared but unused in the test body.

🔎 Option 1: Use unittest's assertRaises
     @patch.dict(os.environ, {}, clear=True)
     @patch("sys.stderr")
-    def test_get_api_key_not_found(self, mock_stderr):  # Add 'mock_stderr' back here
-        """When no API key or Ollama is set, it should raise ValueError"""
-        import pytest
-
-        with pytest.raises(ValueError, match="No AI provider configured"):
+    def test_get_api_key_not_found(self, mock_stderr):
+        """When no API key or Ollama is set, it should raise ValueError."""
+        with self.assertRaises(ValueError) as context:
             self.cli._get_api_key()
+        self.assertIn("No AI provider configured", str(context.exception))
🔎 Option 2: Move import to module level

Add at the top of the file:

import pytest

Then remove the inline imports from lines 33 and 67.


63-70: Same refactor opportunity as test_get_api_key_not_found.

This test has the same pattern of importing pytest inside the method and an unused mock_stderr parameter. Apply the same fix as suggested above.

🔎 Proposed fix using assertRaises
     @patch.dict(os.environ, {}, clear=True)
     @patch("sys.stderr")
-    def test_install_no_api_key(self, mock_stderr):  # Add 'mock_stderr' back here
-        """Test that install() raises ValueError when no provider is configured."""
-        import pytest
-
-        with pytest.raises(ValueError, match="No AI provider configured"):
+    def test_install_no_api_key(self, mock_stderr):
+        """Test that install() raises ValueError when no provider is configured."""
+        with self.assertRaises(ValueError) as context:
             self.cli.install("docker")
+        self.assertIn("No AI provider configured", str(context.exception))
cortex/cli.py (2)

76-78: Redundant condition in Ollama fallback check.

The condition explicit_provider == "ollama" on line 77 is redundant because if explicit_provider were "ollama", the function would have already returned at line 68. This branch can only be reached if explicit_provider is not in ["ollama", "openai", "claude", "fake"].

🔎 Proposed fix
         # 3. SMART FALLBACK: Only return "ollama" if it's explicitly set
-        if os.environ.get("OLLAMA_BASE_URL") or explicit_provider == "ollama":
+        if os.environ.get("OLLAMA_BASE_URL"):
             return "ollama"

80-87: Redundant error messaging.

The error is communicated twice:

  1. Line 85: self._print_error("No AI provider configured.") outputs with "error" styling
  2. Line 86: cx_print(error_msg, "info") outputs the same message (plus guidance) with "info" styling

Consider consolidating into a single, clear message.

🔎 Proposed fix
         # 4. If nothing is found, raise an error instead of sys.exit
-        error_msg = (
-            "No AI provider configured. "
-            "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
-        )
-        self._print_error("No AI provider configured.")
-        cx_print(error_msg, "info")
-        raise ValueError(error_msg)
+        self._print_error("No AI provider configured.")
+        cx_print(
+            "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.",
+            "info",
+        )
+        raise ValueError("No AI provider configured")
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6128711 and a74e72d.

📒 Files selected for processing (3)
  • cortex/cli.py
  • cortex/llm_router.py
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm_router.py
  • cortex/cli.py
  • tests/test_cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
🧬 Code graph analysis (3)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
tests/test_cli.py (1)
cortex/cli.py (2)
  • _get_api_key (45-62)
  • install (550-796)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/llm_router.py (1)

171-204: Ollama guardrail logic looks good.

The implementation correctly:

  • Checks for explicit CORTEX_PROVIDER=ollama or a non-empty OLLAMA_BASE_URL before enabling Ollama
  • Uses bool(ollama_url_env and ollama_url_env.strip()) to handle empty strings properly
  • Falls back to the default URL only when neither parameter nor env var provides a value
  • Logs at INFO level when Ollama is skipped (avoiding terminal clutter)

This aligns well with the PR objective to skip unnecessary Ollama connection attempts.

cortex/cli.py (1)

76-78: Minor inconsistency in whitespace handling with llm_router.py.

llm_router.py uses bool(ollama_url_env and ollama_url_env.strip()) to handle whitespace-only values, but cli.py uses a simple os.environ.get("OLLAMA_BASE_URL") check. If a user sets OLLAMA_BASE_URL=" " (whitespace only), cli.py would return "ollama" but llm_router.py would set ollama_enabled = False, causing a mismatch.

Consider aligning the checks:

         # 3. SMART FALLBACK: Only return "ollama" if it's explicitly set
-        if os.environ.get("OLLAMA_BASE_URL") or explicit_provider == "ollama":
+        ollama_url = os.environ.get("OLLAMA_BASE_URL", "").strip()
+        if ollama_url:
             return "ollama"

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from a74e72d to e871844 on January 3, 2026, 11:21
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/test_cli.py (1)

63-70: Same improvements as test_get_api_key_not_found.

This test has the same pattern as test_get_api_key_not_found:

  • pytest is imported inside the function (line 67) instead of at module level
  • mock_stderr parameter is unused

Please apply the same fixes suggested for the previous test.

🧹 Nitpick comments (3)
cortex/llm_router.py (1)

315-317: Comment could be clearer.

The comment on line 315 states "Track stats only if response is a valid object", but the code checks self.track_costs, not response validity. At this point in the code flow, response should always be a valid LLMResponse object (otherwise an exception would have been raised earlier).

Consider updating the comment to reflect the actual logic:

🔎 Proposed fix
-            # Track stats only if response is a valid object
+            # Track stats if cost tracking is enabled
             if self.track_costs:
                 self._update_stats(response)
tests/test_cli.py (1)

29-36: Import pytest at module level, and clarify unused mock_stderr.

Two minor improvements:

  1. Import location: pytest is imported inside the test function (line 33), but it's more conventional to import at the module level alongside other test dependencies.

  2. Unused parameter: The mock_stderr parameter is accepted (line 31) but never used in the test body. If it's needed to suppress stderr output during the test, a comment explaining this would be helpful. Otherwise, consider removing the @patch("sys.stderr") decorator if stderr suppression isn't required.

🔎 Proposed fix

Add pytest import at the top of the file:

 import os
 import sys
 import unittest
 from unittest.mock import Mock, patch
+import pytest

 sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))

Then remove the local import:

     @patch.dict(os.environ, {}, clear=True)
     @patch("sys.stderr")
     def test_get_api_key_not_found(self, mock_stderr):  # Add 'mock_stderr' back here
         """When no API key or Ollama is set, it should raise ValueError"""
-        import pytest
-
         with pytest.raises(ValueError, match="No AI provider configured"):
             self.cli._get_api_key()

If stderr patching is needed, add a comment:

     @patch.dict(os.environ, {}, clear=True)
-    @patch("sys.stderr")
+    @patch("sys.stderr")  # Suppress error output during test
     def test_get_api_key_not_found(self, mock_stderr):
cortex/cli.py (1)

64-87: Provider detection logic is correct, but consider consolidating error messages.

The updated _get_provider() method correctly:

  • Checks for explicit provider configuration (lines 65-68)
  • Auto-detects from API keys (lines 70-74)
  • Only returns "ollama" when explicitly configured (lines 76-78)
  • Raises ValueError instead of defaulting to Ollama (line 87)

However, lines 85-86 may result in duplicate error output:

self._print_error("No AI provider configured.")  # Line 85 - prints error
cx_print(error_msg, "info")                       # Line 86 - prints same info

Both methods call cx_print internally (based on _print_error calling cx_print per relevant_code_snippets). Consider removing line 85 or line 86 to avoid duplicate messages.

🔎 Proposed fix

Option 1: Remove the redundant _print_error call:

         # 4. If nothing is found, raise an error instead of sys.exit
         error_msg = (
             "No AI provider configured. "
             "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
         )
-        self._print_error("No AI provider configured.")
         cx_print(error_msg, "info")
         raise ValueError(error_msg)

Option 2: Use _print_error for the full message and remove cx_print:

         error_msg = (
             "No AI provider configured. "
             "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama."
         )
         self._print_error(error_msg)
-        cx_print(error_msg, "info")
         raise ValueError(error_msg)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a74e72d and e871844.

📒 Files selected for processing (3)
  • cortex/cli.py
  • cortex/llm_router.py
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/cli.py
  • tests/test_cli.py
  • cortex/llm_router.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
🧬 Code graph analysis (3)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
tests/test_cli.py (1)
cortex/cli.py (2)
  • _get_api_key (45-62)
  • install (550-796)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (3)
cortex/llm_router.py (3)

171-179: LGTM: Explicit Ollama enablement check is well-designed.

The three-condition check correctly ensures Ollama is only enabled when:

  1. Provider is explicitly set to "ollama"
  2. OLLAMA_BASE_URL is set and non-empty (after stripping whitespace)
  3. An ollama_base_url parameter is provided

The use of strip() on line 177 prevents empty strings from being treated as configured, which aligns with the behavior in doctor.py (line 301 per AI summary).


188-204: LGTM: Conditional initialization with appropriate logging.

The conditional initialization correctly:

  • Only attempts to connect when ollama_enabled is True
  • Logs success concisely (line 198)
  • Shows a warning only if the user explicitly tried to use Ollama but connection failed (line 201)
  • Uses INFO level when Ollama is not configured (line 204), avoiding unnecessary warnings in the terminal

This aligns well with the PR objective to prevent unwanted Ollama connection attempts and improve user experience.


308-311: Good: Explicit error handling for unsupported providers.

The addition of the else clause with ValueError (lines 310-311) provides clear error messages for unsupported providers, improving debuggability.

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from e871844 to 78e5125 on January 3, 2026, 11:52
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
cortex/llm_router.py (1)

205-207: Simplify comment for production code.

The comment on line 206 explains the reasoning behind using logger.info instead of logger.warning, but this level of detail is more suited for a code review discussion than production code.

🔎 Suggested simplification
         else:
-            # Change logger.warning to logger.info so it doesn't clutter the user's terminal
             logger.info("ℹ️  Ollama not configured, skipping local LLM initialization")
tests/test_cli.py (2)

6-6: Remove unused import.

The pytest module is imported but never used in this file. All tests use unittest.TestCase conventions.

🔎 Proposed fix
 import os
 import sys
 import unittest
 from unittest.mock import Mock, patch
-
-import pytest

32-37: Inconsistent assertion method with line 70.

Line 37 uses self.assertIn to check if the exception message contains the text, while line 70 uses self.assertEqual for an exact match. Both tests verify the same exception message from similar code paths, so they should use the same assertion method for consistency.

🔎 Proposed fix for consistency
     @patch.dict(os.environ, {}, clear=True)
     @patch("sys.stderr")  # Suppress error output during test
     def test_get_api_key_not_found(self, mock_stderr):
         """When no AI provider is set, it should raise ValueError."""
         with self.assertRaises(ValueError) as context:
             self.cli._get_api_key()
-        self.assertIn("No AI provider configured", str(context.exception))
+        self.assertEqual("No AI provider configured", str(context.exception))

This matches the exact-match approach used in test_install_no_api_key at line 70.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e871844 and 78e5125.

📒 Files selected for processing (3)
  • cortex/cli.py
  • cortex/llm_router.py
  • tests/test_cli.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_cli.py
  • cortex/llm_router.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (1)
cortex/cli.py (2)
  • _get_api_key (45-62)
  • install (550-796)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (3)
cortex/llm_router.py (2)

311-314: LGTM: Clear error handling for unsupported providers.

The explicit else clause with ValueError provides clear feedback when an unsupported provider is encountered, improving debuggability.


318-320: Good clarification of conditional tracking.

The comment update makes it explicit that stat tracking is conditional on the track_costs flag, improving code readability.

tests/test_cli.py (1)

65-70: LGTM: Test correctly validates provider requirement.

The test properly verifies that install() raises a ValueError with the expected message when no AI provider is configured. The sys.stderr patch appropriately suppresses error output during testing.

Kesavaraja67 force-pushed the bug/ollama-connection-316 branch from 78e5125 to 364b070 on January 3, 2026, 12:06
Kesavaraja67 (Contributor Author)

Hi @Anshgrover23, I have addressed the changes you mentioned and updated the PR description accordingly.
Can you review my PR again?

Kesavaraja67 (Contributor Author) left a comment


Addressed the changes suggested by the maintainer and CodeRabbit.

Anshgrover23 (Collaborator)

@Kesavaraja67

After making these changes, please record a short demo video using the cortex CLI commands. Avoid running the Python files directly.

Kesavaraja67 (Contributor Author) commented Jan 3, 2026

@Anshgrover23 Sorry about that, I forgot to update the demo video. It is updated now, and I used the cortex install nginx --dry-run command.

Anshgrover23 (Collaborator) left a comment


@Kesavaraja67 Sorry, I merged another PR before yours and it introduced conflicts.
Could you please resolve them? I'll review then.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
cortex/cli.py (1)

89-92: Simplify the comment to focus on business logic.

The comment "SMART FALLBACK: Alignment with llm_router whitespace handling" includes unnecessary implementation details. Consider simplifying to focus on the business logic.

🔎 Suggested comment improvement
-        # 3. SMART FALLBACK: Alignment with llm_router whitespace handling
+        # 3. Fallback to Ollama only if explicitly configured
         ollama_url = os.environ.get("OLLAMA_BASE_URL", "").strip()
         if ollama_url:
             return "ollama"
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 364b070 and 09552b6.

📒 Files selected for processing (2)
  • cortex/cli.py
  • tests/test_cli.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
🔇 Additional comments (2)
cortex/cli.py (2)

71-74: LGTM! Clear explicit provider check.

The comment and logic correctly handle explicit provider selection via CORTEX_PROVIDER environment variable.


70-100: Excellent fix - prevents unwanted Ollama connections.

This change correctly implements the PR objective to skip Ollama connection attempts when it's not explicitly configured. The four-step decision path is logical:

  1. Explicit CORTEX_PROVIDER (lines 71-74) ✓
  2. Auto-detected provider from API keys (lines 76-87) ✓
  3. Fallback to Ollama only if OLLAMA_BASE_URL is set (lines 89-92) ✓
  4. Clear error instead of silent fallback (lines 94-100) ✓

The strip() on line 90 properly handles whitespace edge cases, and the explicit error path with ValueError is much better than the old silent fallback behavior.
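
A module-level sketch of this four-step path; the provider strings and key names appear elsewhere in this thread, but everything else is assumed:

import os


def get_provider() -> str:
    """Sketch of the four-step resolution described above."""
    # 1. Explicit provider wins.
    explicit = os.environ.get("CORTEX_PROVIDER", "").lower()
    if explicit in ("claude", "openai", "ollama", "fake"):
        return explicit
    # 2. Auto-detect from API keys.
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "claude"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    # 3. Ollama only when explicitly configured.
    if os.environ.get("OLLAMA_BASE_URL", "").strip():
        return "ollama"
    # 4. Fail loudly instead of silently falling back to Ollama.
    raise ValueError("No AI provider configured")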

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
cortex/cli.py (1)

90-101: Ollama fallback aligns with llm_router, but duplicate messaging issue persists.

The whitespace handling on line 91 correctly aligns with llm_router.py (line 182). However, the duplicate error messaging issue flagged in a past review remains: lines 96-100 print error messages, then line 101 raises ValueError, which callers like ask() (line 557) and install() (line 787) will catch and print again.

🔎 Proposed fix to eliminate duplicate messages
-        # 4. Consolidated Error Message
-        self._print_error("No AI provider configured.")
-        cx_print(
-            "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or configure Ollama.",
-            "info",
-        )
-        raise ValueError("No AI provider configured")
+        # 4. If no provider is found, raise an error with helpful guidance
+        raise ValueError(
+            "No AI provider configured. "
+            "Please set ANTHROPIC_API_KEY, OPENAI_API_KEY, or OLLAMA_BASE_URL."
+        )
🧹 Nitpick comments (1)
tests/test_cli.py (1)

6-6: Unused import: MagicMock.

MagicMock is imported but only Mock appears to be used in the test file. Consider removing the unused import unless it's needed for future tests.

🔎 Proposed fix
-from unittest.mock import MagicMock, Mock, patch
+from unittest.mock import Mock, patch
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09552b6 and e8a27d9.

📒 Files selected for processing (3)
  • cortex/cli.py
  • cortex/llm_router.py
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm_router.py
  • cortex/cli.py
  • tests/test_cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
🧬 Code graph analysis (3)
cortex/llm_router.py (1)
cortex/logging_system.py (2)
  • info (200-202)
  • warning (204-206)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
cortex/api_key_detector.py (1)
  • setup_api_key (520-546)
tests/test_cli.py (1)
cortex/cli.py (3)
  • _get_api_key (46-69)
  • _get_provider (71-101)
  • install (564-810)
🪛 GitHub Actions: CI
cortex/cli.py

[error] 61-61: ValueError: No AI provider configured

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.10)
🔇 Additional comments (11)
cortex/llm_router.py (5)

171-188: Well-implemented Ollama guardrail with proper whitespace handling.

The logic correctly:

  • Strips whitespace from both parameter and environment variable
  • Uses truthiness checks to exclude empty strings
  • Only enables Ollama when explicitly configured via CORTEX_PROVIDER, OLLAMA_BASE_URL, or constructor parameter

This addresses the previous review feedback about inconsistent empty string handling.


190-197: Good defensive URL handling.

The trailing slash stripping on line 193-195 prevents malformed URLs like http://localhost:11434//v1 when constructing the full endpoint on line 208. Setting ollama_base_url = None when disabled is a safe default.
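
For illustration, normalizing the trailing slash before appending a path segment avoids the doubled slash:

base = "http://localhost:11434/"
base = base.rstrip("/")   # normalize before joining path segments
endpoint = f"{base}/v1"   # "http://localhost:11434/v1", not "...//v1"
print(endpoint)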


203-218: Properly guarded Ollama initialization with appropriate logging.

The conditional initialization prevents connection attempts when Ollama is not configured. Using logger.info (line 218) instead of a warning aligns with the PR objective to avoid unnecessary yellow warnings for cloud-only users.


322-325: Clean provider routing with explicit error handling.

The explicit else branch raising ValueError for unsupported providers improves debugging and prevents silent failures.


329-331: Minor documentation improvement.

The clarified comment accurately describes the conditional stats tracking behavior.

tests/test_cli.py (4)

20-27: Good test isolation pattern.

The global Path.home patch prevents filesystem side effects and "Could not determine home directory" errors across all tests in the class.
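
A minimal sketch of that isolation pattern, assuming a setUp-based patch (the exact target and fake path in the PR's tests may differ):

import unittest
from pathlib import Path
from unittest.mock import patch


class TestCortexCLI(unittest.TestCase):
    def setUp(self):
        # Patch Path.home for every test so nothing touches the real home dir.
        patcher = patch.object(Path, "home", return_value=Path("/tmp/fake-home"))
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_home_is_isolated(self):
        self.assertEqual(Path.home(), Path("/tmp/fake-home"))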


55-68: Tests correctly follow the production call sequence.

Calling _get_api_key() before _get_provider() mirrors production behavior where _get_api_key sets self._detected_provider that _get_provider subsequently uses (lines 78-82 in cli.py).


85-92: Test correctly verifies install fails without provider.

The test confirms that install() properly propagates the ValueError("No AI provider configured") from _get_api_key() when no provider is configured.


45-53: Test correctly verifies the no-provider error case.

The test properly mocks setup_api_key to return failure and asserts that ValueError("No AI provider configured") is raised. The decorator order and parameter mapping are correct—decorators apply bottom-up, so mock_stderr corresponds to the innermost @patch("sys.stderr") and mock_setup corresponds to @patch("cortex.cli.setup_api_key"). The exception message matches the one raised in cortex/cli.py at lines 61 and 69.
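
The bottom-up mapping is easy to verify with a standalone example (unrelated patch targets, chosen purely for illustration):

import os
from unittest.mock import patch


@patch("os.getcwd")  # outermost decorator -> last mock parameter
@patch("os.getpid")  # innermost decorator -> first mock parameter
def demo(mock_getpid, mock_getcwd):
    mock_getpid.return_value = 123
    mock_getcwd.return_value = "/tmp"
    assert os.getpid() == 123 and os.getcwd() == "/tmp"


demo()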

cortex/cli.py (2)

46-69: Non-interactive guard prevents test crashes but has a subtle issue.

The guard at lines 57-61 correctly prevents setup_api_key() from showing interactive menus during tests. However, there's a potential issue: the code path after the guard (lines 63-67) will never be reached if both API keys are unset, because we already raised on line 61.

The comment on line 63 says "Only if keys exist or we are interactive, try setup" but the logic actually requires at least one key to exist (interactive mode isn't detected). This is acceptable for the PR's goals but the comment is slightly misleading.


131-197: New notify() method is well-structured with proper validation.

Good additions:

  • Graceful handling of missing subcommand (line 134)
  • Time format validation for DND (lines 174-179)
  • Clear error messages for each failure case

The comment on lines 156-157 acknowledges the use of private _save_config() method. Consider adding a public save() method to NotificationManager in a follow-up to improve the API.
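
The thread does not show the time validation itself; one plausible HH:MM check (hypothetical helper name and regex) would be:

import re

_HHMM = re.compile(r"(?:[01]\d|2[0-3]):[0-5]\d")


def is_valid_dnd_time(value: str) -> bool:
    """Hypothetical sketch: accept 24-hour HH:MM strings such as '22:30'."""
    return bool(_HHMM.fullmatch(value))


assert is_valid_dnd_time("22:30")
assert not is_valid_dnd_time("25:00")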

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/test_cli_extended.py (1)

43-55: Test correctly validates the error case.

The test properly verifies that _get_api_key raises ValueError("No AI provider configured") when no keys are available, which aligns with the PR's goal to fail gracefully when no provider is configured.

Optional cleanup: Unused mock parameters.

Both mock_stderr (line 46) and the mock_setup.return_value configuration (line 49) are unused in the actual test execution. Given that the environment is cleared and no API keys exist, _get_api_key raises the ValueError before calling setup_api_key, making the mock_setup configuration unreachable.

🔎 Optional simplification
-@patch.dict(os.environ, {}, clear=True)
-@patch("cortex.cli.setup_api_key")
-@patch("sys.stderr")
-def test_get_api_key_not_found(self, mock_stderr, mock_setup):
+@patch.dict(os.environ, {}, clear=True)
+def test_get_api_key_not_found(self):
     """Test that _get_api_key raises ValueError when no keys are provided."""
-    # Force setup_api_key to report no key found
-    mock_setup.return_value = (False, None, None)
-
-    # FIX: Wrap in assertRaises to catch the new guardrail error
     with self.assertRaises(ValueError) as context:
         self.cli._get_api_key()
 
     self.assertEqual("No AI provider configured", str(context.exception))
tests/integration/test_end_to_end.py (1)

119-127: Consider extending _run helper to reduce duplication.

Lines 121-122 duplicate the effective_env creation logic from _run (lines 29-31), and lines 125-127 duplicate the bootstrap command modification logic (lines 34-36). This test bypasses the _run helper because it needs dev dependencies, but the helper could be extended to support both use cases.

🔎 Proposed refactor to use the helper method

Extend the _run method to accept an optional bootstrap command parameter:

-    def _run(self, command: str, env: dict | None = None) -> DockerRunResult:
+    def _run(
+        self, 
+        command: str, 
+        env: dict | None = None,
+        bootstrap: str = PIP_BOOTSTRAP
+    ) -> DockerRunResult:
         effective_env = dict(BASE_ENV)
         if env:
             effective_env.update(env)
 
-        # Apply the fix globally to the helper function so all tests pass
-        safe_bootstrap = PIP_BOOTSTRAP.replace(
-            "pip install", "pip install --root-user-action=ignore"
-        )
+        safe_bootstrap = _make_safe_bootstrap(bootstrap)
 
         return run_in_docker(
             DEFAULT_IMAGE,
             f"{safe_bootstrap} && {command}",
             env=effective_env,
             mounts=[MOUNT],
             workdir="/workspace",
         )

Then simplify the test:

     def test_project_tests_run_inside_container(self):
         """The unified test runner should pass within the container."""
 
         env = {
             "CORTEX_PROVIDER": "fake",
             "CORTEX_FAKE_COMMANDS": json.dumps({"commands": ["echo plan"]}),
         }
 
-        # FIX 1: Define effective_env to fix the NameError
-        effective_env = dict(BASE_ENV)
-        effective_env.update(env)
-
-        # FIX 2: Suppress the pip root warning to prevent result.succeeded() from failing
-        bootstrap_cmd = PIP_BOOTSTRAP_DEV.replace(
-            "pip install", "pip install --root-user-action=ignore"
-        )
-
-        result = run_in_docker(
-            DEFAULT_IMAGE,
-            f"{bootstrap_cmd} && pytest tests/ -v --ignore=tests/integration",
-            env=effective_env,
-            mounts=[MOUNT],
-            workdir="/workspace",
-        )
+        result = self._run(
+            "pytest tests/ -v --ignore=tests/integration",
+            env=env,
+            bootstrap=PIP_BOOTSTRAP_DEV
+        )
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e8a27d9 and 9ffd853.

📒 Files selected for processing (2)
  • tests/integration/test_end_to_end.py
  • tests/test_cli_extended.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_cli_extended.py
  • tests/integration/test_end_to_end.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli_extended.py
  • tests/integration/test_end_to_end.py
🧬 Code graph analysis (2)
tests/test_cli_extended.py (2)
tests/test_cli.py (1)
  • test_get_api_key_not_found (48-53)
cortex/cli.py (1)
  • _get_api_key (46-69)
tests/integration/test_end_to_end.py (1)
tests/integration/docker_utils.py (1)
  • run_in_docker (52-108)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/test_end_to_end.py (1)

122-127: Consider rephrasing the comments for a more professional tone.

The "FIX 1" and "FIX 2" prefixes are informal. Consider rephrasing them as regular explanatory comments that describe the purpose rather than framing them as fixes.

🔎 Suggested rephrasing
-        # FIX 1: Define effective_env to fix the NameError
+        # Create environment with BASE_ENV and test-specific overrides
         effective_env = dict(BASE_ENV)
         effective_env.update(env)
 
-        # FIX 2: Suppress the pip root warning to prevent result.succeeded() from failing
+        # Use safe bootstrap to suppress pip root-user warnings in Docker
         bootstrap_cmd = _make_safe_bootstrap(PIP_BOOTSTRAP_DEV)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ffd853 and b353623.

📒 Files selected for processing (1)
  • tests/integration/test_end_to_end.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/integration/test_end_to_end.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/integration/test_end_to_end.py
🧬 Code graph analysis (1)
tests/integration/test_end_to_end.py (2)
tests/integration/docker_utils.py (1)
  • run_in_docker (52-108)
cortex/cli.py (1)
  • env (955-1001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
🔇 Additional comments (2)
tests/integration/test_end_to_end.py (2)

37-42: LGTM!

The refactoring successfully eliminates duplication by using the extracted helper function. The implementation is clean and addresses the previous review feedback.


24-26: Add return type hint.

The function signature is missing a return type annotation. Per the coding guidelines, type hints are required in Python code.

🔎 Proposed fix
-def _make_safe_bootstrap(bootstrap_cmd: str) -> str:
+def _make_safe_bootstrap(bootstrap_cmd: str) -> str:
     """Add --root-user-action=ignore to pip install commands for Docker root contexts."""
     return bootstrap_cmd.replace("pip install", "pip install --root-user-action=ignore")

Likely an incorrect or invalid review comment.

Contributor Author

@Kesavaraja67 Kesavaraja67 left a comment

Addressed the requested changes.

@Kesavaraja67
Contributor Author

@Anshgrover23 I have resolved the conflict and addressed the requested changes. Could you review my PR and merge it?

Collaborator

@Anshgrover23 Anshgrover23 left a comment

@Kesavaraja67 Please follow the updated contribution guidelines. We’ve added new rules around AI usage in CONTRIBUTING.md. 2e68b29

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
cortex/cli.py (3)

71-100: Add missing docstring for _get_provider() method.

This method lacks a docstring, which is required per PEP 257 and the coding guidelines.

🔎 Proposed fix
     def _get_provider(self) -> str:
+        """
+        Determine the AI provider from environment or auto-detection.
+        
+        Returns:
+            Provider name ("ollama", "openai", "claude", or "fake").
+            
+        Raises:
+            ValueError: If no provider is configured.
+        """
         # 1. Check environment variable for explicit provider choice

As per coding guidelines, docstrings are required for all public APIs. Based on past review comments, this was previously flagged.


113-127: Add missing docstrings for utility methods.

The utility methods _print_error(), _print_success(), _animate_spinner(), and _clear_line() lack docstrings, which are required per PEP 257 and the coding guidelines.

🔎 Proposed fix
     def _print_error(self, message: str):
+        """Print an error message using cx_print."""
         cx_print(f"Error: {message}", "error")

     def _print_success(self, message: str):
+        """Print a success message using cx_print."""
         cx_print(message, "success")

     def _animate_spinner(self, message: str):
+        """Display an animated spinner with a message."""
         sys.stdout.write(f"\r{self.spinner_chars[self.spinner_idx]} {message}")
         sys.stdout.flush()
         self.spinner_idx = (self.spinner_idx + 1) % len(self.spinner_chars)
         time.sleep(0.1)

     def _clear_line(self):
+        """Clear the current terminal line."""
         sys.stdout.write("\r\033[K")
         sys.stdout.flush()

As per coding guidelines, docstrings are required for all public APIs. Based on past review comments, this was previously flagged.


46-69: Address past review feedback: Fix return type annotation and add docstring.

The return type annotation -> str | None is incorrect since this method never returns None—it raises ValueError in all error paths. The annotation should be -> str.

Additionally, a docstring is required per PEP 257 and the coding guidelines for this public method.

🔎 Proposed fix
-    def _get_api_key(self) -> str | None:
+    def _get_api_key(self) -> str:
+        """
+        Retrieve API key from environment or provider setup.
+        
+        Returns:
+            API key string for the configured provider.
+            
+        Raises:
+            ValueError: If no API provider is configured.
+        """
         # 1. Check explicit provider override first

As per coding guidelines, type hints and docstrings are required for all public APIs. Based on past review comments, this issue was previously flagged and remains unaddressed.

🧹 Nitpick comments (1)
cortex/cli.py (1)

130-196: Add docstring for the notify() method.

The implementation of the notification handling is well-structured with proper input validation and error handling. However, the method lacks a docstring as required by PEP 257 and the coding guidelines.

Note: The use of mgr._save_config() (lines 157, 163, 182) is acceptable as acknowledged in the past review comments, given that NotificationManager doesn't expose a public save method.

🔎 Proposed addition
     def notify(self, args):
-        """Handle notification commands"""
+        """
+        Handle notification management commands.
+        
+        Args:
+            args: Namespace with notify_action and related parameters.
+            
+        Returns:
+            Exit code (0 for success, 1 for error).
+        """
         # Addressing CodeRabbit feedback: Handle missing subcommand gracefully

As per coding guidelines, docstrings are required for all public APIs.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b353623 and 9d9779c.

📒 Files selected for processing (2)
  • cortex/cli.py
  • tests/test_cli_extended.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_cli_extended.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
cortex/api_key_detector.py (1)
  • setup_api_key (520-546)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Test (Python 3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
🔇 Additional comments (2)
cortex/cli.py (2)

90-93: LGTM: Smart Ollama fallback logic.

The implementation correctly gates Ollama usage by only returning "ollama" as a provider when OLLAMA_BASE_URL is explicitly set in the environment. This aligns with the PR objective to skip Ollama connection attempts when not configured.

The .strip() call ensures whitespace-only values are treated as unset, which is a good defensive practice.
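A small sketch of that gating, under assumed names (illustrative, not the exact cli.py code):

    import os

    def detect_ollama() -> str | None:
        """Return "ollama" only when the user has explicitly configured it."""
        base_url = os.getenv("OLLAMA_BASE_URL", "")
        # .strip() treats whitespace-only values as unset.
        return "ollama" if base_url.strip() else None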


54-61: LGTM: API key guard prevents unexpected prompts.

The guard logic correctly prevents setup_api_key() from showing interactive prompts when no API keys are configured. This addresses the PR objective to fail gracefully with clear messaging, directing users to either run cortex wizard or use Ollama offline mode.

The approach preserves the interactive wizard flow (accessed via cortex wizard command) while preventing unexpected prompts during install/ask operations.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d9779c and dd6cf5c.

📒 Files selected for processing (1)
  • cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (3)
cortex/first_run_wizard.py (1)
  • _print_error (746-748)
cortex/branding.py (1)
  • cx_print (49-69)
cortex/api_key_detector.py (1)
  • setup_api_key (520-546)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.10)
🔇 Additional comments (2)
cortex/cli.py (2)

114-132: LGTM: Docstrings added per PEP 257.

All utility methods now have concise single-line docstrings that clearly describe their purpose. Type hints are properly maintained (e.g., message: str on line 122).


72-101: Provider resolution logic is sound, but depends on fixing _get_api_key.

The method correctly implements the fallback chain: explicit provider → detected provider → environment variables → OLLAMA_BASE_URL → error. The "SMART FALLBACK" at lines 92-95 properly checks for Ollama configuration.

However, this logic is only reachable if _get_api_key() succeeds. For Ollama-only users (those with only OLLAMA_BASE_URL configured), _get_api_key() raises a ValueError at line 58-62 because it checks for OPENAI_API_KEY or ANTHROPIC_API_KEY before checking for Ollama configuration. Users will never reach this provider detection code until _get_api_key() is fixed to handle Ollama-only setups.
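For clarity, the described fallback chain as a hedged sketch (assuming the environment-variable names discussed above; not the literal cli.py implementation):

    import os

    def resolve_provider() -> str:
        """Resolve the provider: explicit env var, then API keys, then Ollama URL."""
        explicit = os.getenv("CORTEX_PROVIDER", "").strip().lower()
        if explicit:
            return explicit
        if os.getenv("ANTHROPIC_API_KEY"):
            return "claude"
        if os.getenv("OPENAI_API_KEY"):
            return "openai"
        if os.getenv("OLLAMA_BASE_URL", "").strip():
            return "ollama"
        raise ValueError("No AI provider configured")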

@sonarqubecloud

sonarqubecloud bot commented Jan 4, 2026

Contributor Author

@Kesavaraja67 Kesavaraja67 left a comment

Addressed the changes mentioned.

@Kesavaraja67
Contributor Author

@Anshgrover23 I have addressed all the latest critical and minor feedback. Could you review my PR and merge it?

Collaborator

@Anshgrover23 Anshgrover23 left a comment

@Kesavaraja67 Thanks for the PR. This is already handled by a previously merged change that detects configured LLM providers and required keys/models before execution. With that in place, Ollama is skipped automatically when not configured or not running, so this logic is redundant now. Closing as not needed.
[Screenshot attached: 2026-01-04 2:40 PM]

Successfully merging this pull request may close these issues.

Bug: Cortex attempts Ollama connection when not configured