feat: TUI dashboard fix Interactive TUI Dashboard #277
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a new terminal-based Cortex Dashboard (Rich TUI) and a `cortex dashboard` CLI command.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI (main)
    participant DashboardApp
    participant UIRenderer
    participant SystemMonitor
    participant ProcessLister
    participant ModelLister
    participant CommandHistory
    User->>CLI: run "cortex dashboard"
    CLI->>DashboardApp: instantiate
    DashboardApp->>SystemMonitor: start collection
    DashboardApp->>ProcessLister: start listing
    DashboardApp->>ModelLister: start model checks
    DashboardApp->>CommandHistory: load history
    DashboardApp->>UIRenderer: instantiate(with components)
    DashboardApp->>UIRenderer: run()
    loop realtime loop
        UIRenderer->>SystemMonitor: collect metrics
        UIRenderer->>ProcessLister: update processes
        UIRenderer->>ModelLister: refresh models
        UIRenderer->>CommandHistory: refresh history
        UIRenderer->>User: render UI
    end
    User->>UIRenderer: key input (i/b/d/c/q)
    UIRenderer->>UIRenderer: handle action (dialog/confirm/progress)
    UIRenderer->>User: exit gracefully
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 4
🧹 Nitpick comments (17)
requirements-dev.txt (1)
23-26: Remove duplicate dependencies already included via `requirements.txt`.

Since `requirements.txt` is included via `-r requirements.txt` on line 4, and `psutil>=5.0.0` and `pynvml>=11.0.0` are already added there, these lines are redundant. Duplicating version constraints risks divergence if updated in only one place. Apply this diff:

```diff
-
-# System monitoring (for dashboard)
-psutil>=5.0.0
-pynvml>=11.0.0
```

cortex/cli.py (2)
751-768: Dashboard method implementation looks good with minor refinements possible.

The error handling appropriately catches `ImportError` for missing dependencies and provides helpful guidance. The `KeyboardInterrupt` handling correctly returns 0 for user-initiated exit. Consider logging the exception details in verbose mode for debugging purposes:

```diff
 def dashboard(self):
     """Launch the real-time system monitoring dashboard"""
     try:
         from cortex.dashboard import DashboardApp
         app = DashboardApp()
         app.run()
         return 0
     except ImportError as e:
         self._print_error(f"Dashboard dependencies not available: {e}")
         cx_print("Install required packages with:", "info")
         cx_print("  pip install psutil pynvml", "info")
         return 1
     except KeyboardInterrupt:
         return 0
     except Exception as e:
+        self._debug(f"Dashboard exception: {type(e).__name__}: {e}")
         self._print_error(f"Dashboard error: {e}")
         return 1
```
843-845: Suppress unused variable warning by using a `_` prefix.

The static analysis correctly identifies `dashboard_parser` as unused. Since no arguments are added to this subparser, use a `_` prefix to indicate this is intentional.

```diff
 # Dashboard command
-dashboard_parser = subparsers.add_parser('dashboard', help='Real-time system monitoring dashboard')
+_dashboard_parser = subparsers.add_parser('dashboard', help='Real-time system monitoring dashboard')
```

Alternatively, simply don't assign the result if the reference isn't needed:

```python
subparsers.add_parser('dashboard', help='Real-time system monitoring dashboard')
```

test/test_dashboard.py (3)
9-15: Consider using pytest fixtures instead of repeated dynamic imports.

The `load_dashboard()` function is called at the start of every test, re-executing the module each time. This is inefficient and bypasses pytest's fixture system.

```diff
+import pytest
+
+@pytest.fixture(scope="module")
+def dashboard():
+    """Load dashboard module once for all tests"""
     path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "cortex", "dashboard.py")
     spec = importlib.util.spec_from_file_location("dashboard", path)
-    dashboard = importlib.util.module_from_spec(spec)
+    module = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(dashboard)
+    spec.loader.exec_module(module)
-    return dashboard
+    return module
```

Then update tests to accept the fixture:

```python
def test_system_monitor(dashboard):
    monitor = dashboard.SystemMonitor()
    # ...
```
76-86: Prefer testing public interface over private methods.

Testing `_render_*` methods directly couples tests to implementation details. If the internal structure changes, tests will break even if public behavior remains correct. Consider testing the `_render_screen()` output or the final rendered result. Also, the `all()` assertion obscures which component failed:

```diff
-    assert all([header, resources, processes, hist, actions, footer, screen]), \
-        "All components should render"
+    assert header, "Header should render"
+    assert resources, "Resources should render"
+    assert processes, "Processes should render"
+    assert hist, "History should render"
+    assert actions, "Actions should render"
+    assert footer, "Footer should render"
+    assert screen, "Screen should render"
```
113-148: Consider converting to standard pytest format.

Since `pytest>=7.0.0` is already in dev dependencies, the custom test runner is redundant. Pytest provides better reporting, parallelization, fixtures, and CI integration out of the box. The tests can be run directly with `pytest test/test_dashboard.py -v` without the custom `main()` runner. If you want to keep the standalone runner capability, you can use:

```python
if __name__ == "__main__":
    import pytest
    sys.exit(pytest.main([__file__, "-v"]))
```

docs/DASHBOARD_IMPLEMENTATION.md (2)
51-73: Add language specifier to fenced code blocks for better rendering.

The ASCII diagram and other code blocks lack language specifiers, which affects syntax highlighting in some renderers. Use `text` or `plaintext` for diagrams:

````diff
-```
+```text
 ┌─────────────────────────────────────────────────────┐
 │         DashboardApp (Main Orchestrator)            │
````

This also applies to other code blocks at lines 93, 121, 129, 275, 296, 316, 338, 357, 370, and 706.
731-733: Convert bare URLs to markdown links.

Use proper markdown link syntax for consistency and better accessibility:

```diff
-- **Rich Library:** https://rich.readthedocs.io/
-- **psutil:** https://psutil.readthedocs.io/
-- **NVIDIA NVML (pynvml):** https://docs.nvidia.com/cuda/nvml-api/
+- **Rich Library:** [rich.readthedocs.io](https://rich.readthedocs.io/)
+- **psutil:** [psutil.readthedocs.io](https://psutil.readthedocs.io/)
+- **NVIDIA NVML (pynvml):** [NVML API Docs](https://docs.nvidia.com/cuda/nvml-api/)
```

cortex/dashboard.py (9)
28-34: Chain exceptions for better debuggability.

Re-raising `ImportError` without chaining loses the original traceback context. Use `raise ... from e` to preserve the exception chain.

```diff
 except ImportError as e:
-    raise ImportError(f"rich library required: {e}. Install with: pip install rich")
+    raise ImportError(f"rich library required: {e}. Install with: pip install rich") from e

 try:
     import psutil
 except ImportError as e:
-    raise ImportError(f"psutil library required: {e}. Install with: pip install psutil")
+    raise ImportError(f"psutil library required: {e}. Install with: pip install psutil") from e
```
177-178: Use `logging.exception` to capture stack traces.

When logging errors in exception handlers, `logging.exception` automatically includes the stack trace, which aids debugging.

```diff
 except Exception as e:
-    logger.error(f"Metrics error: {e}")
+    logger.exception("Metrics error: %s", e)
```
184-188: Use `frozenset` for immutable class constant.

`KEYWORDS` is a class-level constant that shouldn't be mutated. Using `frozenset` makes immutability explicit and satisfies the static analysis warning about mutable class attributes.

```diff
-    KEYWORDS = {
+    KEYWORDS: frozenset[str] = frozenset({
         'python', 'node', 'ollama', 'llama', 'bert',
         'gpt', 'transformers', 'inference', 'pytorch', 'tensorflow',
         'cortex', 'cuda'
-    }
+    })
```

Also note that `'python'` and `'node'` are very broad keywords that may match many unrelated processes. Consider whether more specific filtering is needed.
215-216: Use `logging.exception` for error logging in exception handlers.

```diff
 except Exception as e:
-    logger.error(f"Process listing error: {e}")
+    logger.exception("Process listing error: %s", e)
```
371-372: Rename unused loop variable.

The `action_type` variable is not used within the loop body. Rename it to `_action_type` to indicate it's intentionally unused.

```diff
-        for key, name, action_type in action_items:
+        for key, name, _action_type in action_items:
             actions.append(f"[cyan]{key}[/cyan] {name}")
```
402-467: Consider refactoring to reduce cognitive complexity.

SonarCloud flags this method with cognitive complexity of 19 (allowed: 15). Consider extracting helper methods for distinct rendering concerns. You could extract:

- Progress bar rendering (lines 416-421) into `_render_progress_bar()`
- Doctor results rendering (lines 432-436) into `_render_doctor_results()`
- Status messages rendering (lines 445-448) into `_render_status_messages()`

This would improve readability and testability while reducing complexity.
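As a rough sketch of the first extraction (the helper name and signature follow the suggestion above but are hypothetical, not code from the PR), a progress bar renderer can be written as a pure function, which makes it trivial to unit-test independently of the panel:

```python
def render_progress_bar(percent: float, width: int = 30) -> str:
    """Render a textual progress bar such as '[=====-----] 50%'.

    Hypothetical standalone version of the suggested _render_progress_bar()
    helper; a pure function of its inputs, so it needs no renderer state.
    """
    percent = max(0.0, min(100.0, percent))  # clamp out-of-range input
    filled = int(width * percent / 100)
    return f"[{'=' * filled}{'-' * (width - filled)}] {percent:.0f}%"
```

Because the helper holds no state, each extracted piece can be tested in isolation while `_render_progress_panel` shrinks to assembling their return values.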
734-788: Refactor `run()` method to reduce cognitive complexity.

SonarCloud flags cognitive complexity of 19 (allowed: 15). The method handles terminal setup, threading, keyboard input, and cleanup in one place. Consider extracting:

- Terminal setup/teardown into a context manager or helper methods
- The main loop body into a separate method

Also, the try-except-pass blocks at lines 745-746 and 786-787 silently swallow errors. At minimum, log these at debug level:

```diff
 try:
     old_settings = termios.tcgetattr(sys.stdin)
     tty.setcbreak(sys.stdin.fileno())
-except Exception:
-    pass
+except Exception as e:
+    logger.debug("Failed to set terminal mode: %s", e)
```
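One way to carry out the terminal setup/teardown extraction is a small context manager. This is a hedged, POSIX-only sketch (the function name is illustrative, not the PR's API) that also logs failures at debug level instead of swallowing them:

```python
import logging
import sys
import termios
import tty
from contextlib import contextmanager

logger = logging.getLogger(__name__)

@contextmanager
def cbreak_terminal(stream=sys.stdin):
    """Put the terminal into cbreak mode for the duration of the block.

    Degrades gracefully (with a debug log) when the stream is not a real
    TTY, e.g. under pytest or when stdin is a pipe; restores the previous
    settings on exit.
    """
    old_settings = None
    try:
        old_settings = termios.tcgetattr(stream)
        tty.setcbreak(stream.fileno())
    except (termios.error, ValueError, OSError) as e:
        logger.debug("Failed to set terminal mode: %s", e)
    try:
        yield
    finally:
        if old_settings is not None:
            termios.tcsetattr(stream, termios.TCSADRAIN, old_settings)
```

`run()` could then wrap its keyboard loop in `with cbreak_terminal(): ...`, and both try-except-pass blocks disappear.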
758-759: Use `logging.exception` for error logging.

```diff
 except Exception as e:
-    logger.error(f"Monitor error: {e}")
+    logger.exception("Monitor error: %s", e)
```
50-52: Avoid global logging configuration in module.
`logging.basicConfig()` configures the root logger and affects all modules. This could interfere with logging configuration set by the main application or other libraries. Configure only this module's logger instead:

```diff
-# Suppress verbose logging
-logging.basicConfig(level=logging.WARNING)
 logger = logging.getLogger(__name__)
+logger.setLevel(logging.WARNING)
```

Or better, let the application's entry point configure logging globally.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cortex/cli.py (3 hunks)
- cortex/dashboard.py (1 hunks)
- docs/DASHBOARD_IMPLEMENTATION.md (1 hunks)
- requirements-dev.txt (1 hunks)
- requirements.txt (1 hunks)
- test/test_dashboard.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
cortex/cli.py (2)
- cortex/dashboard.py (2): run (734-787), run (799-813)
- cortex/branding.py (1): cx_print (55-75)

cortex/dashboard.py (1)
- src/progress_tracker.py (1): elapsed_time (69-74)
🪛 GitHub Check: SonarCloud Code Analysis
cortex/cli.py
[warning] 844-844: Remove the unused local variable "dashboard_parser".
cortex/dashboard.py
[failure] 402-402: Refactor this function to reduce its Cognitive Complexity from 19 to the 15 allowed.
[failure] 734-734: Refactor this function to reduce its Cognitive Complexity from 19 to the 15 allowed.
[warning] 89-89: Replace the type hint "datetime" with "Optional[datetime]" or don't assign "None" to "timestamp"
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
🪛 Ruff (0.14.7)
test/test_dashboard.py
135-135: Do not catch blind exception: Exception
(BLE001)
cortex/cli.py
758-758: Consider moving this statement to an else block
(TRY300)
766-766: Do not catch blind exception: Exception
(BLE001)
844-844: Local variable dashboard_parser is assigned to but never used
Remove assignment to unused variable dashboard_parser
(F841)
cortex/dashboard.py
29-29: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
29-29: Avoid specifying long messages outside the exception class
(TRY003)
34-34: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
34-34: Avoid specifying long messages outside the exception class
(TRY003)
138-138: Do not catch blind exception: Exception
(BLE001)
163-163: Do not catch blind exception: Exception
(BLE001)
177-177: Do not catch blind exception: Exception
(BLE001)
178-178: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
184-188: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
215-215: Do not catch blind exception: Exception
(BLE001)
216-216: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
247-247: Do not catch blind exception: Exception
(BLE001)
371-371: Loop control variable action_type not used within loop body
Rename unused action_type to _action_type
(B007)
730-730: Do not catch blind exception: Exception
(BLE001)
745-746: try-except-pass detected, consider logging the exception
(S110)
745-745: Do not catch blind exception: Exception
(BLE001)
758-758: Do not catch blind exception: Exception
(BLE001)
759-759: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
786-787: try-except-pass detected, consider logging the exception
(S110)
786-786: Do not catch blind exception: Exception
(BLE001)
809-809: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (7)
cortex/cli.py (1)
894-895: LGTM!

The command routing follows the established pattern and correctly returns the exit code from `cli.dashboard()`.

test/test_dashboard.py (1)
48-59: Test may cause state pollution by adding commands to history.

The test calls `history.add_command("test")`, which may persist beyond the test run if `CommandHistory` writes to disk. This could pollute subsequent test runs or actual user history. Consider mocking the history or using a temporary/isolated history file for tests.
requirements.txt (1)
13-15: Consider making `pynvml` an optional dependency.

`pynvml` requires NVIDIA GPU drivers and will fail to install or import on systems without NVIDIA hardware (AMD GPUs, Intel-only, ARM devices). Since the dashboard gracefully degrades when GPU monitoring is unavailable, consider:

- Moving `pynvml` to an optional extras group (e.g., via `extras_require` in setup.py/pyproject.toml), or
pynvmlto an optional extras group (e.g., viaextras_requirein setup.py/pyproject.toml), or- Documenting that installation may fail on non-NVIDIA systems and users can safely ignore or skip it.
cortex/dashboard.py (4)
55-78: LGTM! The enum definitions are clean, well-documented, and appropriately cover the dashboard states and actions.

224-260: LGTM! The CommandHistory class handles shell history loading gracefully with proper error handling and thread-safe access. The deque with maxlen is a good choice for bounded storage.

790-814: LGTM! The DashboardApp class provides a clean entry point with proper error handling and cleanup. The 1-second startup delay (line 805) provides visual feedback to the user.

816-823: LGTM! Clean entry point with a proper `__main__` guard.
Pull request overview
This PR adds a terminal-based interactive dashboard feature to Cortex that provides real-time system monitoring (CPU, RAM, GPU), process tracking, and command history display. The dashboard integrates keyboard navigation for switching between tabs and executing actions like package installation, system benchmarks, and health checks. The implementation uses the Rich library for UI rendering, psutil for system metrics, and pynvml for optional GPU monitoring.
Key Changes
- Adds new `cortex dashboard` CLI command that launches an interactive TUI
- Implements real-time system resource monitoring with configurable refresh rates
- Provides multi-tab interface with keyboard navigation (Tab, 1-4, q keys) for installation, benchmarking, and system diagnostics
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 45 comments.
Show a summary per file
| File | Description |
|---|---|
| cortex/dashboard.py | Core dashboard implementation with SystemMonitor, ProcessLister, CommandHistory, UIRenderer, and DashboardApp classes (823 lines) |
| cortex/cli.py | Adds dashboard() method and argument parser integration for new command |
| requirements.txt | Adds psutil>=5.0.0 and pynvml>=11.0.0 dependencies for system monitoring |
| requirements-dev.txt | Duplicates system monitoring dependencies (should reference requirements.txt instead) |
| test/test_dashboard.py | Test suite using standalone functions instead of unittest.TestCase pattern (inconsistent with project conventions) |
| docs/DASHBOARD_IMPLEMENTATION.md | Comprehensive 760-line implementation guide with architecture details, usage examples, and troubleshooting |
Actionable comments posted: 1
♻️ Duplicate comments (3)
test/test_dashboard.py (2)
20-147: Align test structure with the rest of the suite (unittest/TestCase vs custom main).

This module uses free functions and a custom `main()` loop rather than the `unittest.TestCase` pattern used elsewhere in `test/`, which was already raised in earlier review comments. Consider refactoring into a `TestDashboard` test case so it plugs cleanly into the existing runner and remains consistent.
64-97: Strengthen UI renderer assertions and add interaction coverage.

`test_ui_renderer()` only asserts `all([header, resources, ...])`, which will always be truthy for non-None objects and doesn't validate content or behavior. It also doesn't exercise keyboard-driven flows (Tab, 1-4, input mode) or progress/doctor/bench workflows that are core to this feature and previously flagged as missing.

```diff
     # Test rendering
     header = ui._render_header()
     resources = ui._render_resources()
     processes = ui._render_processes()
     hist = ui._render_history()
     actions = ui._render_actions()
     footer = ui._render_footer()
     screen = ui._render_screen()

-    assert all([header, resources, processes, hist, actions, footer, screen]), \
-        "All components should render"
+    for name, component in {
+        "header": header,
+        "resources": resources,
+        "processes": processes,
+        "history": hist,
+        "actions": actions,
+        "footer": footer,
+        "screen": screen,
+    }.items():
+        assert len(str(component)) > 0, f"{name} should have content"
+
+    # Follow-up: consider tests that simulate key input and verify:
+    # - Tab cycles HOME/PROGRESS tabs
+    # - Keys 1-4 start/cancel install/bench/doctor
+    # - Input mode correctly handles Enter/Escape/Backspace and character input
+    # - Progress state is updated as expected for install/bench/doctor flows
```

If you'd like, I can sketch concrete tests that drive `_handle_key_press` and assert on `current_tab`, `installation_progress.state`, and the `doctor_results`/`bench_status` fields.

cortex/dashboard.py (1)
150-159: Ensure NVML is properly shut down to avoid GPU resource leaks.

`SystemMonitor._init_gpu()` calls `pynvml.nvmlInit()` but there's no corresponding shutdown, so the NVML handle is never released. Adding a small `shutdown()` hook and invoking it from `DashboardApp` cleanup will address this (this was also mentioned in earlier reviews).

```diff
 class SystemMonitor:
     def _init_gpu(self):
         """Initialize GPU monitoring if available"""
         if not GPU_AVAILABLE or pynvml is None:
             return
         try:
             pynvml.nvmlInit()
             self.gpu_initialized = True
         except Exception as e:
             logger.debug(f"GPU init failed: {e}")
+
+    def shutdown(self) -> None:
+        """Release GPU monitoring resources if initialized."""
+        if self.gpu_initialized and pynvml is not None:
+            try:
+                pynvml.nvmlShutdown()
+            except Exception as e:
+                logger.debug(f"GPU shutdown error: {e}")
+            finally:
+                self.gpu_initialized = False

     def run(self):
         """Run the app"""
         console = Console()
         try:
             ...
         except Exception as e:
             console.print(f"[red]Error: {e}[/red]")
         finally:
             self.ui.running = False
+            # Ensure system monitor resources (e.g. NVML) are cleaned up.
+            try:
+                self.monitor.shutdown()
+            except Exception:
+                logger.debug("Failed to shutdown SystemMonitor cleanly")
             console.print("\n[yellow]Dashboard shutdown[/yellow]")
```

Please double-check NVML shutdown behavior against the `pynvml` docs in your target environment.

Also applies to: 830-844
🧹 Nitpick comments (5)
test/test_dashboard.py (1)
1-17: Simplify dashboard loading and avoid manual spec/path logic.

`load_dashboard()` plus the `sys.path.insert()` and `spec_from_file_location` dance are more complex and brittle than needed; you can load the module via its package path and let Python handle resolution.

```diff
 import sys
 import os
-import importlib.util
+import importlib

-# Add parent directory to path
+# Ensure project root (containing the `cortex` package) is on sys.path
 sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))

 def load_dashboard():
-    """Load dashboard module"""
-    path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "cortex", "dashboard.py")
-    spec = importlib.util.spec_from_file_location("dashboard", path)
-    if spec is None or spec.loader is None:
-        raise ImportError("Failed to load dashboard module")
-    dashboard = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(dashboard)
-    return dashboard
+    """Load dashboard module via the installed/importable package."""
+    return importlib.import_module("cortex.dashboard")
```

cortex/dashboard.py (4)
200-208: Annotate `KEYWORDS` as a `ClassVar` to satisfy mutability/type-checking rules.

`ProcessLister.KEYWORDS` is a mutable class attribute and Ruff flagged it; marking it as `ClassVar[Set[str]]` clarifies intent and silences the warning.

```diff
-from typing import Dict, List, Optional, Tuple, Callable
+from typing import ClassVar, Dict, List, Optional, Set, Tuple, Callable

 class ProcessLister:
     """Lists running inference processes"""

-    KEYWORDS = {
+    KEYWORDS: ClassVar[Set[str]] = {
         'python', 'node', 'ollama', 'llama', 'bert',
         'gpt', 'transformers', 'inference', 'pytorch', 'tensorflow',
-        'cortex', 'cuda'
-    }
+        'cortex', 'cuda',
+    }
```

Also applies to: 11-11
384-395: Clean up unused loop variable in `_render_actions`.

The `action_type` loop variable is unused, which Ruff flagged; renaming it to `_action_type` (or dropping it) makes the intent explicit without behavior change.

```diff
-        for key, name, action_type in action_items:
+        for key, name, _action_type in action_items:
             actions.append(f"[cyan]{key}[/cyan] {name}")
```
165-197: Use `logger.exception` where you're already catching broad exceptions.

In `SystemMonitor.update_metrics`, `ProcessLister.update_processes`, and `monitor_loop()`, you catch `Exception` and log with `logger.error(f"... {e}")`. Using `logger.exception()` will capture stack traces and makes debugging runtime issues easier without changing behavior.

```diff
 def update_metrics(self):
     ...
-    except Exception as e:
-        logger.error(f"Metrics error: {e}")
+    except Exception:
+        logger.exception("Metrics error")

 def update_processes(self):
     ...
-    except Exception as e:
-        logger.error(f"Process listing error: {e}")
+    except Exception:
+        logger.exception("Process listing error")

 def monitor_loop():
     while self.running:
         try:
             self.monitor.update_metrics()
             self.lister.update_processes()
             ...
-        except Exception as e:
-            logger.error(f"Monitor error: {e}")
+        except Exception:
+            logger.exception("Monitor error")
         time.sleep(1.0)
```

Also applies to: 213-235, 769-781
423-488: Consider splitting `_render_progress_panel` and `run()` to reduce complexity.

Sonar is flagging both `_render_progress_panel` and `UIRenderer.run()` for cognitive complexity; both methods mix several responsibilities (formatting text, mapping states to titles, time/status handling, keyboard loop, and thread/terminal management).

Possible decompositions (no behavior change):

- For `_render_progress_panel`: extract helpers like `_format_progress_bar()`, `_format_doctor_results()`, `_format_libraries_section()`, and `_format_status_messages()` and assemble their returns.
- For `run()`: move `monitor_loop` to a private method, and/or extract `_render_loop(live)` that contains the keyboard read + `live.update()` logic.

This will make it easier to evolve the dashboard (e.g., add more operations/tabs) without fighting complex monolithic methods.

Also applies to: 754-809
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cortex/dashboard.py (1 hunks)
- requirements.txt (1 hunks)
- test/test_dashboard.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- requirements.txt
🧰 Additional context used
🧬 Code graph analysis (1)
test/test_dashboard.py (1)
cortex/dashboard.py (19)
SystemMonitor (120-197), update_metrics (165-197), get_metrics (160-163), ProcessLister (200-240), update_processes (213-235), get_processes (237-240), CommandHistory (243-278), get_history (275-278), add_command (269-273), UIRenderer (281-808), _render_header (327-341), _render_resources (343-357), _render_processes (359-368), _render_history (370-379), _render_actions (381-401), _render_footer (501-504), _render_screen (506-518), DashboardApp (811-844), main (847-850)
🪛 GitHub Check: SonarCloud Code Analysis
cortex/dashboard.py
[failure] 754-754: Refactor this function to reduce its Cognitive Complexity from 20 to the 15 allowed.
[failure] 423-423: Refactor this function to reduce its Cognitive Complexity from 19 to the 15 allowed.
🪛 Ruff (0.14.7)
test/test_dashboard.py
14-14: Avoid specifying long messages outside the exception class
(TRY003)
137-137: Do not catch blind exception: Exception
(BLE001)
cortex/dashboard.py
27-27: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
27-27: Avoid specifying long messages outside the exception class
(TRY003)
32-32: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
32-32: Avoid specifying long messages outside the exception class
(TRY003)
157-157: Do not catch blind exception: Exception
(BLE001)
182-182: Do not catch blind exception: Exception
(BLE001)
196-196: Do not catch blind exception: Exception
(BLE001)
197-197: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
203-207: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
234-234: Do not catch blind exception: Exception
(BLE001)
235-235: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
266-266: Do not catch blind exception: Exception
(BLE001)
392-392: Loop control variable action_type not used within loop body
Rename unused action_type to _action_type
(B007)
750-750: Do not catch blind exception: Exception
(BLE001)
765-765: Do not catch blind exception: Exception
(BLE001)
779-779: Do not catch blind exception: Exception
(BLE001)
780-780: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
807-807: Do not catch blind exception: Exception
(BLE001)
840-840: Do not catch blind exception: Exception
(BLE001)
Actionable comments posted: 1
♻️ Duplicate comments (1)
requirements-dev.txt (1)
20-21: Address duplication of psutil and pynvml across requirements files.

Lines 20-21 add `psutil>=5.0.0` and `pynvml>=11.0.0` to development dependencies. A prior review flagged that these dependencies are duplicated in both `requirements.txt` and `requirements-dev.txt`, creating a maintenance burden. Clarify the intended scope:

- If these are runtime dependencies (as they likely are for the dashboard feature), they should reside only in `requirements.txt` and be referenced in `requirements-dev.txt` via `-r requirements.txt`.
- If these are development-only dependencies (e.g., for testing or mocking), they should be removed from `requirements.txt` and kept only here.

Run the following script to confirm whether these dependencies also exist in `requirements.txt`:

```shell
#!/bin/bash
# Check for psutil and pynvml in both requirements files
echo "=== Checking requirements.txt ==="
rg -n 'psutil|pynvml' requirements.txt || echo "Not found in requirements.txt"
echo ""
echo "=== Checking requirements-dev.txt ==="
rg -n 'psutil|pynvml' requirements-dev.txt || echo "Not found in requirements-dev.txt"
```
@mikejmorgan-ai This PR is also ready for merge. All checks and tests pass.
Merge Conflicts - Action Required

Hey @Sahilbhatane! This PR has merge conflicts with the main branch. Please resolve the conflicts against main.

Once resolved, we'll review for merge!
33a898f to 9ec599f
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_interpreter.py (1)
179-184: Remove redundant mock_cache setup.

There's an inconsistency in this test: a `mock_cache` is created and configured (lines 179-180), passed during initialization (line 182), but then immediately nullified (line 184). This makes the mock_cache setup redundant. If the intent is to test without caching (which `interpreter.cache = None` suggests), simplify by removing the mock_cache setup and not passing the cache parameter during initialization:

🔎 Proposed refactor to remove redundant mock_cache

```diff
-        mock_cache = Mock()
-        mock_cache.get_commands.return_value = None
-
-        interpreter = CommandInterpreter(api_key=self.api_key, provider="openai", cache=mock_cache)
+        interpreter = CommandInterpreter(api_key=self.api_key, provider="openai")
         interpreter.client = mock_client
         interpreter.cache = None
```

Alternatively, if you want to test with a cache that returns None (cache miss), remove line 184 instead.
♻️ Duplicate comments (13)
cortex/dashboard.py (10)
8-8: Remove unused imports.

The following imports are not used in the code:

- Line 8: `queue`
- Line 13: `Callable` from `collections.abc`
- Line 24: `Table` from `rich.table`
26-27: Improve error messages for missing dependencies.

The error messages include raw exception details that may confuse users. Provide clearer, actionable guidance instead.
Also applies to: 29-32
173-173: Blocking CPU measurement adds unnecessary latency.

`psutil.cpu_percent(interval=0.1)` blocks for 100ms on every call. Use `interval=None` for non-blocking calls after the first measurement to improve responsiveness.
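As an illustration, the prime-once pattern this comment describes can be sketched like so. The helper name and module-level flag are hypothetical (in the dashboard the flag would be instance state on `SystemMonitor`), and `psutil` is treated as optional so the sketch degrades gracefully without it:

```python
# psutil is third-party and optional in this sketch.
try:
    import psutil
except ImportError:
    psutil = None

_cpu_primed = False  # illustrative module-level flag; real code would use instance state


def cpu_percent_nonblocking() -> float:
    """Prime psutil's CPU counters once (blocking ~0.1s), then read without blocking."""
    global _cpu_primed
    if psutil is None:
        return 0.0  # degrade gracefully when psutil is unavailable
    if not _cpu_primed:
        _cpu_primed = True
        return psutil.cpu_percent(interval=0.1)  # first call establishes a baseline
    return psutil.cpu_percent(interval=None)  # subsequent calls return immediately
```

Only the first call pays the 100ms cost; every later call in the 1Hz monitor loop returns immediately.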
276-284: Shell history loading silently ignores decoding errors.

Using `errors='ignore'` makes the code robust but could mask corrupted or malicious data. Consider logging when decoding errors occur.
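A stdlib sketch of history loading that logs decode problems instead of silently dropping bytes (the function name is illustrative, not the dashboard's actual API):

```python
import logging

logger = logging.getLogger(__name__)


def read_history_lines(path: str) -> list[str]:
    """Read a shell history file tolerantly, logging (not hiding) decode problems."""
    try:
        with open(path, "rb") as f:
            raw = f.read()
    except OSError:
        return []  # a missing or unreadable history file is not an error for a dashboard
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        logger.warning("History file %s contains non-UTF-8 bytes; replacing them", path)
        text = raw.decode("utf-8", errors="replace")
    return [line for line in text.splitlines() if line.strip()]
```

The behavior is the same as `errors='ignore'` for well-formed files, but corrupted input now leaves a trace in the logs.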
604-641: Potential race conditions on operation flags.

The `bench_running`, `doctor_running`, and installation progress state are accessed from multiple threads without synchronization. While the GIL provides some protection, consider using `threading.Lock` or `threading.Event` for more robust thread safety.
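A minimal sketch of the lock-based check-and-set this suggests (the class and method names are hypothetical, not from the dashboard code):

```python
import threading


class OperationFlags:
    """Thread-safe check-and-set for a single 'operation running' flag."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._running = False

    def try_start(self) -> bool:
        """Atomically claim the flag; False means an operation is already in flight."""
        with self._lock:
            if self._running:
                return False
            self._running = True
            return True

    def finish(self) -> None:
        with self._lock:
            self._running = False
```

Because the check and the set happen under one lock, two threads can never both believe they started the same operation.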
814-816: Windows keyboard input needs error handling.

`msvcrt.getch().decode('utf-8')` can raise `UnicodeDecodeError` on non-UTF-8 bytes. Already using `errors='ignore'` on line 815, which is good. This handles the issue.
330-330: Consider extracting magic numbers as named constants.

Multiple magic numbers throughout the code would benefit from being extracted as named constants for maintainability. While `MAX_LIBRARIES_DISPLAY = 5` is already a good constant (line 301), similar treatment would help with:
- UI dimensions (bar width: 20, truncation lengths)
- Thresholds (75%, 50% for color coding)
- Timing values (sleep durations, refresh rates)
This was flagged in past reviews and remains a recommended refactor for improved maintainability.
Also applies to: 337-340, 382-382, 393-393, 453-454, 586-586, 633-633, 692-692, 790-790, 876-876, 886-886, 921-921
160-161: GPU library should be shut down on cleanup.

`pynvml.nvmlInit()` is called but `pynvml.nvmlShutdown()` is never called, causing a resource leak. Add a shutdown method and call it from cleanup paths.
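One possible shape for that cleanup, treating `pynvml` as optional and making shutdown idempotent. The `GPUMonitor` class here is illustrative, not the dashboard's actual `SystemMonitor`:

```python
class GPUMonitor:
    """Initialize NVML if available and guarantee a matching shutdown."""

    def __init__(self) -> None:
        self.gpu_available = False
        self._pynvml = None
        try:
            import pynvml  # third-party, optional

            pynvml.nvmlInit()
            self._pynvml = pynvml
            self.gpu_available = True
        except Exception:
            pass  # no NVIDIA GPU or library missing: degrade gracefully

    def shutdown(self) -> None:
        """Idempotent cleanup; safe to call from any exit path."""
        if self._pynvml is not None:
            try:
                self._pynvml.nvmlShutdown()
            except Exception:
                pass
            self._pynvml = None
            self.gpu_available = False
```

Calling `shutdown()` from every exit path (including the `finally` block of the app's `run()`) ensures the NVML handle is released exactly once.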
621-641: Fix cancellation to preserve FAILED state.

Background threads in `run_bench()` and `run_doctor()` unconditionally set state to `COMPLETED` even when the user cancels. This overwrites the `FAILED` state set by `_cancel_operation()`. The threads should check if state is `FAILED` before setting it to `COMPLETED`.

Also applies to: 659-707
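The suggested guard can be sketched as follows (the enum values mirror names used in the review; the `Progress` stand-in and `finish_operation` helper are hypothetical):

```python
from enum import Enum


class InstallationState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


class Progress:
    """Minimal stand-in for the dashboard's installation_progress object."""

    def __init__(self, state: InstallationState) -> None:
        self.state = state


def finish_operation(progress: Progress) -> None:
    """Mark success only if the user has not already cancelled (FAILED wins)."""
    if progress.state is InstallationState.FAILED:
        return  # cancellation set by the cancel path; do not overwrite it
    progress.state = InstallationState.COMPLETED
```

The background thread calls this at the end of its work; a prior cancellation keeps the operation displayed as failed instead of "Completed".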
670-670: Hardcoded disk path won't work on Windows.

The disk usage check uses hardcoded `/` which doesn't exist on Windows. Use a platform-aware path like `'/' if platform.system() != 'Windows' else 'C:\\'` or wrap in try-except.

tests/test_dashboard.py (2)
87-89: Weak assertion doesn't verify rendering correctness.

The `all()` check on a list of objects only verifies they are truthy, not that they rendered correctly. Rich Panel objects are always truthy, so this assertion will pass even if rendering fails.

🔎 Suggested improvement

```diff
-        assert all(
-            [header, resources, processes, hist, actions, footer, screen]
-        ), "All components should render"
+        # Verify components are Panel/Group types and have content
+        from rich.panel import Panel
+        from rich.console import Group
+
+        assert isinstance(header, Panel), "Header should be a Panel"
+        assert isinstance(resources, Panel), "Resources should be a Panel"
+        assert isinstance(processes, Panel), "Processes should be a Panel"
+        assert isinstance(hist, Panel), "History should be a Panel"
+        assert isinstance(actions, Panel), "Actions should be a Panel"
+        assert isinstance(footer, Panel), "Footer should be a Panel"
+        assert isinstance(screen, Group), "Screen should be a Group"
```
101-113: Missing test coverage for core interactive features.

Based on learnings, tests should maintain >80% coverage. The following core features from the PR description lack test coverage:

- Keyboard input handling (`_handle_key_press`, Tab/arrow keys, 1-4 action keys)
- Tab switching between HOME and PROGRESS
- Installation workflow (`_start_installation`, `_submit_installation_input`)
- Benchmark execution (`_start_bench`)
- Doctor checks (`_start_doctor`)
- Operation cancellation (`_cancel_operation`)

Consider adding integration tests for these workflows to meet the coverage requirement.
docs/DASHBOARD_IMPLEMENTATION.md (1)
104-104: Update test suite line count for accuracy.

The documentation states "200+ lines" but `tests/test_dashboard.py` is approximately 151 lines. Consider updating to reflect the actual size or removing the line count to avoid future drift.
🧹 Nitpick comments (6)
cortex/hardware_detection.py (2)
316-317: Overly broad exception suppression.

Using `contextlib.suppress(builtins.BaseException)` will swallow `KeyboardInterrupt` and `SystemExit`. Consider using `Exception` for consistency with other exception handling in this file.

🔎 Proposed fix

```diff
-        with contextlib.suppress(builtins.BaseException):
+        with contextlib.suppress(Exception):
             info.kernel_version = self._uname().release
```
410-434: Method has side effects that may be surprising.

`_parse_lspci_gpu_line` both returns a `GPUInfo` object and modifies `info.has_nvidia_gpu`/`info.has_amd_gpu` as a side effect. Consider returning a tuple `(GPUInfo, vendor_flags)` or setting these flags in the caller based on the returned `GPUInfo.vendor`.

Also, the quoted type hint `"GPUInfo | None"` is unnecessary since `GPUInfo` is defined earlier in the file.

🔎 Alternative: set flags in caller based on returned vendor

```diff
-    def _parse_lspci_gpu_line(self, line: str, info: SystemInfo) -> "GPUInfo | None":
+    def _parse_lspci_gpu_line(self, line: str) -> GPUInfo | None:
         """Parse a single `lspci -nn` line into a GPUInfo if it looks like a GPU entry."""
         if "VGA" not in line and "3D" not in line and "Display" not in line:
             return None
         gpu = GPUInfo()
         pci_match = re.search(r"\[([0-9a-fA-F]{4}:[0-9a-fA-F]{4})\]", line)
         if pci_match:
             gpu.pci_id = pci_match.group(1)
         upper = line.upper()
         if "NVIDIA" in upper:
             gpu.vendor = GPUVendor.NVIDIA
-            info.has_nvidia_gpu = True
             gpu.model = self._extract_gpu_model(line, "NVIDIA")
         elif "AMD" in upper or "ATI" in upper:
             gpu.vendor = GPUVendor.AMD
-            info.has_amd_gpu = True
             gpu.model = self._extract_gpu_model(line, "AMD")
         elif "INTEL" in upper:
             gpu.vendor = GPUVendor.INTEL
             gpu.model = self._extract_gpu_model(line, "Intel")
         return gpu
```

Then in the caller:

```python
parsed = self._parse_lspci_gpu_line(line)
if parsed is not None:
    info.gpu.append(parsed)
    if parsed.vendor == GPUVendor.NVIDIA:
        info.has_nvidia_gpu = True
    elif parsed.vendor == GPUVendor.AMD:
        info.has_amd_gpu = True
```

cortex/config_manager.py (1)
460-463: Consider distinguishing between missing version and malformed data.

The current logic treats both missing versions and non-string versions as upgrade requests. While this is defensive, it may silently accept malformed package entries. Consider validating the version type:

🔎 Suggested refactor

```diff
+        # Validate version field type
+        if version is not None and not isinstance(version, str):
+            return "skip", None
+
         # If the config doesn't specify a version, treat it as an upgrade/install request.
-        if not isinstance(version, str) or not version:
+        if not version:
             return "upgrade", {**pkg, "current_version": current_version}
```
9-17: Consider using standard imports instead of dynamic loading.

Dynamic module loading bypasses Python's normal import system, which can cause issues with:
- IDE auto-completion and type checking
- Test runner discovery
- Code coverage measurement
- Import caching and module lifecycle
🔎 Suggested refactor
```diff
-def load_dashboard():
-    """Load dashboard module"""
-    path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "cortex", "dashboard.py")
-    spec = importlib.util.spec_from_file_location("dashboard", path)
-    if spec is None or spec.loader is None:
-        raise ImportError("Failed to load dashboard module")
-    dashboard = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(dashboard)
-    return dashboard
+# At top of file:
+from cortex import dashboard
```

Then replace `dashboard = load_dashboard()` with just using the `dashboard` module directly.

docs/DASHBOARD_IMPLEMENTATION.md (1)
731-733: Format URLs as markdown links for better readability.

The bare URLs should be formatted as markdown links per static analysis hints.

🔎 Suggested formatting

```diff
-- **Rich Library:** https://rich.readthedocs.io/
-- **psutil:** https://psutil.readthedocs.io/
-- **NVIDIA NVML (pynvml):** https://docs.nvidia.com/cuda/nvml-api/
+- **Rich Library:** [https://rich.readthedocs.io/](https://rich.readthedocs.io/)
+- **psutil:** [https://psutil.readthedocs.io/](https://psutil.readthedocs.io/)
+- **NVIDIA NVML (pynvml):** [https://docs.nvidia.com/cuda/nvml-api/](https://docs.nvidia.com/cuda/nvml-api/)
```

cortex/dashboard.py (1)
604-617: Non-atomic state checks in operation start methods.

The pattern `if not self.bench_running and self.installation_progress.state not in [...]` followed by `self.bench_running = True` is not atomic. Multiple concurrent calls could theoretically pass the check simultaneously, though in practice this is unlikely for user-initiated UI actions.

For more robust thread safety, consider using a lock around the check-and-set operations, or verify that the UI guarantees single-threaded access to these methods.
Also applies to: 643-656, 737-751
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- cortex/cli.py (3 hunks)
- cortex/config_manager.py (4 hunks)
- cortex/dashboard.py (1 hunks)
- cortex/hardware_detection.py (13 hunks)
- docs/DASHBOARD_IMPLEMENTATION.md (1 hunks)
- requirements-dev.txt (1 hunks)
- requirements.txt (1 hunks)
- tests/test_cli.py (1 hunks)
- tests/test_dashboard.py (1 hunks)
- tests/test_interpreter.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- requirements-dev.txt
- cortex/cli.py
- requirements.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_dashboard.py, cortex/config_manager.py, tests/test_cli.py, tests/test_interpreter.py, cortex/hardware_detection.py, cortex/dashboard.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_dashboard.py, tests/test_cli.py, tests/test_interpreter.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to tests/**/*.py : Maintain >80% test coverage for pull requests
Applied to files:
tests/test_dashboard.py
🧬 Code graph analysis (4)
tests/test_dashboard.py (1)
cortex/dashboard.py (19)
SystemMonitor (128-202), update_metrics (170-202), get_metrics (165-168), ProcessLister (205-257), update_processes (227-252), get_processes (254-257), CommandHistory (260-295), get_history (292-295), add_command (286-290), UIRenderer (298-893), _render_header (344-358), _render_resources (360-374), _render_processes (376-385), _render_history (387-396), _render_actions (398-420), _render_footer (543-548), _render_screen (550-559), DashboardApp (896-931), main (934-937)
tests/test_cli.py (1)
cortex/cli.py (3)
_print_status (76-85), _print_error (87-88), _print_success (90-91)
tests/test_interpreter.py (1)
tests/test_graceful_degradation.py (1)
cache(28-31)
cortex/dashboard.py (3)
cortex/progress_indicators.py (1)
total_steps (99-100)
cortex/progress_tracker.py (1)
elapsed_time (77-82)
tests/test_dashboard.py (1)
main(116-147)
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (13)
cortex/hardware_detection.py (6)
15-17: LGTM! Standard library imports for `platform` and `shutil` are appropriate additions for the cross-platform fallbacks implemented below.

328-336: LGTM! The exception handling changes from bare `except:` to `except Exception:` are appropriate improvements that follow Python best practices.

341-375: LGTM! Good refactor to cache the `_uname()` result in a local variable for reuse. The `.machine` attribute is available in both `os.uname()` and `platform.uname()` return values.

571-579: LGTM! Exception handling improvements are consistent with the rest of the file.

628-640: LGTM! Good cross-platform enhancement. The `os.statvfs` check with `shutil.disk_usage` fallback ensures this works on platforms where `statvfs` isn't available. The implementation correctly handles both paths.
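For reference, the same statvfs-or-fallback idea can be collapsed into a purely stdlib, cross-platform sketch (the function name is illustrative, not the one in `hardware_detection.py`):

```python
import platform
import shutil


def disk_usage_root():
    """Return (total, used, free) bytes for the filesystem root, cross-platform."""
    # os.path.abspath(os.sep) is an alternative that avoids the explicit branch.
    root = "C:\\" if platform.system() == "Windows" else "/"
    return shutil.disk_usage(root)
```

`shutil.disk_usage` works on every platform Python supports, so no `statvfs` availability check is needed.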
596-597: LGTM! Consistent exception handling improvement.
cortex/config_manager.py (3)
75-77: LGTM: Clear documentation of platform-specific behavior.

The updated docstring clearly explains why ownership enforcement is skipped on non-POSIX platforms and restricted runtimes, improving maintainability.

328-328: LGTM: Explicit type annotation improves clarity.

The explicit `dict[str, Any]` type hint makes the variable's type clear for both type checkers and developers.

811-812: LGTM: Defensive guard prevents errors when sandbox unavailable.

The fallback to `_install_direct` when `sandbox_executor` is `None` prevents AttributeError and ensures installation can proceed without sandboxing.

tests/test_cli.py (1)

47-60: LGTM: Tests correctly updated for centralized printing API.

The tests now verify that `cx_print` is called with the correct message and status tags, properly aligning with the production code's use of the centralized printing helper.

cortex/dashboard.py (3)
201-202: Error handling allows graceful degradation.

The generic exception handlers in monitoring loops appropriately prevent crashes while logging errors. This design allows the dashboard to continue operating even when individual metric collections fail.
Also applies to: 252-252, 857-858
825-845: Terminal handling is robust with proper cleanup.

The terminal setup/restore logic properly:
- Saves original settings before modification
- Handles platforms that don't support terminal control
- Logs failures for debugging
- Ensures cleanup in finally block
Also applies to: 888-893
641-641: Daemon threads ensure clean shutdown.

All background threads properly use `daemon=True`, ensuring they don't prevent the dashboard from exiting when the user quits.

Also applies to: 707-707, 799-799, 862-862
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/hardware_detection.py (1)
626-638: Replace bare `except:` with `except Exception:`.

Line 637 uses a bare `except:` clause, which catches `SystemExit` and `KeyboardInterrupt` unexpectedly. This is inconsistent with the other exception handling changes in this file.

🔎 Proposed fix

```diff
     def _get_disk_free_gb(self) -> float:
         """Quick disk free space on root."""
         try:
             statvfs_fn = getattr(os, "statvfs", None)
             if callable(statvfs_fn):
                 statvfs = statvfs_fn("/")
                 return round((statvfs.f_frsize * statvfs.f_bavail) / (1024**3), 1)
             root_path = os.path.abspath(os.sep)
             _total, _used, free = shutil.disk_usage(root_path)
             return round(free / (1024**3), 1)
-        except:
+        except Exception:
             return 0.0
```

cortex/cli.py (1)
790-804: Add `dashboard` command to help table.

The `dashboard` command is missing from the Rich help output table. Users won't discover this feature from `cortex` help.

🔎 Proposed fix

```diff
         table.add_row("demo", "See Cortex in action")
         table.add_row("wizard", "Configure API key")
         table.add_row("status", "System status")
+        table.add_row("dashboard", "Real-time system monitoring")
         table.add_row("install <pkg>", "Install software")
         table.add_row("history", "View history")
```
♻️ Duplicate comments (1)
cortex/hardware_detection.py (1)
196-201: Attribute mismatch between `os.uname()` and `platform.uname()` still exists.

The previous review flagged that `os.uname()` returns `.nodename` while `platform.uname()` uses `.node`. This will cause an `AttributeError` on systems where `os.uname` is unavailable (e.g., Windows). The suggested fix using a wrapper namedtuple should be applied.
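Such a wrapper could look like this sketch, normalizing both APIs to `os.uname()`-style attribute names (the helper name is hypothetical):

```python
import os
import platform
from typing import NamedTuple


class UnameResult(NamedTuple):
    sysname: str
    nodename: str
    release: str
    version: str
    machine: str


def uname_compat() -> UnameResult:
    """Return uname fields under os.uname()-style names on every platform."""
    uname_fn = getattr(os, "uname", None)
    if callable(uname_fn):
        u = uname_fn()
        return UnameResult(u.sysname, u.nodename, u.release, u.version, u.machine)
    p = platform.uname()  # platform.uname() uses .system/.node instead
    return UnameResult(p.system, p.node, p.release, p.version, p.machine)
```

Callers can then access `.nodename` unconditionally, regardless of which underlying API was used.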
🧹 Nitpick comments (1)
cortex/cli.py (1)
851-854: Consider removing unused variable assignment.

The `dashboard_parser` variable is assigned but never used since no additional arguments are configured for this subcommand. This is a minor style issue.

🔎 Proposed fix

```diff
     # Dashboard command
-    dashboard_parser = subparsers.add_parser(
-        "dashboard", help="Real-time system monitoring dashboard"
-    )
+    subparsers.add_parser("dashboard", help="Real-time system monitoring dashboard")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/cli.py
- cortex/hardware_detection.py
- tests/test_cli.py
- tests/test_interpreter.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_interpreter.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_cli.py, cortex/hardware_detection.py, cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (1)
cortex/cli.py (3)
_print_status (80-89), _print_error (91-92), _print_success (94-95)
cortex/cli.py (2)
cortex/branding.py (2)
show_banner (35-46), cx_print (49-69)
cortex/dashboard.py (3)
DashboardApp (896-931), run (866-893), run (915-931)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 750-750: Ruff: F811 Redefinition of unused demo from line 177.
🪛 GitHub Check: lint
cortex/cli.py
[failure] 750-750: Ruff (F811)
cortex/cli.py:750:9: F811 Redefinition of unused demo from line 177
🪛 GitHub Check: Lint
cortex/cli.py
[failure] 750-750: Ruff (F811)
cortex/cli.py:750:9: F811 Redefinition of unused demo from line 177
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (5)
cortex/hardware_detection.py (2)
408-432: Well-structured GPU parsing helper.

The new `_parse_lspci_gpu_line` method cleanly extracts GPU parsing logic into a testable, reusable unit. The vendor detection and PCI ID extraction look correct.

312-312: Exception handling improvements look good.

Replacing bare `except:` clauses with `except Exception:` is a good practice - it avoids catching `SystemExit`, `KeyboardInterrupt`, and `GeneratorExit` unintentionally.

Also applies to: 327-327, 334-334, 444-444, 569-569, 576-576, 594-594, 614-614, 623-623
tests/test_cli.py (1)
47-60: Tests correctly updated to verify `cx_print` integration.

The test patches align with the implementation in `cortex/cli.py` where `_print_status`, `_print_error`, and `_print_success` now delegate to `cx_print`. The assertions match the expected arguments.

cortex/cli.py (2)

758-775: Dashboard method implementation looks reasonable.

The error handling covers the key cases: missing dependencies, user interrupt, and unexpected errors. The user guidance for installing `psutil pynvml` on `ImportError` is helpful.

One minor note: the docstring could mention the required dependencies for discoverability.

943-944: Dashboard command routing is correctly wired.

The dispatch logic follows the established pattern for other CLI commands.
Force-pushed from ab01a21 to 84c1be7 (compare)
Actionable comments posted: 0
♻️ Duplicate comments (8)
cortex/hardware_detection.py (1)
198-203: Attribute mismatch will cause AttributeError on non-Unix systems.

The `_uname()` helper returns `os.uname()` (which has `.nodename`) or `platform.uname()` (which has `.node`), but Line 321 accesses `.nodename`. This will fail with `AttributeError` when `os.uname` is unavailable.

This issue was previously flagged and remains unresolved. The suggested fix using a wrapper namedtuple would normalize the attributes across platforms.
cortex/dashboard.py (5)
155-163: Add NVML cleanup to prevent resource leak.

The GPU monitoring library is initialized with `pynvml.nvmlInit()` but never shut down with `pynvml.nvmlShutdown()`. While not critical, adding cleanup is good practice.

Consider adding a shutdown method to `SystemMonitor` and calling it from `DashboardApp` cleanup (around line 930).
604-641: Consider thread synchronization for shared state flags.

The flags `bench_running`, `doctor_running`, and `installation_progress` are accessed from both the main thread and background threads without explicit synchronization. While Python's GIL provides some protection, state transitions could be inconsistent.

For a UI dashboard, this is likely acceptable. If you observe state inconsistencies, consider using `threading.Lock` or `threading.Event` for these flags.

Also applies to: 643-707
26-27: Improve error messages for missing dependencies.

The import error messages include raw exception details that may confuse users. Consider providing clearer, actionable guidance.

🔎 Proposed improvement

```diff
 except ImportError as e:
-    raise ImportError(f"rich library required: {e}. Install with: pip install rich")
+    print("Error: The 'rich' library is required but not installed.", file=sys.stderr)
+    print("Please install it with: pip install rich>=13.0.0", file=sys.stderr)
+    sys.exit(1)
```

Apply similar changes for the psutil import error.
Also applies to: 31-32
621-641: Fix cancellation logic to preserve FAILED state.

The background threads in `run_bench()` and `run_doctor()` unconditionally set `installation_progress.state = COMPLETED` even when the user cancels (which sets state to `FAILED`). This causes cancelled operations to display as "Completed."

Before setting state to `COMPLETED`, check if `installation_progress.state == InstallationState.FAILED`. If so, skip updating the state and success message, keeping the cancellation as the source of truth.

Also applies to: 659-707
669-671: Use platform-aware disk path for cross-platform compatibility.

The hardcoded path `"/"` for disk usage checks may not exist on Windows systems (typically `"C:\\"`).

Consider using `os.path.abspath(os.sep)` or checking `platform.system()` to determine the appropriate root path, or wrap in a try-except to handle gracefully.

requirements-dev.txt (1)
12-14: Verify dependency duplication between requirements files.

The dependencies `psutil>=5.0.0` and `nvidia-ml-py>=12.0.0` may be duplicated in both `requirements.txt` and `requirements-dev.txt`, creating a maintenance burden. Typically, development dependencies should reference the main requirements via `-r requirements.txt`, or these should be moved exclusively to one file.

Run the following script to verify if these dependencies are duplicated:

```bash
#!/bin/bash
# Check if psutil and nvidia-ml-py appear in both requirements files
echo "=== Checking requirements.txt ==="
grep -i "psutil\|nvidia-ml-py\|pynvml" requirements.txt
echo ""
echo "=== Checking requirements-dev.txt ==="
grep -i "psutil\|nvidia-ml-py\|pynvml" requirements-dev.txt
```

docs/DASHBOARD_IMPLEMENTATION.md (1)
104-104: Update test suite line count for accuracy.

The test suite is described as "200+ lines" but the actual file `tests/test_dashboard.py` is approximately 148 lines. Consider updating for accuracy or removing the line count to avoid future drift.
🧹 Nitpick comments (2)
cortex/dashboard.py (2)
173-173: Extract magic numbers as named constants for better maintainability.

Multiple hardcoded values appear throughout the code that would benefit from being defined as class constants:
- UI dimensions: bar width (20), process name length (20), max processes (8), command length (50/60), history count (5)
- Thresholds: resource usage levels (75%, 50%)
- Timing: CPU interval (0.1s), step delays (0.5-0.8s), monitor loop (1.0s), UI refresh (0.1s), startup delay (1s)
- Limits: max input length (50), max libraries display (5), max processes (15)
Consider defining these as class-level constants (e.g., `BAR_WIDTH = 20`, `WARNING_THRESHOLD = 50`, `MONITOR_INTERVAL = 1.0`) to make the code more maintainable and document the purpose of these values.

Also applies to: 330-330, 337-340, 382-382, 393-393, 453-453, 586-586, 633-633, 692-692, 790-790, 859-859, 876-876, 886-886, 921-921
173-173: Consider non-blocking CPU percentage calls after initialization.

`psutil.cpu_percent(interval=0.1)` blocks for 0.1 seconds on each call. Since metrics are updated at 1Hz, this adds ~10% overhead. After the first call, using `interval=None` would provide non-blocking updates.

🔎 Proposed optimization

```diff
     def update_metrics(self):
         """Update all metrics"""
         try:
+            # First call initializes, subsequent calls with interval=None are non-blocking
+            if not hasattr(self, '_cpu_initialized'):
+                psutil.cpu_percent(interval=0.1)
+                self._cpu_initialized = True
+
-            cpu_percent = psutil.cpu_percent(interval=0.1)
+            cpu_percent = psutil.cpu_percent(interval=None)
             vm = psutil.virtual_memory()
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- cortex/cli.py
- cortex/config_manager.py
- cortex/dashboard.py
- cortex/hardware_detection.py
- docs/DASHBOARD_IMPLEMENTATION.md
- requirements-dev.txt
- requirements.txt
- tests/test_cli.py
- tests/test_dashboard.py
- tests/test_interpreter.py
🚧 Files skipped from review as they are similar to previous changes (5)
- cortex/config_manager.py
- tests/test_dashboard.py
- cortex/cli.py
- tests/test_interpreter.py
- requirements.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_cli.py, cortex/hardware_detection.py, cortex/dashboard.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (2)
cortex/cli.py (3)
_print_status (74-83), _print_error (85-86), _print_success (88-89)
cortex/first_run_wizard.py (1)
_print_error(746-748)
cortex/dashboard.py (3)
cortex/progress_indicators.py (1)
total_steps (99-100)
cortex/progress_tracker.py (1)
elapsed_time (78-83)
tests/test_dashboard.py (1)
main(116-147)
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (4)
tests/test_cli.py (1)
46-59: LGTM! Test harness properly updated to use cx_print API.

The test patches correctly target the new `cortex.cli.cx_print` function and verify the expected arguments. This aligns with the CLI refactoring to use Rich-based printing.
322-322: LGTM! Exception handling made more explicit.

Replacing bare `except:` clauses with `except Exception:` is a best practice that improves code clarity and avoids catching system-exit exceptions like `SystemExit` and `KeyboardInterrupt`.

Also applies to: 338-338, 345-345, 395-395, 456-456, 581-581, 588-588, 606-606, 626-626, 635-635, 649-649
420-444: LGTM! Good refactoring to consolidate GPU parsing logic.

The new `_parse_lspci_gpu_line` helper properly separates GPU line parsing from the main detection flow. The logic correctly filters GPU-related lines, extracts PCI IDs, and handles multiple vendors (NVIDIA, AMD/ATI, Intel).
8-8: Revise unused imports list - several items don't exist in the file.

The file does contain unused imports that should be removed:

- `queue` (line 8)
- `Callable` from collections.abc (line 13)
- `Table` from rich.table (line 24)

However, the following imports mentioned in the original comment do not exist in this file and should not be listed:

- `Tuple` (no Tuple import exists on line 13 or elsewhere)
- `Layout`, `Progress`, `BarColumn`, `TextColumn`, `DownloadColumn` (none of these are imported on lines 19-24)

The actual imports on lines 19-25 are: `ROUNDED`, `Columns`, `Console`, `Group`, `Live`, `Panel`, `Table`, and `Text`.

Likely an incorrect or invalid review comment.
@Sahilbhatane Resolve Conflicts & update your branch.
CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:
How to Sign
Verified Signers
This check runs automatically. Maintainers can update
Actionable comments posted: 1
🧹 Nitpick comments (1)
cortex/cli.py (1)
1643-1646: Remove unused variable assignment. The dashboard_parser variable is assigned but never used. Similar simple commands like demo and wizard show the same pattern (demo_parser at line 1641 is also unused). Consider not storing the parser when no additional configuration is needed.

🔎 Proposed fix

- # Dashboard command
- dashboard_parser = subparsers.add_parser(
-     "dashboard", help="Real-time system monitoring dashboard"
- )
+ # Dashboard command
+ subparsers.add_parser("dashboard", help="Real-time system monitoring dashboard")
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- tests/test_cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_cli.py
🧬 Code graph analysis (2)
cortex/cli.py (3)
cortex/dashboard.py (3)
DashboardApp (896-931), run (866-893), run (915-931)

cortex/first_run_wizard.py (2)

run (169-228), _print_error (746-748)

cortex/branding.py (1)

cx_print (49-69)
tests/test_cli.py (1)
cortex/cli.py (3)
_print_status (79-88), _print_error (90-91), _print_success (93-94)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (3)
tests/test_cli.py (1)
46-59: LGTM! Test updates align correctly with the new cx_print API. The test updates correctly mock the new cx_print function and verify that the CLI methods (_print_status, _print_error, _print_success) invoke it with the expected arguments. The assertions match the implementation in cortex/cli.py.

cortex/cli.py (2)

1291-1307: Comprehensive error handling with user-friendly messages. The implementation correctly handles optional dependencies with lazy imports, provides clear installation guidance for missing packages, and gracefully handles interrupts. The error handling pattern is consistent with the rest of the codebase.

1897-1898: Correct routing for the dashboard command. The command routing correctly dispatches to cli.dashboard() when the user runs cortex dashboard.
Force-pushed from b21d1a8 to c171d8f
Anshgrover23 left a comment
Where is Loaded models?
@Sahilbhatane Added some comments.
cortex/dashboard.py
Outdated
"""Lists running inference processes"""
@Sahilbhatane Address this one.
cortex/dashboard.py
Outdated
continue

with self.lock:
    self.processes = processes[:15]
@Sahilbhatane Use constants, don't use magic numbers.
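A hypothetical refactor along the lines the reviewer suggests would hoist the magic number 15 into a named module-level constant; MAX_PROCESS_ROWS and ProcessListerSketch below are assumed names, not code from the PR.

```python
# Sketch: replace the magic number 15 with a named constant.
# MAX_PROCESS_ROWS and ProcessListerSketch are illustrative names.
import threading

MAX_PROCESS_ROWS = 15  # cap how many processes the dashboard panel shows

class ProcessListerSketch:
    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.processes: list[str] = []

    def update(self, processes: list[str]) -> None:
        # Truncate under the lock so readers never observe a partial update
        with self.lock:
            self.processes = processes[:MAX_PROCESS_ROWS]

lister = ProcessListerSketch()
lister.update([f"proc-{i}" for i in range(40)])
print(len(lister.processes))  # 15
```

Naming the limit makes the intent searchable and gives a single place to change the panel size later.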
cortex/dashboard.py
Outdated
self.bench_status = "Ready to run benchmark"
self.bench_running = False

def _create_bar(self, label: str, percent: float, width: int = 20) -> str:
@Sahilbhatane Never use magic Numbers.
bar = "[green]" + "█" * filled + "[/green]" + "░" * (width - filled)
if percent > 75:
    bar = "[red]" + "█" * filled + "[/red]" + "░" * (width - filled)
elif percent > 50:
    bar = "[yellow]" + "█" * filled + "[/yellow]" + "░" * (width - filled)
Same as above.
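The same bar-rendering logic can be sketched with the thresholds named, as the reviewer asks; RED_THRESHOLD, YELLOW_THRESHOLD, and render_bar below are assumed names, not the PR's code.

```python
# Sketch: the dashboard bar logic with named thresholds instead of
# the magic numbers 75 and 50. Names here are illustrative.
RED_THRESHOLD = 75.0     # percent above which the bar turns red
YELLOW_THRESHOLD = 50.0  # percent above which the bar turns yellow
BAR_WIDTH = 20

def render_bar(percent: float, width: int = BAR_WIDTH) -> str:
    filled = int(width * percent / 100)
    if percent > RED_THRESHOLD:
        color = "red"
    elif percent > YELLOW_THRESHOLD:
        color = "yellow"
    else:
        color = "green"
    # Rich markup tags around the filled segment, plain shade for the rest
    return f"[{color}]" + "█" * filled + f"[/{color}]" + "░" * (width - filled)

print(render_bar(80.0))  # red bar, 16 of 20 cells filled
```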
cortex/dashboard.py
Outdated
if not cmds:
    content = "[dim]No history[/dim]"
else:
    lines = [f" {c[:50]}" for c in reversed(list(cmds)[-5:])]
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
requirements.txt (1)
9-9: Remove duplicate PyYAML entries. PyYAML appears three times in the requirements file with conflicting version constraints:

- Line 9: PyYAML>=6.0.0
- Line 21: pyyaml>=6.0.0 (lowercase)
- Line 25: PyYAML==6.0.3 (exact version)

Pip treats these as the same package regardless of casing, but the conflicting constraints (>= vs ==) can cause installation issues or unpredictable version resolution. Consolidate to a single entry with a consistent version constraint.

🔎 Proposed fix

 # Configuration
-PyYAML>=6.0.0
+PyYAML>=6.0.0,<7.0.0

 # Environment variable loading from .env files
 python-dotenv>=1.0.0

 # Encryption for environment variable secrets
 cryptography>=42.0.0

 # Terminal UI
 rich>=13.0.0

-# Configuration
-pyyaml>=6.0.0
-
 # Type hints for older Python versions
 typing-extensions>=4.0.0
-PyYAML==6.0.3

 # System monitoring (for dashboard)
 psutil>=5.0.0
 nvidia-ml-py>=12.0.0

Also applies to: 21-21, 25-25
♻️ Duplicate comments (3)
cortex/hardware_detection.py (1)
198-203: Attribute mismatch between os.uname() and platform.uname() remains unresolved. As flagged in a previous review, os.uname() returns a result with .nodename, but platform.uname() uses .node. When os.uname is unavailable, calling .nodename at line 321 will cause an AttributeError. Please normalize the return value to ensure consistent attribute names across both code paths.

docs/DASHBOARD_IMPLEMENTATION.md (1)
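One way to normalize the os.uname()/platform.uname() mismatch flagged above is to wrap both sources in a small value object so callers always read the same attribute names; UnameInfo and get_uname below are a sketch, not the project's actual fix.

```python
# Sketch: normalize os.uname() (.nodename) and platform.uname() (.node)
# behind one small dataclass. Names here are illustrative assumptions.
import os
import platform
from dataclasses import dataclass

@dataclass
class UnameInfo:
    sysname: str
    node: str
    release: str

def get_uname() -> UnameInfo:
    if hasattr(os, "uname"):  # POSIX path
        u = os.uname()
        return UnameInfo(u.sysname, u.nodename, u.release)
    u = platform.uname()  # portable fallback (e.g. Windows)
    return UnameInfo(u.system, u.node, u.release)

info = get_uname()
print(info.node)  # hostname, from either source, via one attribute name
```

Callers then use `.node` everywhere, and the AttributeError path disappears.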
104-104: Line count claim remains inaccurate. As noted in a previous review, the test suite line count is documented as "200+ lines" but tests/test_dashboard.py is approximately 148 lines. Please update for accuracy.

cortex/cli.py (1)
1526-1544: Add return type annotation to dashboard method. The method signature is missing a return type annotation, which is required per coding guidelines.

🔎 Proposed fix

- def dashboard(self):
+ def dashboard(self) -> int:

Based on coding guidelines: "Type hints required in Python code"
🧹 Nitpick comments (4)
docs/DASHBOARD_IMPLEMENTATION.md (2)
51-51: Add language identifiers to fenced code blocks. Multiple fenced code blocks throughout the document lack language specifiers, which reduces syntax highlighting and code readability. Consider adding appropriate identifiers (e.g., text, plaintext, or console) to the code blocks at lines 51, 93, 121, 129, 275, 296, 316, 338, 357, 370, and 706. Also applies to: 93-93, 121-121, 129-129, 275-275, 296-296, 316-316, 338-338, 357-357, 370-370, 706-706
731-733: Format bare URLs as Markdown links. The external documentation URLs should be formatted as proper Markdown links for better readability and accessibility.

🔎 Proposed formatting

-- **Rich Library:** https://rich.readthedocs.io/
-- **psutil:** https://psutil.readthedocs.io/
-- **NVIDIA NVML (pynvml):** https://docs.nvidia.com/cuda/nvml-api/
+- **Rich Library:** [https://rich.readthedocs.io/](https://rich.readthedocs.io/)
+- **psutil:** [https://psutil.readthedocs.io/](https://psutil.readthedocs.io/)
+- **NVIDIA NVML (pynvml):** [https://docs.nvidia.com/cuda/nvml-api/](https://docs.nvidia.com/cuda/nvml-api/)

tests/test_dashboard.py (1)
38-293: Consider expanding test coverage for interactive features. The current tests provide good coverage for initialization and basic data flow. However, as noted in past reviews, the following areas could benefit from additional tests in future iterations:
- Keyboard input handling: Tab switching, action keys (1-4), input mode (Enter/Escape/Backspace)
- Interactive workflows: Full installation flow, benchmark execution, doctor checks, operation cancellation
- UI rendering validation: Verify rendered components contain expected content (rather than just checking they're non-None)
Note: This is not blocking for the current PR, but would improve test robustness for future changes. Based on learnings: "Maintain >80% test coverage for pull requests"

cortex/dashboard.py (1)
cortex/dashboard.py (1)
928-1122: Good: Background operations properly check cancellation state. The benchmark and doctor implementations correctly check for cancellation before marking operations as COMPLETED (lines 986, 1107), ensuring that _cancel_operation() remains the single source of truth for cancelled runs.

Note on concurrency: As noted in past reviews, bench_running and doctor_running flags are accessed without locks from background threads. While Python's GIL provides basic protection for boolean flag reads/writes, this is generally acceptable for a UI dashboard use case where slight timing inconsistencies don't impact correctness. For stricter guarantees, consider using threading.Event instead of boolean flags in future refactoring.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- .gitignore
- cortex/cli.py
- cortex/config_manager.py
- cortex/dashboard.py
- cortex/hardware_detection.py
- docs/DASHBOARD_IMPLEMENTATION.md
- requirements-dev.txt
- requirements.txt
- tests/test_cli.py
- tests/test_dashboard.py
- tests/test_interpreter.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_interpreter.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- tests/test_cli.py
- tests/test_dashboard.py
- cortex/config_manager.py
- cortex/hardware_detection.py
- cortex/dashboard.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_cli.py
- tests/test_dashboard.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to tests/**/*.py : Maintain >80% test coverage for pull requests
Applied to files:
tests/test_dashboard.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
requirements.txt
🧬 Code graph analysis (4)
cortex/cli.py (1)
cortex/dashboard.py (2)
run (1402-1482), run (1514-1537)
tests/test_cli.py (2)
cortex/cli.py (3)
_print_status (79-88), _print_error (90-91), _print_success (93-94)

cortex/first_run_wizard.py (1)
_print_error(746-748)
tests/test_dashboard.py (1)
cortex/dashboard.py (22)
CommandHistory (514-568), DashboardApp (1489-1537), DashboardTab (126-129), InstallationProgress (186-203), InstallationState (132-139), ModelLister (416-507), ProcessLister (332-409), SystemMetrics (170-182), SystemMonitor (221-325), UIRenderer (575-1482), get_metrics (280-283), enable_monitoring (254-256), update_metrics (285-325), get_processes (406-409), enable (371-373), enable (432-434), update_processes (375-404), get_models (482-485), check_ollama (436-449), update_models (451-480), update_elapsed (200-203), main (1540-1543)
cortex/dashboard.py (2)
cortex/progress_indicators.py (1)
total_steps (99-100)

cortex/progress_tracker.py (1)
elapsed_time(78-83)
🪛 GitHub Actions: CI
tests/test_dashboard.py
[error] 61-61: W293 Blank line contains whitespace
🪛 GitHub Check: lint
tests/test_dashboard.py
[failure] 288-288: Ruff (W293)
tests/test_dashboard.py:288:1: W293 Blank line contains whitespace
[failure] 278-278: Ruff (W293)
tests/test_dashboard.py:278:1: W293 Blank line contains whitespace
[failure] 61-61: Ruff (W293)
tests/test_dashboard.py:61:1: W293 Blank line contains whitespace
🪛 GitHub Check: Lint
tests/test_dashboard.py
[failure] 288-288: Ruff (W293)
tests/test_dashboard.py:288:1: W293 Blank line contains whitespace
[failure] 278-278: Ruff (W293)
tests/test_dashboard.py:278:1: W293 Blank line contains whitespace
[failure] 61-61: Ruff (W293)
tests/test_dashboard.py:61:1: W293 Blank line contains whitespace
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (17)
.gitignore (1)

121-121: LGTM! The addition of .cursor/ to the IDEs and editors section is appropriate and follows standard practices for ignoring IDE-specific directories.

cortex/config_manager.py (4)

77-79: LGTM! The clarified comments accurately document that ownership APIs (getuid/getgid/chown) are POSIX-only and unavailable on Windows or restricted runtimes, improving code maintainability.

332-332: LGTM! The explicit type annotation config: dict[str, Any] improves type safety and follows the project's coding guidelines requiring type hints in Python code.

815-816: LGTM! The early fallback to _install_direct when sandbox_executor is None provides a clear execution path and maintains backward compatibility when sandboxing is unavailable.

464-466: No action needed; the behavior is sound and correctly implemented. The code intentionally treats packages without a specified version as "upgrade" requests (as documented in the line comment), which makes sense for packages already installed. The current_version is correctly propagated in the returned dictionary via {**pkg, "current_version": current_version}, and the docstring explicitly documents this behavior. Downstream processing in apply_config receives the properly formed package data.

cortex/hardware_detection.py (2)

420-444: LGTM! The new _parse_lspci_gpu_line method effectively consolidates GPU parsing logic, extracting vendor, model, and PCI ID from lspci output. The vendor-specific detection (NVIDIA, AMD/ATI, Intel) with proper flag updates is well-structured.

322-322: LGTM! The exception handling improvements throughout the file, replacing bare except with explicit Exception handling and adding corresponding logging, follow best practices and improve debuggability. Also applies to: 338-338, 345-345, 386-386, 395-396, 409-410, 456-457, 488-489, 502-503, 518-519, 548-549, 581-582, 588-589, 606-607, 626-627, 635-636, 649-650

requirements.txt (1)

27-29: LGTM! The addition of psutil>=5.0.0 and nvidia-ml-py>=12.0.0 appropriately supports the new dashboard feature's system monitoring and GPU detection capabilities.

tests/test_cli.py (1)

46-59: LGTM! Tests correctly updated to match refactored print methods. The test updates properly reflect the migration from direct stdout/stderr output to the cx_print wrapper. The assertions validate the correct mapping of status types ("thinking", "error", "success") and message formatting.

cortex/dashboard.py (7)

72-163: Excellent: Centralized constants and configuration. The constants section is well-organized with clear grouping (UI Display, Resource Thresholds, Timing, etc.) and comprehensive documentation. The centralized ACTION_MAP is a good design pattern that makes action handling maintainable and extensible. This addresses previous concerns about magic numbers scattered throughout the code.

221-326: LGTM! SystemMonitor implements robust thread-safe monitoring. The implementation demonstrates several strong patterns:
- Explicit-intent design: No automatic data collection; monitoring must be explicitly enabled
- Resource management: Proper GPU initialization/shutdown lifecycle
- Thread-safety: Correct use of locks for metric access
- Performance: Non-blocking CPU sampling after initialization
This addresses all concerns from previous reviews regarding docstrings, resource leaks, and the monitoring pattern.
332-410: LGTM! ProcessLister respects user privacy. The implementation correctly:
- Enforces explicit-intent pattern (no automatic enumeration)
- Protects privacy by collecting only PID and process name (NOT cmdline)
- Documents privacy considerations clearly in docstring
- Uses thread-safe access patterns
416-569: LGTM! ModelLister and CommandHistory follow established patterns. Both classes correctly implement:
- Explicit-intent pattern (no automatic data collection)
- Thread-safe access with locks
- Graceful handling of optional dependencies
- Clear documentation
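The explicit-intent pattern praised in these comments can be sketched independently of the dashboard code (an assumed shape, not the PR's actual classes): a collector silently drops data until the caller opts in via enable(), and all reads go through a lock.

```python
# Minimal sketch of the explicit-intent collector pattern described above.
# ExplicitCollector is an illustrative name, not a class from the PR.
import threading

class ExplicitCollector:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._enabled = False
        self._data: list[int] = []

    def enable(self) -> None:
        with self._lock:
            self._enabled = True

    def update(self, sample: int) -> None:
        with self._lock:
            if not self._enabled:
                return  # explicit-intent: ignore data until enabled
            self._data.append(sample)

    def get(self) -> list[int]:
        with self._lock:
            return list(self._data)  # copy so callers can't mutate state

c = ExplicitCollector()
c.update(1)   # dropped: not enabled yet
c.enable()
c.update(2)
print(c.get())  # [2]
```

Returning a copy from get() keeps the lock discipline internal: callers never hold a reference to shared mutable state.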
575-877: LGTM! Clean modular rendering implementation. The rendering methods are well-structured with:
- Clear separation of concerns (each render method handles one UI component)
- Consistent use of constants for thresholds and display limits
- Proper handling of placeholder states when monitoring is not yet enabled
- Thread-safe state access
1196-1483: LGTM! Robust CLI integration and terminal management. The implementation demonstrates solid engineering:
CLI Integration (lines 1204-1323):
- Correctly integrates with CortexCLI.install() using dry_run=True for safe planning
- Comprehensive error handling for API keys, missing dependencies, and CLI errors
- Captures and parses CLI output to provide user-friendly error messages
Terminal Management (lines 1402-1483):
- Cross-platform keyboard input handling (Windows vs Unix)
- Proper terminal state save/restore with atexit safety net
- Clean shutdown on KeyboardInterrupt
- Background monitor thread with proper lifecycle
1489-1547: LGTM! Clean application entry point with proper lifecycle. The DashboardApp class:
- Has comprehensive documentation (addresses past review comments)
- Properly coordinates all dashboard components
- Ensures GPU cleanup in finally block
- Returns appropriate exit codes
- Provides clear user messaging
requirements-dev.txt (1)
11-14: System monitoring dependencies are compatible with dashboard requirements. The psutil>=5.0.0 and nvidia-ml-py>=12.0.0 versions support all APIs used by the dashboard: psutil provides cpu_percent(), virtual_memory(), process_iter(), cpu_count(), cpu_freq(), disk_usage(), and the necessary exception types; nvidia-ml-py provides nvmlInit(), nvmlShutdown(), nvmlDeviceGetCount(), nvmlDeviceGetHandleByIndex(), nvmlDeviceGetUtilizationRates(), and nvmlDeviceGetMemoryInfo() for GPU monitoring.
Actionable comments posted: 0
🧹 Nitpick comments (8)
tests/test_dashboard.py (2)
225-270: Missing test coverage: Keyboard input handling and UI interactions. Past reviews noted missing tests for keyboard input handling and tab switching functionality, which are core interactive features of the dashboard. The UIRenderer tests verify that rendering methods exist but don't test the keyboard handling logic in _handle_key_press (lines 900-936 in dashboard.py). Consider adding tests for:
- Tab key switching between HOME and PROGRESS tabs
- Number keys (1-4) triggering actions via ACTION_MAP
- Input mode handling (Enter, Escape, Backspace)
- Character input validation and MAX_INPUT_LENGTH enforcement
🔎 Example test implementations
def test_tab_switching(self):
    """Tab key should cycle through dashboard tabs."""
    self.ui.current_tab = DashboardTab.HOME
    self.ui._handle_key_press("\t")
    self.assertEqual(self.ui.current_tab, DashboardTab.PROGRESS)
    self.ui._handle_key_press("\t")
    self.assertEqual(self.ui.current_tab, DashboardTab.HOME)

def test_action_key_triggering(self):
    """Number keys 1-4 should trigger corresponding actions."""
    # Press "2" should start bench (with mocking)
    with patch.object(self.ui, '_start_bench') as mock_bench:
        self.ui._handle_key_press("2")
        mock_bench.assert_called_once()

def test_input_mode_character_handling(self):
    """Input mode should accept printable characters up to MAX_INPUT_LENGTH."""
    self.ui.input_active = True
    self.ui.input_text = ""
    self.ui._handle_key_press("a")
    self.assertEqual(self.ui.input_text, "a")
    # Test max length enforcement
    self.ui.input_text = "x" * MAX_INPUT_LENGTH
    self.ui._handle_key_press("y")
    self.assertEqual(len(self.ui.input_text), MAX_INPUT_LENGTH)

Based on learnings: Maintain >80% test coverage for pull requests.
275-293: Missing integration test coverage for action workflows. Past reviews noted missing test coverage for core interactive features: installation flow, benchmark execution, doctor checks, and operation cancellation. These are key features mentioned in the PR objectives but have no test coverage.
While unit tests verify component initialization, integration tests should verify:
- Complete installation workflow (_start_installation → _submit_installation_input → _execute_cli_install)
- Benchmark execution flow (_start_bench → background thread → result display)
- Doctor checks flow (_start_doctor → checks execution → result display)
- Cancellation during operations (_cancel_operation interrupting running operations)

🔎 Example integration test structure
@patch('cortex.dashboard.CortexCLI')
def test_installation_workflow_integration(self, mock_cli_class):
    """Test complete installation workflow from input to completion."""
    mock_cli = MagicMock()
    mock_cli.install.return_value = 0
    mock_cli_class.return_value = mock_cli

    app = DashboardApp()
    app.ui._start_installation()
    self.assertEqual(app.ui.installation_progress.state, InstallationState.WAITING_INPUT)

    app.ui.input_text = "nginx"
    app.ui._submit_installation_input()

    # Wait for background thread
    time.sleep(0.5)
    self.assertEqual(app.ui.installation_progress.package, "nginx")

def test_benchmark_execution_completes(self):
    """Test benchmark runs all steps and marks completion."""
    app = DashboardApp()
    app.ui._start_bench()

    # Wait for background thread
    time.sleep(5)
    self.assertEqual(app.ui.installation_progress.state, InstallationState.COMPLETED)
    self.assertGreater(len(app.ui.doctor_results), 0)

Based on learnings: Maintain >80% test coverage for pull requests.
cortex/dashboard.py (5)
557-566: Consider logging decode errors in history file reading. While errors='ignore' makes the code robust, a past comment suggested logging when decoding errors occur to help identify potential issues. The current implementation silently skips errors which could mask corrupted or malicious data.

🔎 Enhanced error handling

 try:
-    with open(history_file, encoding="utf-8", errors="ignore") as f:
+    # Track if we encountered decode errors
+    decode_errors = False
+    with open(history_file, encoding="utf-8", errors="replace") as f:
         for line in f.readlines()[-self.max_size :]:
+            if '\ufffd' in line:  # Unicode replacement character
+                decode_errors = True
+                continue
             cmd = line.strip()
             if cmd and not cmd.startswith(":"):
                 self.history.append(cmd)
+    if decode_errors:
+        logger.debug(f"Skipped lines with encoding errors in {history_file}")
     self._loaded = True
     break
604-605: Thread safety consideration: State access patterns. While the state_lock is used in action handlers, a past comment noted potential race conditions on shared state (bench_running, doctor_running, installation_progress) accessed from both main and background threads. The current implementation uses locks in action handlers but background threads read these flags without locks.

For this UI dashboard, the current approach is likely acceptable due to Python's GIL providing basic protection. However, if state inconsistencies are observed, consider using threading.Event for cancellation signals.

🔎 More robust thread synchronization (if needed)

# Instead of boolean flags, use Events for clearer semantics
self.bench_stop_event = threading.Event()
self.doctor_stop_event = threading.Event()

# In background threads:
if self.bench_stop_event.is_set() or not self.running:
    break

# In _cancel_operation:
self.bench_stop_event.set()
self.doctor_stop_event.set()
1428-1429: Silent exception handling in terminal setup. The exception handler when setting terminal attributes has no comment or logging, which a past comment flagged. While the current approach is reasonable (not all environments support tty operations), a debug log would help troubleshooting.

 try:
     old_settings = termios.tcgetattr(sys.stdin)
     tty.setcbreak(sys.stdin.fileno())
 except Exception as e:
+    # It's safe to ignore - not all environments support tty operations
     logger.debug(f"Failed to set terminal attributes: {e}")
114-115: Clarify SIMULATION_MODE purpose and removal plan. The SIMULATION_MODE flag is set to False with a TODO comment, but the code still has both simulation and real installation paths. This creates maintenance overhead with dual codepaths.

Consider removing the simulation code entirely since the real CLI integration is now implemented (_execute_cli_install), or clearly document when simulation mode should be used (e.g., for testing without API keys).

-# Simulation Mode - Set to False when real CLI integration is ready
-# TODO: Replace simulated installation with actual CLI calls
-SIMULATION_MODE = False
+# Note: _simulate_installation() and _run_installation() are kept for testing
+# without API keys. The default path uses real CLI integration.
1213-1337: Complex installation execution merits inline documentation. The _execute_cli_install() method handles API key validation, CLI initialization, output capture, result parsing, and error message extraction. While functionally sound, the error message cleanup logic (lines 1307-1327) is complex and could benefit from inline comments explaining the regex patterns and message extraction strategy.

🔎 Enhanced inline documentation

 else:
     progress.state = InstallationState.FAILED
-    # Try to extract meaningful error from output
+    # Extract meaningful error from CLI output
+    # 1. Prefer stderr, fall back to stdout
     error_msg = stderr_output.strip() or stdout_output.strip()
-    # Remove Rich formatting characters for cleaner display
+
+    # 2. Clean Rich formatting tags like [red], [bold], etc.
     import re
-
-    clean_msg = re.sub(r"\[.*?\]", "", error_msg)  # Remove [color] tags
-    clean_msg = re.sub(r" CX.*?[│✗✓⠋]", "", clean_msg)  # Remove CX prefix
+    clean_msg = re.sub(r"\[.*?\]", "", error_msg)
+    # 3. Remove Cortex CLI spinner/status prefixes
+    clean_msg = re.sub(r" CX.*?[│✗✓⠋]", "", clean_msg)
     clean_msg = clean_msg.strip()

.github/workflows/automation.yml (1)
25-25: Update setup-python action to v6 or latest available version. The workflow file uses
setup-python@v5, but version 6.1.0 is currently available with improvements since v5.6.0 (April 2025). Consider upgrading to v6 for access to recent improvements and bug fixes. This affects all three workflow jobs using this action (lines 25, 67, 96).
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .github/workflows/automation.yml
- .github/workflows/ci.yml
- .github/workflows/cla-check.yml
- cortex/dashboard.py
- pyproject.toml
- tests/test_dashboard.py
💤 Files with no reviewable changes (1)
- .github/workflows/cla-check.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
tests/test_dashboard.pycortex/dashboard.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_dashboard.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to tests/**/*.py : Maintain >80% test coverage for pull requests
Applied to files:
tests/test_dashboard.py
🔇 Additional comments (15)
.github/workflows/ci.yml (1)
35-35: LGTM: Consistent environment exclusion. The addition of myenv to the Black exclude list aligns with similar changes in pyproject.toml and .github/workflows/automation.yml, ensuring the new environment directory is consistently excluded from formatting checks.

.github/workflows/automation.yml (5)

29-36: LGTM: Pip caching improves build performance. The addition of pip package caching will speed up subsequent workflow runs by reusing downloaded packages. The cache key strategy using requirements file hashes is appropriate.

42-42: LGTM: Dev requirements installation. Adding requirements-dev.txt installation ensures development dependencies are available for testing, which is appropriate for this PR's dashboard testing needs.

51-51: LGTM: Integration test exclusion. Excluding tests/integration is reasonable for unit test runs in CI, as integration tests typically require additional setup or external services.

82-82: LGTM: Consistent environment exclusion. The addition of myenv to the Black exclude list is consistent with changes in other workflow files and configuration.

54-54: Update codecov-action to the latest version. The codecov-action is pinned to @v4, but the latest available version is @v5.5.2. While this file was newly created rather than downgraded, using the latest version is recommended for bug fixes and improvements.

Likely an incorrect or invalid review comment.
tests/test_dashboard.py (1)
1-362: Overall test structure is solid with good coverage of core components.The test file provides comprehensive unit test coverage for the dashboard module's classes and data structures. Tests follow the unittest pattern with clear organization and docstrings.
cortex/dashboard.py (8)
73-121: LGTM: Comprehensive constants section addresses past concerns. The centralized constants section (BAR_WIDTH, MAX_PROCESS_NAME_LENGTH, CRITICAL_THRESHOLD, timing values, etc.) effectively addresses numerous past review comments about magic numbers. All hardcoded values are now named constants with clear purposes.

231-252: LGTM: Well-documented SystemMonitor class. The comprehensive docstring addresses a past comment requesting class-level documentation, clearly explaining the threading model, explicit-intent pattern, and usage example.

281-289: LGTM: GPU cleanup method added. The `shutdown_gpu()` method addresses a past comment about resource leaks from NVML initialization without cleanup. This is properly called in `DashboardApp` cleanup (line 1549).

343-359: LGTM: ProcessLister docstring added. The comprehensive docstring addresses a past comment requesting documentation, clearly explaining the explicit-intent pattern, privacy protections, and threading model.

522-534: LGTM: CommandHistory docstring added. The docstring addresses a past comment requesting documentation, explaining the explicit loading pattern and rolling buffer behavior.

1504-1515: LGTM: DashboardApp docstring added. The comprehensive docstring addresses a past comment requesting documentation for the main entry point, explaining the orchestration role and usage example.
1398-1414: Error handling improved but could be more specific. The keyboard input error handling now distinguishes between `OSError` and unexpected exceptions with appropriate log levels, addressing a past comment about overly broad exception handling. Good improvement.
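The pattern this comment endorses, quiet handling for expected terminal errors and loud logging for anything else, can be sketched as a minimal, self-contained example (the function and logger names here are illustrative, not the dashboard's actual API):

```python
import logging

logger = logging.getLogger("dashboard")


def read_key(stdin_read):
    """Read one keypress, distinguishing expected I/O errors from bugs.

    OSError covers expected terminal conditions (closed fd, interrupted
    syscall) and is logged at debug level; any other exception is treated
    as a programming error and logged with a full traceback.
    """
    try:
        return stdin_read(1)
    except OSError as exc:
        logger.debug("keyboard read failed (terminal issue): %s", exc)
        return None
    except Exception:
        logger.exception("unexpected error while reading keyboard input")
        return None
```
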
162-167: All handler methods referenced in ACTION_MAP (`_start_installation`, `_start_bench`, `_start_doctor`, `_cancel_operation`) are properly defined in the UIRenderer class.
Removed redundant import of CortexCLI before instantiation.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/printer_setup.py (1)
344-407: Default to dry-run and log configuration actions. `setup_printer` performs `lpadmin` immediately. Per guidelines, installation/configuration operations should default to dry-run and be audit-logged. Consider adding a `dry_run: bool = True` flag and logging the intended command (with sensitive values redacted) to `~/.cortex/history.db`.

✅ Suggested dry-run default
```diff
-    def setup_printer(
+    def setup_printer(
         self,
         device: PrinterDevice,
         driver: DriverInfo | None = None,
         make_default: bool = False,
+        dry_run: bool = True,
     ) -> tuple[bool, str]:
@@
-        returncode, _, stderr = self._run_command(cmd)
+        if dry_run:
+            return True, f"Dry run: would run {' '.join(cmd)}"
+
+        returncode, _, stderr = self._run_command(cmd)
```

As per coding guidelines, use dry-run by default and log operations to `~/.cortex/history.db`.

cortex/gpu_manager.py (1)
195-216: Use `nvidia-smi --query-gpu=pci.bus_id` to get the correct PCI path for power state detection. The hard-coded path `/sys/bus/pci/devices/0000:01:00.0/power/runtime_status` assumes the GPU is at PCI address `01:00.0`, which won't match most systems. This causes incorrect or missing `power_state` data on different hardware configurations. Query the actual PCI bus ID from nvidia-smi and construct the sysfs path dynamically. This is especially important for multi-GPU systems where different GPUs may have different PCI addresses.
🤖 Fix all issues with AI agents
In @.github/workflows/autofix.yml:
- Around line 9-11: The workflow's permissions block currently sets "contents:
read" which prevents the autofix action from pushing changes; update the
permissions in the permissions block by changing "contents: read" to "contents:
write" and, if the job needs to create or update PRs, add "pull-requests: write"
as well so the autofix CI can commit and push fixes from the workflow.
In @.github/workflows/automation.yml:
- Around line 43-48: The workflow currently skips integration tests via the
"--ignore=tests/integration" flag in the "Run tests" step and leaves coverage
enforcement loose via the codecov/codecov-action; fix this by adding a dedicated
job (e.g., "Run integration tests") or a scheduled/manual workflow that runs
pytest without "--ignore=tests/integration" (or runs only tests/integration),
ensure it uses the same env setup as the "Run tests" step, and update the
codecov/codecov-action configuration to enforce a minimum coverage of 80% and
set fail_ci_if_error to true (or use the action's threshold/requirement
settings) so coverage below 80% fails the workflow.
In `@cortex/cli.py`:
- Around line 2830-2837: The dashboard() method currently ignores the integer
returned by DashboardApp.run() and always returns 0 on the success path; update
cortex.cli.dashboard to capture the result of DashboardApp.run() (e.g., rc =
app.run()) and return that rc so the caller receives the actual exit code from
DashboardApp.run(); keep the existing exception handling behavior unchanged and
only replace the fixed "return 0" with returning the run() result.
- Around line 2838-2842: Update the install hint in the ImportError handler so
it points to the real pip package that provides the pynvml module: change the
message emitted in the except block that calls self._print_error and the
subsequent cx_print lines (the handler around the ImportError caught in the CLI)
to recommend "pip install psutil nvidia-ml-py" (or similar wording that mentions
nvidia-ml-py provides the pynvml module) instead of "pip install psutil pynvml"
so users are directed to the correct package.
In `@cortex/config_manager.py`:
- Around line 815-817: The current early return into _install_direct when
sandbox_executor is None must be replaced with a guarded flow: keep sandboxed
installs as the default (do not call _install_direct automatically), perform a
dry-run by default, and only allow direct installs if the user has explicitly
opted in and confirmed privilege escalation; implement this by checking a new
explicit flag/confirmation (e.g., confirm_direct_install or a prompt) before
invoking _install_direct, require use of Firejail or equivalent when
sandbox_executor is unavailable unless the user opted in, and record all install
attempts (dry-run and real) to the audit DB (~/.cortex/history.db) including
whether the action was dry-run, sandboxed, or direct and any escalation consent;
update logic around sandbox_executor and the call sites that currently return
_install_direct to enforce these checks and audit entries.
In `@cortex/dashboard.py`:
- Around line 1532-1657: The method _execute_cli_install modifies
self.installation_progress from a background thread without acquiring
self.state_lock; wrap all places that set or update progress fields (e.g.,
progress.state, progress.start_time, progress.total_steps, progress.libraries,
progress.current_step, progress.current_library, progress.success_message,
progress.error_message and calls to progress.update_elapsed()) in with
self.state_lock: blocks following the same pattern used in
_execute_confirmed_install(); also protect reads/decisions that race with those
writes (checking progress.state) by holding the lock when needed so the UI
thread cannot observe partial updates.
- Around line 1308-1422: The _execute_dry_run background method mutates
self.installation_progress fields without using the shared self.state_lock,
causing a race with the UI reader; fix by wrapping every mutation/read of the
shared installation_progress (e.g., assignments to progress.state,
progress.start_time, progress.total_steps, progress.current_step,
progress.current_library, progress.update_elapsed(), progress.libraries,
progress.error_message, and self._pending_commands) in the state lock (use with
self.state_lock: around those blocks) and keep lock-held regions as small as
possible (acquire before updating/reading progress and release before
long-running operations like cli.install); ensure exceptions still set progress
state under the lock (references: _execute_dry_run, self.installation_progress,
self.state_lock, InstallationState, self._pending_commands).
In `@cortex/gpu_manager.py`:
- Around line 273-285: The current selection assigns state.active_gpu as soon as
device.is_active or a mode-specific vendor check matches, which can pick an
active device that doesn't match state.mode; change the logic in the active GPU
selection (the loop over state.devices) to prefer vendor matches for the current
state.mode first (use GPUMode -> GPUVendor mapping: GPUMode.NVIDIA ->
GPUVendor.NVIDIA, GPUMode.INTEGRATED -> {GPUVendor.INTEL, GPUVendor.AMD}) and
only if no vendor-matching device is found fall back to selecting a device where
device.is_active; ensure you still set state.active_gpu and break when a
preferred match is found.
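A minimal sketch of the preference order the prompt describes, with hypothetical `Device` records standing in for the real detection results:

```python
from dataclasses import dataclass
from enum import Enum, auto


class GPUVendor(Enum):
    NVIDIA = auto()
    INTEL = auto()
    AMD = auto()


class GPUMode(Enum):
    NVIDIA = auto()
    INTEGRATED = auto()


# Mode -> acceptable vendors, as suggested by the review comment.
MODE_VENDORS = {
    GPUMode.NVIDIA: {GPUVendor.NVIDIA},
    GPUMode.INTEGRATED: {GPUVendor.INTEL, GPUVendor.AMD},
}


@dataclass
class Device:
    name: str
    vendor: GPUVendor
    is_active: bool


def pick_active_gpu(devices, mode):
    """Prefer a vendor match for the current mode; only then fall back
    to any device reporting itself active."""
    preferred = MODE_VENDORS.get(mode, set())
    for dev in devices:
        if dev.vendor in preferred:
            return dev
    for dev in devices:
        if dev.is_active:
            return dev
    return None
```
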
In `@cortex/health_score.py`:
- Around line 476-480: The health-check JSON entry (the entry dict built in
cortex/health_score.py) is only being saved to disk as JSON; update the code
path that creates entry (and the save_history function or its caller) to also
write an audit record into the shared history DB (~/.cortex/history.db) or route
the same payload through the shared audit logger. Concretely: after the entry
dict is created, call the existing audit logger (e.g., audit_logger.log or
whatever centralized audit function exists) with the entry, or open
~/.cortex/history.db and insert a row (timestamp, overall_score, factors JSON)
into the history table, ensuring the original JSON file write remains. Use the
symbols entry and save_history (or the shared audit function name) so this
change is applied where the entry is constructed and persisted.
In `@cortex/licensing.py`:
- Around line 286-300: When activation succeeds (the block that checks if
data.get("success") and writes LICENSE_FILE), also append an audit entry to the
~/.cortex/history.db recording the operation "activate_license" (timestamped)
with the license metadata from data (tier, expires, organization, email) and the
activation result; do not store the raw license_key—either redact it (e.g.,
"****"+last4 of license_key) or store a hash (SHA256) instead. Use the project's
history logging helper if one exists (e.g., record_history / HistoryDB API),
otherwise open ~/.cortex/history.db and append the JSON entry; ensure this audit
write happens after the LICENSE_FILE.write_text succeeds so the record reflects
the actual mutation.
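A small sketch of the redaction and hashing options the prompt mentions (the helper names and audit-entry shape are illustrative, not the project's actual API):

```python
import hashlib


def redact_license_key(license_key: str) -> str:
    """Mask all but the last four characters, e.g. for audit records."""
    return "****" + license_key[-4:]


def hash_license_key(license_key: str) -> str:
    """Stable SHA-256 fingerprint; safe to store instead of the raw key."""
    return hashlib.sha256(license_key.encode("utf-8")).hexdigest()


def build_audit_entry(data: dict, license_key: str) -> dict:
    """Audit payload for an 'activate_license' operation, without the raw key."""
    return {
        "operation": "activate_license",
        "tier": data.get("tier"),
        "organization": data.get("organization"),
        "key_redacted": redact_license_key(license_key),
        "key_sha256": hash_license_key(license_key),
    }
```

Either field alone is enough for auditing; storing both lets support correlate a user-visible suffix with a stable fingerprint.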
In `@cortex/systemd_helper.py`:
- Around line 496-499: After generating name from description (the block that
sets name = re.sub(..., description.lower())[:40] and strips dashes), add a
fallback when name is empty: if the regex/strip yields an empty string, assign a
safe default (e.g., "unnamed-service" or "service-" + uuid.uuid4().hex[:8]) to
avoid collisions, and ensure you import uuid if you use it; keep the same
sanitization (lowercase, no leading/trailing dashes) so the resulting name is
always non-empty and valid for the service file.
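A runnable sketch of the suggested fallback (the sanitization mirrors the described `re.sub(...)[:40]` pipeline; the default name format is an assumption):

```python
import re
import uuid


def service_name_from_description(description: str) -> str:
    """Sanitize a description into a systemd-style unit name, never empty."""
    name = re.sub(r"[^a-z0-9-]+", "-", description.lower())[:40].strip("-")
    if not name:
        # Fallback suggested in the review: a unique, collision-safe default.
        name = "service-" + uuid.uuid4().hex[:8]
    return name
```
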
In `@docs/DASHBOARD_IMPLEMENTATION.md`:
- Around line 731-733: Replace the bare URLs in the references section by
converting each item into a Markdown link so markdownlint passes; specifically
update the three lines referencing "Rich Library", "psutil", and "NVIDIA NVML
(nvidia-ml-py)" to use the [text](url) format (e.g., change the plain URL after
"Rich Library:" to a Markdown link, and do the same for "psutil:" and "NVIDIA
NVML (nvidia-ml-py):").
- Around line 51-73: The fenced ASCII diagram block in
DASHBOARD_IMPLEMENTATION.md is missing a language identifier; update the opening
fence for the diagram (the block containing "DashboardApp (Main Orchestrator)",
"SystemMonitor", "ProcessLister", "CommandHistory", and "UIRenderer") to include
a language tag such as text (e.g., change ``` to ```text) so markdownlint stops
flagging it—ensure any other fenced blocks in the file also have appropriate
language identifiers (bash, python, text) where applicable.
In `@LICENSE`:
- Around line 1-56: This change replaces the project license with BSL 1.1 and
needs explicit legal/maintainer sign-off and coordinated metadata updates:
obtain and record approval from legal/maintainers for the LICENSE content and
the stated Change Date (January 15, 2030), update all package metadata and
README headers that still reference Apache 2.0 to reflect BSL 1.1 (or note the
future Change License), and verify the hard-coded Change Date aligns with
release/roadmap plans before merging; also add a brief checklist or approval
comment within the PR describing who approved the license change and confirming
metadata/README updates.
♻️ Duplicate comments (3)
cortex/hardware_detection.py (1)
320-322: Normalize `_uname()` return shape to avoid AttributeError.

`platform.uname()` exposes `.node`, not `.nodename`, so `_detect_system` can fail on platforms without `os.uname()` (e.g., Windows). This was previously flagged; please normalize the return object.

🛠️ Proposed fix

```diff
+from collections import namedtuple
+
+_UnameResult = namedtuple("_UnameResult", ["nodename", "release", "machine"])
+
 def _uname(self):
     """Return uname-like info with nodename/release/machine attributes."""
     uname_fn = getattr(os, "uname", None)
     if callable(uname_fn):
         return uname_fn()
-    return platform.uname()
+    p = platform.uname()
+    return _UnameResult(nodename=p.node, release=p.release, machine=p.machine)
```

docs/DASHBOARD_IMPLEMENTATION.md (1)
93-105: Update test path and size note.

This section lists `test/test_dashboard.py` and "200+ lines"; the repo uses `tests/` and earlier review noted the line count is lower. Please align to the current file path and size.

Suggested update

```diff
-test/
+tests/
-├── test_dashboard.py  # Test suite (200+ lines)
+├── test_dashboard.py  # Test suite (~150 lines)
```

cortex/dashboard.py (1)
1659-1698: Race condition: progress state modified without lock in `_run_installation`. The simulated installation method also modifies progress state without lock protection. While this is marked as simulation/testing code, it should still follow the same thread-safety patterns for consistency.

🔒 Recommended fix

```diff
 def _run_installation(self) -> None:
     """Run simulated installation in background thread (for testing)"""
-    progress = self.installation_progress
-    package_name = progress.package
+    with self.state_lock:
+        package_name = self.installation_progress.package

-    progress.state = InstallationState.IN_PROGRESS
-    progress.start_time = time.time()
-    progress.total_steps = INSTALL_TOTAL_STEPS
-    progress.libraries = []
+    with self.state_lock:
+        self.installation_progress.state = InstallationState.IN_PROGRESS
+        self.installation_progress.start_time = time.time()
+        self.installation_progress.total_steps = INSTALL_TOTAL_STEPS
+        self.installation_progress.libraries = []
```
🧹 Nitpick comments (6)
cortex/systemd_helper.py (1)
159-168: Consider adding audit logging for systemd operations. Per coding guidelines, operations should be logged to `~/.cortex/history.db` for audit purposes. The `_run_systemctl` method executes system commands that may benefit from audit logging, especially for actions beyond read-only status checks.

cortex/licensing.py (1)
176-193: Preserve wrapped function metadata and add typing for the decorator. Without `functools.wraps`, metadata is lost, and the wrapper remains untyped. Add `ParamSpec`/`TypeVar` and `@wraps` for clarity and tooling support.

♻️ Suggested update

```diff
+from collections.abc import Callable
+from functools import wraps
+from typing import ParamSpec, TypeVar
+
+P = ParamSpec("P")
+R = TypeVar("R")
+
-def require_feature(feature_name: str):
+def require_feature(feature_name: str) -> Callable[[Callable[P, R]], Callable[P, R]]:
     """Decorator to require a feature for a function.
@@
-    def decorator(func):
-        def wrapper(*args, **kwargs):
+    def decorator(func: Callable[P, R]) -> Callable[P, R]:
+        @wraps(func)
+        def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
             if not check_feature(feature_name):
                 raise FeatureNotAvailableError(feature_name)
             return func(*args, **kwargs)
```

As per coding guidelines, add type hints for public APIs.
requirements.txt (1)
9-26: Consolidate PyYAML entries to a single constraint.

`PyYAML` appears three times (`PyYAML>=6.0.0`, `pyyaml>=6.0.0`, `PyYAML==6.0.3`). This is redundant and risks confusion in dependency tooling. Prefer a single, intentional constraint.

♻️ Suggested cleanup

```diff
-PyYAML>=6.0.0
-...
-pyyaml>=6.0.0
-PyYAML==6.0.3
+PyYAML==6.0.3
```

cortex/dashboard.py (3)
506-533: Minor: TOCTOU window in `get_available_models()`.

The method checks `_enabled` inside the lock (line 514), releases the lock, then makes the network call (line 518). Another thread could call `disable()` (if such a method existed) between the check and the network call.

Currently this is low-risk since there's no `disable()` method, but the pattern differs from `update_models()`, which keeps the lock for the entire operation. For consistency, consider either:
- Keeping the current pattern but documenting it's acceptable for this use case
- Or caching the result inside the same critical section
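A sketch of the second option, caching inside the same critical section (class and method names echo the review; the injected `fetch` callable is a stand-in for the real network call):

```python
import threading


class ModelLister:
    """Check-and-use under one lock so enable/disable cannot race the check."""

    def __init__(self, fetch):
        self._lock = threading.Lock()
        self._enabled = False
        self._models = []
        self._fetch = fetch  # injected network call (assumption for the sketch)

    def enable(self):
        with self._lock:
            self._enabled = True

    def update_models(self):
        # The enabled check and the result caching share one critical
        # section, eliminating the TOCTOU window described above. The
        # trade-off is holding the lock across the (possibly slow) fetch.
        with self._lock:
            if not self._enabled:
                return []
            self._models = list(self._fetch())
            return self._models
```
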
1391-1406: Fragile: CLI output parsing relies on hardcoded strings.

The command extraction logic depends on specific output format strings like `"Generated commands:"` (line 1394) and regex patterns for numbered commands. If the CLI output format changes, this parsing will break silently.

Consider:
- Having the CLI return structured data (e.g., a list of commands) instead of parsing text output
- Or defining these patterns as constants with clear documentation of the expected format
- Or adding validation/fallback when parsing fails
```python
# Example: Define patterns as constants for easier maintenance
CLI_COMMANDS_HEADER = "Generated commands:"
CLI_COMMAND_PATTERN = re.compile(r"\s*\d+\.\s*(.+)")
```
1310-1313: Consider using a context manager for `io.StringIO` and importing at module level.

The imports of `contextlib`, `io`, and `re` are done inside methods rather than at module level. While this works, it's unconventional and adds overhead on each call.

Per coding guidelines, imports should be at the top of the file. The `from cortex.cli import CortexCLI` import inside methods is acceptable to avoid circular imports (as noted in past reviews), but `contextlib`, `io`, and `re` are standard library modules that can safely be imported at module level.

📦 Suggested import consolidation
Add to the imports section at the top of the file:
```python
import contextlib
import io
import re
```

Then remove the inline imports from `_execute_dry_run`, `_execute_confirmed_install`, and `_execute_cli_install`.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
- .github/scripts/cla_check.py
- .github/workflows/autofix.yml
- .github/workflows/automation.yml
- .github/workflows/ci.yml
- .github/workflows/stale.yml
- .gitignore
- LICENSE
- cortex/benchmark.py
- cortex/branding.py
- cortex/cli.py
- cortex/config_manager.py
- cortex/dashboard.py
- cortex/gpu_manager.py
- cortex/hardware_detection.py
- cortex/health_score.py
- cortex/licensing.py
- cortex/output_formatter.py
- cortex/printer_setup.py
- cortex/semver_resolver.py
- cortex/stdin_handler.py
- cortex/systemd_helper.py
- cortex/update_checker.py
- cortex/updater.py
- cortex/version_manager.py
- cortex/wifi_driver.py
- docs/DASHBOARD_IMPLEMENTATION.md
- pyproject.toml
- requirements-dev.txt
- requirements.txt
- tests/test_benchmark.py
- tests/test_dashboard.py
- tests/test_gpu_manager.py
- tests/test_health_score.py
- tests/test_interpreter.py
- tests/test_licensing.py
- tests/test_printer_setup.py
- tests/test_stdin_handler.py
- tests/test_systemd_helper.py
- tests/test_update_checker.py
- tests/test_updater.py
- tests/test_wifi_driver.py
💤 Files with no reviewable changes (1)
- tests/test_interpreter.py
✅ Files skipped from review due to trivial changes (6)
- tests/test_update_checker.py
- tests/test_gpu_manager.py
- cortex/stdin_handler.py
- cortex/update_checker.py
- tests/test_health_score.py
- cortex/semver_resolver.py
🚧 Files skipped from review as they are similar to previous changes (5)
- .github/workflows/ci.yml
- .gitignore
- pyproject.toml
- tests/test_dashboard.py
- requirements-dev.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/version_manager.py, tests/test_wifi_driver.py, cortex/wifi_driver.py, tests/test_licensing.py, tests/test_systemd_helper.py, cortex/licensing.py, cortex/output_formatter.py, cortex/printer_setup.py, tests/test_benchmark.py, cortex/hardware_detection.py, cortex/updater.py, tests/test_printer_setup.py, cortex/branding.py, cortex/health_score.py, tests/test_updater.py, cortex/systemd_helper.py, cortex/benchmark.py, cortex/gpu_manager.py, cortex/config_manager.py, cortex/dashboard.py, tests/test_stdin_handler.py, cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/test_wifi_driver.py, tests/test_licensing.py, tests/test_systemd_helper.py, tests/test_benchmark.py, tests/test_printer_setup.py, tests/test_updater.py, tests/test_stdin_handler.py
🧬 Code graph analysis (13)
cortex/wifi_driver.py (3)
cortex/gpu_manager.py (1)
_run_command (128-136)
cortex/health_score.py (1)
_run_command (142-157)
cortex/printer_setup.py (1)
_run_command (103-111)
tests/test_licensing.py (1)
cortex/licensing.py (14)
FeatureNotAvailableError (333-342), FeatureTier (17-32), LicenseInfo (76-106), _get_hostname (326-330), activate_license (263-317), check_feature (150-173), get_license_info (112-142), get_license_tier (145-147), require_feature (176-194), show_license_status (223-260), show_upgrade_prompt (197-220), level (25-32), is_expired (94-98), days_remaining (101-106)
tests/test_systemd_helper.py (1)
cortex/systemd_helper.py (2)
ServiceConfig (101-120), _run_systemctl (159-168)
cortex/printer_setup.py (3)
cortex/wifi_driver.py (4)
ConnectionType (31-36), DeviceType (23-28), _detect_vendor (210-225), _run_command (193-208)
cortex/health_score.py (1)
_run_command (142-157)
cortex/cli.py (1)
printer (1834-1838)
tests/test_benchmark.py (2)
cortex/benchmark.py (3)
BenchmarkReport (54-77), BenchmarkResult (43-50), _detect_nvidia_gpu (158-169)
cortex/gpu_manager.py (1)
_detect_nvidia_gpu (195-225)
cortex/updater.py (1)
cortex/version_manager.py (3)
SemanticVersion (27-145), UpdateChannel (19-22), get_version_string (153-155)
cortex/health_score.py (3)
cortex/gpu_manager.py (1)
_run_command (128-136)
cortex/printer_setup.py (1)
_run_command (103-111)
cortex/wifi_driver.py (1)
_run_command (193-208)
tests/test_updater.py (1)
cortex/updater.py (1)
Updater (71-455)
cortex/benchmark.py (2)
cortex/branding.py (2)
cx_header (95-101), cx_print (62-82)
cortex/health_score.py (1)
overall_score (91-101)
cortex/gpu_manager.py (3)
cortex/health_score.py (1)
_run_command (142-157)
cortex/printer_setup.py (1)
_run_command (103-111)
cortex/wifi_driver.py (1)
_run_command (193-208)
cortex/dashboard.py (2)
cortex/progress_indicators.py (1)
total_steps (99-100)
cortex/progress_tracker.py (1)
elapsed_time (78-83)
tests/test_stdin_handler.py (1)
cortex/stdin_handler.py (1)
StdinData (37-50)
cortex/cli.py (1)
cortex/dashboard.py (3)
DashboardApp (1822-1868), run (1733-1814), run (1847-1868)
🪛 GitHub Actions: CI
cortex/dashboard.py
[error] 37-37: ModuleNotFoundError: No module named 'psutil'. The tests require psutil to run.
[error] 41-41: SystemExit: 1 during import/setup (caused by missing psutil).
🪛 GitHub Actions: CLA Check
.github/scripts/cla_check.py
[error] 1-1: CLA check failed: 1 contributor(s) have not signed the CLA.
🪛 GitHub Actions: Cortex Automation
cortex/dashboard.py
[error] 37-37: The 'psutil' library is required but not installed. Please install it with: pip install psutil>=5.0.0
[error] 41-41: SystemExit: 1 raised during module import due to missing dependencies (psutil).
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (73)
cortex/branding.py (1)
317-321: LGTM — formatting-only change is safe.
The multi-line `console.print` call preserves output and readability without behavior changes.

cortex/systemd_helper.py (7)
63-86: LGTM - Improved readability for failure solutions. The multi-line formatting for tuple entries in `FAILURE_SOLUTIONS` improves readability without changing behavior.
159-168: LGTM - Proper timeout and error handling for subprocess calls. The 30-second timeout prevents indefinite blocking, and exceptions are properly caught with informative error messages returned.
170-181: LGTM - Consistent timeout handling. The timeout parameter ensures `journalctl` calls don't hang indefinitely.
256-259: LGTM - Improved readability for return statement.
570-577: LGTM - Clean Panel formatting.
586-593: LGTM - Consistent formatting with other Panel calls.
607-631: LGTM - Well-documented entry point with proper type hints. The function has comprehensive docstrings and type annotations per coding guidelines.

tests/test_systemd_helper.py (7)
tests/test_systemd_helper.py (7)
7-20: LGTM - Clean imports with proper constant exposure validation. The import of `FAILURE_SOLUTIONS`, `SERVICE_STATE_EXPLANATIONS`, and `SUB_STATE_EXPLANATIONS` validates that these constants are properly exported from the module's public API.
26-50: LGTM - Good coverage for ServiceConfig dataclass. The tests verify both default values and custom configuration, ensuring the dataclass behaves as expected.
133-149: LGTM - Clean mock setup for systemctl tests. The inline `MagicMock` construction improves readability. The local `subprocess` import in `test_run_systemctl_timeout` is acceptable since it's only needed for `TimeoutExpired`.
152-196: LGTM - Thorough status parsing tests. Good coverage of edge cases including non-existent services, output parsing, and automatic `.service` suffix handling.
260-308: LGTM - Diagnosis tests cover log analysis. The tests verify that log content is properly analyzed for common issues like permission denied errors.
382-428: LGTM - Unit file generation tests are comprehensive. Tests cover description-based creation, custom names, user specification, and automatic name generation with proper sanitization.
464-506: LGTM - Entry point tests cover all action branches. The tests verify `status`, `diagnose`, `deps`, and unknown action handling with appropriate return codes.

tests/test_stdin_handler.py (4)
9-9: LGTM! Import reordering is a minor formatting improvement with no functional impact.

190-191: LGTM! String literal formatting changes are consistent with the rest of the file.
Also applies to: 262-262
340-340: LGTM! Inline `StdinData` instantiation formatting improves readability.

Also applies to: 358-358, 378-378, 398-398
477-481: Good modernization: `IOError` → `OSError`. In Python 3, `IOError` is an alias for `OSError`, so using `OSError` directly is more explicit and aligns with modern Python conventions. The test correctly verifies that read errors are handled gracefully by returning empty data.
17-23: LGTM! Good expansion of `UpdateChannel` enum. The addition of `BETA` and `DEV` channels aligns with the existing `SemanticVersion.channel` property logic (lines 137-145), which already maps prerelease strings like "beta" and "alpha"/"dev" to the appropriate channels. This enables more granular update channel selection.
24-24: LGTM! Good import cleanup. Removing `get_current_version` from imports is correct — the file uses `get_version_string()` (lines 126, 267, 423), which returns the version as a string directly and is the appropriate function for this module's needs.
15-22: LGTM! Adding `Updater` to the explicit import list is correct. The class is used throughout the test file for comprehensive testing of update operations, backup management, and progress callbacks.

.github/scripts/cla_check.py (1)
339-352: LGTM! Valid bot allowlist addition. Adding `"autofix-ci"` to the bot patterns is appropriate given the new autofix workflow introduced in this PR. This ensures automated code formatting commits from the autofix-ci bot bypass CLA verification.

Note: The pipeline failure ("CLA check failed: 1 contributor(s) have not signed the CLA") is unrelated to this code change — it indicates the PR author needs to sign the CLA, not an issue with this script.
cortex/licensing.py (3)
202-220: Upgrade prompt rendering looks good.
233-240: License status header formatting is clear.
326-330: Hostname helper change is fine.cortex/gpu_manager.py (4)
128-136: Timeout handling improves resilience.
349-353: Fallback message is clear.
444-457: Panel rendering update looks fine.
525-536: Entrypoint signature and typing look good.cortex/output_formatter.py (1)
10-17: Import cleanup looks safe.cortex/printer_setup.py (10)
9-13: Import update OK.
103-111: Timeout handling in command execution is solid.
156-164: USB device parsing/append looks fine.
185-193: Network printer detection append looks fine.
220-239: Configured-printer state mapping is clear.
265-274: Scanner detection append looks good.
414-417: Test print invocation looks OK.
467-475: Status color selection is clear.
486-490: Device icon mapping looks fine.
540-547: Setup panel rendering looks good.tests/test_printer_setup.py (2)
7-13: LGTM — test imports align with new public constants.
168-172: LGTM — clearer tuple expectation formatting..github/workflows/stale.yml (1)
1-37: LGTM — PR-only stale policy with clear exemptions.cortex/health_score.py (3)
142-157: LGTM — formatting-only adjustment.
306-307: LGTM — security package check remains consistent.
580-580: LGTM — formatting-only adjustment.cortex/config_manager.py (3)
77-80: LGTM — clarity improvement without behavioral change.
332-336: LGTM — explicit typing improves readability.
464-467: LGTM — sensible handling for versionless packages.cortex/wifi_driver.py (4)
193-208: LGTM — formatting-only adjustment.
253-255: LGTM — no behavioral change.
444-458: LGTM — clearer status formatting.
602-604: LGTM — output formatting only.tests/test_wifi_driver.py (1)
7-13: Import surface update looks fine.cortex/benchmark.py (3)
18-25: Formatting-only updates look good. Also applies to: 118-179.

225-338: Benchmark helper tweaks are non-functional and look fine. Also applies to: 340-427.

590-592: Report assembly/display formatting is fine. Also applies to: 645-667.
tests/test_benchmark.py (1)
11-19: Test import/formatting updates look good. Also applies to: 29-40, 58-59, 239-241, 287-294.
.github/workflows/automation.yml (2)
17-37: CI setup/cache adjustments look good. Also applies to: 50-55.

61-84: Lint/security workflow restructuring looks fine. Also applies to: 90-108.
tests/test_licensing.py (1)
1-431: Solid, comprehensive licensing coverage. This suite exercises tiers, caching, expiry logic, activation, and upgrade prompts end‑to‑end. Looks good.
cortex/hardware_detection.py (2)
402-458: GPU parsing refactor looks clean. Nice consolidation of per‑line `lspci` parsing into `_parse_lspci_gpu_line`, and the fallback in `_extract_gpu_model` keeps detection robust.

551-650: LGTM on the defensive guards in network/virtualization/quick paths. The additional exception guards keep partial failures from breaking overall detection.
cortex/dashboard.py (9)
73-121: LGTM - Constants are well-organized. All magic numbers have been extracted to named constants with clear, descriptive names and inline documentation. This addresses the previous feedback about magic numbers throughout the codebase.
163-168: LGTM - ACTION_MAP is properly typed and documented.The type annotation
dict[str, tuple[str, ActionType, str]]has been added as requested in previous reviews. The centralized action mapping provides a single source of truth for key bindings.
193-221: LGTM - InstallationProgress now computes estimated_remaining. The `update_elapsed()` method now properly calculates `estimated_remaining` based on per-step time, addressing the previous feedback about the unused field.
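The per-step estimate described here amounts to extrapolating the average time of completed steps over the steps left. A minimal sketch, with field names assumed from the review text rather than copied from the real class:

```python
import time
from dataclasses import dataclass, field


@dataclass
class InstallationProgress:
    """Sketch of the progress record; field names are assumptions."""

    total_steps: int
    current_step: int = 0
    start_time: float = field(default_factory=time.time)
    elapsed: float = 0.0
    estimated_remaining: float = 0.0

    def update_elapsed(self) -> None:
        # Average time per completed step, extrapolated over remaining steps.
        self.elapsed = time.time() - self.start_time
        if self.current_step > 0:
            per_step = self.elapsed / self.current_step
            self.estimated_remaining = per_step * (self.total_steps - self.current_step)
```

Halfway through, the estimate simply mirrors the elapsed time, which makes the field easy to sanity-check.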
240-345: LGTM - SystemMonitor has proper resource management. Key improvements from previous feedback:
- GPU shutdown method added for cleanup
- Non-blocking CPU calls after initialization
- Deferred GPU initialization for privacy
- Thread-safe with lock protection

352-432: LGTM - ProcessLister is privacy-conscious. The implementation explicitly avoids collecting command-line arguments for privacy, only collecting PID and process name. The explicit `enable()` pattern ensures no automatic data collection.
562-590: LGTM - History loading is now thread-safe. The `load_history()` method now properly acquires the lock before modifying `self.history`, addressing the previous race condition concern. File reading is efficiently done outside the lock.
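The read-outside-lock, swap-under-lock pattern approved here can be sketched like this (class and method names mirror the review's description; the exact cortex implementation may differ):

```python
import threading


class CommandHistory:
    """Sketch: file I/O outside the lock, shared-state swap under it."""

    def __init__(self) -> None:
        self.history: list[str] = []
        self.lock = threading.Lock()

    def load_history(self, path: str) -> None:
        # Disk reads happen outside the lock so readers never block on I/O.
        try:
            with open(path, encoding="utf-8") as f:
                lines = [ln.strip() for ln in f if ln.strip()]
        except OSError:
            lines = []
        # Only the quick reference swap is guarded.
        with self.lock:
            self.history = lines
```

Keeping only the assignment inside the critical section bounds how long any UI thread can be blocked.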
1733-1814: LGTM - Run method has proper terminal and resource management. The `run()` method correctly:
- Saves/restores terminal settings with atexit for safety
- Uses state_lock for progress updates in the monitor loop (line 1773)
- Properly cleans up in the finally block
- Handles KeyboardInterrupt gracefully

1822-1868: LGTM - DashboardApp has proper lifecycle management. The class correctly:
- Documents its purpose with example usage
- Coordinates all dashboard components
- Calls `shutdown_gpu()` in the finally block for resource cleanup
- Returns appropriate exit codes
36-41: No action needed: psutil is already in dependencies. The `psutil` library is already present in `requirements.txt` (version `>=5.9.0`), and `requirements-dev.txt` inherits all production dependencies via `-r requirements.txt`. Therefore, `psutil` is available in both production and test environments. The code's graceful error handling for the missing import is appropriate defensive programming, but the pipeline failure is not caused by a missing dependency.

Likely an incorrect or invalid review comment.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
def dashboard(self) -> int:
    """Launch the real-time system monitoring dashboard"""
    try:
        from cortex.dashboard import DashboardApp

        app = DashboardApp()
        app.run()
        return 0
Propagate DashboardApp exit code.
DashboardApp.run() returns an int; currently it’s ignored and dashboard() always returns 0 on the success path, masking non‑exception failures.
🐛 Proposed fix
-            app = DashboardApp()
-            app.run()
-            return 0
+            app = DashboardApp()
+            return app.run()

🤖 Prompt for AI Agents
In `@cortex/cli.py` around lines 2830 - 2837, The dashboard() method currently
ignores the integer returned by DashboardApp.run() and always returns 0 on the
success path; update cortex.cli.dashboard to capture the result of
DashboardApp.run() (e.g., rc = app.run()) and return that rc so the caller
receives the actual exit code from DashboardApp.run(); keep the existing
exception handling behavior unchanged and only replace the fixed "return 0" with
returning the run() result.
- Changed psutil import to set PSUTIL_AVAILABLE flag instead of exiting
- Added psutil availability checks in main() and run() methods
- Prevents pytest collection failure when psutil is not installed
- Allows tests to run and collect successfully even without psutil
- Only fails at runtime when dashboard is actually executed without psutil
|
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@cortex/config_manager.py`:
- Around line 859-933: The _log_install_audit function currently swallows all
exceptions; change the except block to log a warning with the exception details
(including the audit_db_path and package_name/version) instead of passing
silently. Use an existing logger (e.g., self.logger.warning(...)) or
logging.getLogger("cortex").warning(...) to emit a message like "Failed to write
install audit for {package_name} {version} to {audit_db_path}: {e}" and include
the exception string; do not re-raise so installs still succeed, but ensure
failures are observable in logs.
In `@cortex/dashboard.py`:
- Around line 1217-1685: Dashboard calls to CortexCLI.install (invoked from
_execute_dry_run, _execute_confirmed_install, and _execute_cli_install) run
un-sandboxed and aren't audit-logged; wrap or modify CortexCLI.install to run
installs under Firejail and record each install attempt into
~/.cortex/history.db (timestamp, user, package, commands run, dry_run/execute
flags, result code, truncated stdout/stderr, and whether Firejail was used).
Implement the sandboxing so when executing commands (real execute=True) they are
prefixed with a Firejail invocation (fall back gracefully if firejail is
unavailable), ensure CLI dry-run still returns the planned commands unmodified,
and add a small, thread-safe SQLite logger (use a single DB file
~/.cortex/history.db with a simple installs table) that is called after each
install attempt (both success and failure) from the wrapper or inside
CortexCLI.install; ensure code paths in _execute_dry_run,
_execute_confirmed_install, and _execute_cli_install use the new wrapper/updated
CortexCLI.install so all install activity is sandboxed and logged.
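The sandboxing half of that request can be sketched as a small command wrapper. Falling back to the bare command when Firejail is absent is an assumption about the desired degradation behavior, and the `--quiet` flag is one common choice, not a cortex requirement:

```python
import shutil


def with_sandbox(cmd: list[str]) -> list[str]:
    """Prefix a command with firejail when available; otherwise run it as-is.

    Hypothetical helper illustrating the graceful-fallback behavior the
    review prompt asks for; not the actual cortex wrapper.
    """
    if shutil.which("firejail"):
        return ["firejail", "--quiet", *cmd]
    return cmd
```

The wrapper is deliberately pure (it only builds the argument list), so dry-run paths can log the planned command without executing anything.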
- Around line 305-346: pyproject.toml is missing runtime deps psutil and
nvidia-ml-py which causes update_metrics to crash when psutil is unavailable;
add "psutil>=5.9.0" and "nvidia-ml-py>=12.0.0" to the main [project]
dependencies in pyproject.toml, and modify update_metrics in cortex.dashboard.py
to guard against missing imports by checking if the module objects are None or
lack expected attributes (e.g., if psutil is None or not hasattr(psutil,
"cpu_percent") then log a debug/info message and return early), and similarly
ensure pynvml usage is only attempted when pynvml is present (guard the gpu
block with a check like if not pynvml or not hasattr(pynvml,
"nvmlDeviceGetCount") skip GPU metrics and log); keep existing exception
handling but avoid AttributeError by adding these presence checks around
cpu_percent, virtual_memory, and GPU calls.
In `@cortex/gpu_manager.py`:
- Around line 273-293: The first-pass vendor-matching loop (where
state.active_gpu is set) doesn't verify the device is actually powered/active,
so a vendor match can select an inactive GPU and prevent the fallback; update
the vendor match in the GPU selection logic to require device.is_active (or
equivalent power_state check) before assigning state.active_gpu (referencing
state.devices, device.vendor, device.is_active, state.active_gpu, GPUMode and
GPUVendor) so only active devices are considered in the first pass.
In `@cortex/health_score.py`:
- Around line 493-529: Remove the redundant local "from pathlib import Path"
import and rely on the module-level Path import, and broaden the exception
handling around the audit DB block that uses
sqlite3.connect/health_checks/audit_db_path so it catches sqlite3.Error (and
subclasses) in addition to OSError (e.g., except (OSError, sqlite3.Error) as e)
to avoid crashing the health check when DB errors occur; keep the existing
swallow/quiet behavior (or log the exception) and do not let sqlite3 errors
propagate out of the audit logging code.
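The reason both exception types must be listed is that `sqlite3.Error` inherits from `Exception`, not `OSError`, so an `except OSError` clause alone lets database errors escape. A minimal sketch of the broadened guard (function and table names are illustrative, not the real health-check code):

```python
import sqlite3


def log_health_check(db_path: str, name: str, status: str) -> str:
    """Write one audit row; never let storage errors crash the health check."""
    try:
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS health_checks (name TEXT, status TEXT)"
            )
            conn.execute("INSERT INTO health_checks VALUES (?, ?)", (name, status))
        return "logged"
    except (OSError, sqlite3.Error) as exc:
        # Quiet degradation, matching the existing behavior; a warning log
        # could go here instead of silently returning.
        return f"audit skipped: {exc}"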
In `@tests/test_dashboard.py`:
- Around line 20-70: Import PSUTIL_AVAILABLE from cortex.dashboard and modify
the test_update_metrics_when_enabled method in TestSystemMonitor to skip when
PSUTIL_AVAILABLE is False; for example, use
unittest.skipUnless(PSUTIL_AVAILABLE, ...) or an early self.skipTest call at the
top of test_update_metrics_when_enabled so update_metrics() is only exercised
when psutil is available.
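The skip pattern the prompt asks for looks roughly like this; the test body is a placeholder, not the real `test_update_metrics_when_enabled`:

```python
import unittest

try:
    import psutil

    PSUTIL_AVAILABLE = True
except ImportError:
    PSUTIL_AVAILABLE = False


class TestSystemMonitor(unittest.TestCase):
    @unittest.skipUnless(PSUTIL_AVAILABLE, "psutil is not installed")
    def test_update_metrics_when_enabled(self) -> None:
        # Placeholder body: only runs when psutil can back the metrics call.
        self.assertIsInstance(psutil.cpu_percent(interval=None), float)
```

With `skipUnless`, collection succeeds everywhere and the test is reported as skipped (not failed) on machines without psutil.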
♻️ Duplicate comments (2)
cortex/config_manager.py (1)
815-837: Direct installs still bypass the new guard + consent tracking.
`_install_package` still calls `_install_direct` when `sandbox_executor` is `None`, so the env-var gate in `_install_with_sandbox` won't be hit on the main path. This allows direct installs without explicit user confirmation and logs `escalation_consent=True` unconditionally. Please route all installs through `_install_with_sandbox` (or add an explicit confirmation flag) and record actual consent before proceeding. As per coding guidelines.

Also applies to: 947-956
cortex/dashboard.py (1)
1686-1725: `_run_installation` mutates shared state without the lock. This background thread updates `InstallationProgress` fields directly, racing the UI and monitor threads. It should follow the same lock discipline as the other background operations.

🔒 Suggested fix (pattern)
- progress = self.installation_progress
- package_name = progress.package
-
- progress.state = InstallationState.IN_PROGRESS
- progress.start_time = time.time()
- progress.total_steps = INSTALL_TOTAL_STEPS
- progress.libraries = []
+ with self.state_lock:
+     progress = self.installation_progress
+     package_name = progress.package
+     progress.state = InstallationState.IN_PROGRESS
+     progress.start_time = time.time()
+     progress.total_steps = INSTALL_TOTAL_STEPS
+     progress.libraries = []
  for i, step in enumerate(install_steps, 1):
-     if (
-         self.stop_event.is_set()
-         or not self.running
-         or progress.state == InstallationState.FAILED
-     ):
-         break
-     progress.current_step = i
-     progress.current_library = step
-     progress.libraries.append(step)
-     progress.update_elapsed()
+     with self.state_lock:
+         if (
+             self.stop_event.is_set()
+             or not self.running
+             or progress.state == InstallationState.FAILED
+         ):
+             break
+         progress.current_step = i
+         progress.current_library = step
+         progress.libraries.append(step)
+         progress.update_elapsed()
      time.sleep(INSTALL_STEP_DELAY)
- if progress.state != InstallationState.FAILED:
-     progress.state = InstallationState.COMPLETED
-     ...
-     progress.current_library = ""
+ with self.state_lock:
+     if progress.state != InstallationState.FAILED:
+         progress.state = InstallationState.COMPLETED
+         ...
+         progress.current_library = ""
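The same discipline, reduced to a runnable toy: every writer thread takes the one lock before touching shared progress, so concurrent updates never lose increments. The class and names are illustrative, not cortex's real types:

```python
import threading


class SharedProgress:
    """Toy shared-state record; all mutations go through one lock."""

    def __init__(self) -> None:
        self.current_step = 0
        self.lock = threading.Lock()


def advance(progress: SharedProgress, steps: int) -> None:
    for _ in range(steps):
        with progress.lock:  # same lock for every writer thread
            progress.current_step += 1


def run_demo() -> int:
    p = SharedProgress()
    workers = [threading.Thread(target=advance, args=(p, 10_000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return p.current_step
```

Four threads each performing 10,000 locked increments always land on exactly 40,000; without the lock, lost updates become possible.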
🧹 Nitpick comments (1)
cortex/dashboard.py (1)
210-222: Add a return type for `InstallationProgress.update_elapsed`. Per repo guidelines, public APIs should carry type hints. This method is missing `-> None`.

♻️ Suggested change

- def update_elapsed(self):
+ def update_elapsed(self) -> None:

As per coding guidelines, please apply this pattern to other public methods that are still missing return annotations.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
cortex/config_manager.py
cortex/dashboard.py
cortex/gpu_manager.py
cortex/health_score.py
tests/test_dashboard.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/health_score.py
cortex/gpu_manager.py
tests/test_dashboard.py
cortex/dashboard.py
cortex/config_manager.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/test_dashboard.py
🧠 Learnings (2)
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Log all operations to ~/.cortex/history.db for audit purposes
Applied to files:
cortex/health_score.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to tests/**/*.py : Maintain test coverage above 80% for pull requests
Applied to files:
tests/test_dashboard.py
🧬 Code graph analysis (5)
cortex/health_score.py (3)
  cortex/gpu_manager.py (1)
    _run_command (128-136)
  cortex/printer_setup.py (1)
    _run_command (103-111)
  cortex/wifi_driver.py (1)
    _run_command (193-208)

cortex/gpu_manager.py (3)
  cortex/health_score.py (1)
    _run_command (142-157)
  cortex/printer_setup.py (1)
    _run_command (103-111)
  cortex/wifi_driver.py (1)
    _run_command (193-208)

tests/test_dashboard.py (1)
  cortex/dashboard.py (33)
    CommandHistory (542-602), DashboardApp (1849-1900), DashboardTab (129-133), InstallationProgress (195-221), InstallationState (136-145), ModelLister (440-534), ProcessLister (353-432), SystemMetrics (178-191), SystemMonitor (241-345), UIRenderer (610-1841), get_metrics (300-303), enable_monitoring (274-276), update_metrics (305-345), get_processes (429-432), enable (392-394), enable (456-458), update_processes (396-427), get_models (502-505), check_ollama (460-470), update_models (472-500), get_history (599-602), add_command (593-597), _render_header (669-683), _render_resources (685-703), _render_processes (705-721), _render_models (723-751), _render_history (753-765), _render_actions (767-782), _render_footer (928-933), _render_screen (935-944), _render_progress_tab (917-926), update_elapsed (210-221), main (1903-1911)

cortex/dashboard.py (5)
  cortex/progress_indicators.py (1)
    total_steps (99-100)
  cortex/progress_tracker.py (1)
    elapsed_time (78-83)
  cortex/semantic_cache.py (1)
    total (32-34)
  tests/test_stdin_handler.py (1)
    handler (61-63)
  cortex/hardware_detection.py (1)
    total_gb (88-89)

cortex/config_manager.py (3)
  cortex/logging_system.py (1)
    error (208-210)
  cortex/coordinator.py (1)
    execute (230-279)
  cortex/sandbox/sandbox_executor.py (1)
    execute (501-633)
🪛 GitHub Actions: CI
cortex/dashboard.py
[error] 345-345: Metrics error: 'NoneType' object has no attribute 'cpu_percent'. This caused TestSystemMonitor::test_update_metrics_when_enabled to fail. Command: pytest tests/ -v --tb=short
🪛 GitHub Actions: Cortex Automation
tests/test_dashboard.py
[error] 65-65: TestSystemMonitor::test_update_metrics_when_enabled failed due to Metrics error in Cortex dashboard.
cortex/dashboard.py
[error] 345-345: TestSystemMonitor.test_update_metrics_when_enabled failed. Metrics error: 'NoneType' object has no attribute 'cpu_percent'.
🔇 Additional comments (15)
cortex/health_score.py (4)
142-157: LGTM! The `_run_command` implementation is consistent with similar methods in `wifi_driver.py` and other modules. Proper type hints and comprehensive exception handling are in place.

306-306: LGTM! Formatting cleanup; correctly discards unused stdout/stderr since only the exit code is needed.

476-480: LGTM! The dict comprehension is readable and correctly constructs the factors dictionary.

618-618: LGTM! Consistent with the score color logic used elsewhere in the file (e.g., line 549).
cortex/config_manager.py (3)
77-80: POSIX guard looks good.
Skipping strict ownership enforcement when POSIX APIs are unavailable avoids runtime errors in restricted/non‑POSIX environments.
332-332: Type annotation improves clarity.
The explicit `dict[str, Any]` annotation makes the config structure easier to reason about.

464-467: Confirm intent for missing/invalid versions.

Treating a missing version as an upgrade could trigger unexpected upgrades if the config intended "keep current." Please confirm this behavior matches the import contract; if not, consider "already_installed" or an explicit flag.

cortex/gpu_manager.py (1)

128-136: Formatting-only tweaks look good. Multi-line command formatting and the Panel print change are clear and keep behavior intact.
Also applies to: 195-215, 356-361, 458-465, 533-533
cortex/dashboard.py (6)
1-170: Centralized constants and ACTION_MAP are a solid improvement. This keeps UI tuning and key bindings easy to reason about.

353-603: Consent-gated collectors with locking look solid. Process, model, and history collectors are consistently guarded and privacy-conscious.

609-1003: UI rendering flow is clean and consistent. The rendering helpers and centralized action handling are cohesive and easy to follow.

1004-1215: Bench/doctor state transitions are well-locked. Good atomic check-and-set usage and background execution structure.

1760-1841: Terminal cleanup and monitor loop handling look solid. The shutdown flow and error handling are robust.

1849-1901: DashboardApp lifecycle and GPU cleanup are well handled. Good to see the explicit NVML shutdown on exit.
tests/test_dashboard.py (1)
1-19: Test breadth looks strong for the new dashboard surface. Good coverage across collectors, rendering, and constants.
Also applies to: 71-366
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
def _log_install_audit(
    self,
    package_name: str,
    version: str | None,
    source: str,
    is_dry_run: bool,
    is_sandboxed: bool,
    is_direct: bool,
    escalation_consent: bool,
    error: str | None = None,
) -> None:
    """
    Log install attempt to audit database.

    Args:
        package_name: Package name
        version: Package version
        source: Package source
        is_dry_run: Whether this was a dry-run
        is_sandboxed: Whether sandboxed install was used
        is_direct: Whether direct install was used
        escalation_consent: Whether user consented to privilege escalation
        error: Error message if any
    """
    try:
        import sqlite3
        from datetime import datetime

        # Use ~/.cortex/history.db for audit logging
        audit_db_path = Path.home() / ".cortex" / "history.db"
        audit_db_path.parent.mkdir(parents=True, exist_ok=True)

        with sqlite3.connect(str(audit_db_path)) as conn:
            cursor = conn.cursor()

            # Create audit table if it doesn't exist
            cursor.execute(
                """
                CREATE TABLE IF NOT EXISTS install_audit (
                    timestamp TEXT NOT NULL,
                    package_name TEXT NOT NULL,
                    version TEXT,
                    source TEXT NOT NULL,
                    is_dry_run INTEGER NOT NULL,
                    is_sandboxed INTEGER NOT NULL,
                    is_direct INTEGER NOT NULL,
                    escalation_consent INTEGER NOT NULL,
                    error TEXT
                )
                """
            )

            # Insert audit record
            cursor.execute(
                """
                INSERT INTO install_audit VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    datetime.now().isoformat(),
                    package_name,
                    version,
                    source,
                    1 if is_dry_run else 0,
                    1 if is_sandboxed else 0,
                    1 if is_direct else 0,
                    1 if escalation_consent else 0,
                    error,
                ),
            )

            conn.commit()
    except Exception as e:
        # Don't fail the install if audit logging fails
        pass
|
Audit logging should not fail silently.
Swallowing exceptions can drop audit records (e.g., DB locked), undermining the “log all operations” requirement. At minimum, emit a warning when audit writes fail. As per coding guidelines.
🛠️ Suggested change
-        except Exception as e:
-            # Don't fail the install if audit logging fails
-            pass
+        except Exception as e:
+            import logging
+            logging.getLogger(__name__).warning(
+                "Install audit logging failed; operation may be untracked: %s",
+                e,
+            )
+            # Don't fail the install if audit logging fails

🤖 Prompt for AI Agents
In `@cortex/config_manager.py` around lines 859 - 933, The _log_install_audit
function currently swallows all exceptions; change the except block to log a
warning with the exception details (including the audit_db_path and
package_name/version) instead of passing silently. Use an existing logger (e.g.,
self.logger.warning(...)) or logging.getLogger("cortex").warning(...) to emit a
message like "Failed to write install audit for {package_name} {version} to
{audit_db_path}: {e}" and include the exception string; do not re-raise so
installs still succeed, but ensure failures are observable in logs.
def update_metrics(self) -> None:
    """Update all metrics. Only collects data if monitoring is enabled."""
    if not self._monitoring_enabled:
        return

    try:
        # Use non-blocking CPU calls after first initialization
        if not self._cpu_initialized:
            psutil.cpu_percent(interval=CPU_SAMPLE_INTERVAL)
            self._cpu_initialized = True
        cpu_percent = psutil.cpu_percent(interval=None)

        vm = psutil.virtual_memory()

        gpu_percent = None
        gpu_memory_percent = None

        if self.gpu_initialized:
            try:
                device_count = pynvml.nvmlDeviceGetCount()
                if device_count > 0:
                    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
                    gpu_percent = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
                    mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
                    gpu_memory_percent = (mem_info.used / mem_info.total) * 100
            except Exception as e:
                logger.debug(f"GPU metrics error: {e}")

        metrics = SystemMetrics(
            cpu_percent=cpu_percent,
            ram_percent=vm.percent,
            ram_used_gb=vm.used / BYTES_PER_GB,
            ram_total_gb=vm.total / BYTES_PER_GB,
            gpu_percent=gpu_percent,
            gpu_memory_percent=gpu_memory_percent,
        )

        with self.lock:
            self.current_metrics = metrics
    except Exception as e:
        logger.error(f"Metrics error: {e}")
|
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the cortex/dashboard.py file to understand the imports and context
head -100 cortex/dashboard.py
Repository: cortexlinux/cortex
Length of output: 3100
🏁 Script executed:
# Search for PSUTIL_AVAILABLE in the codebase
rg "PSUTIL_AVAILABLE" --type py
Repository: cortexlinux/cortex
Length of output: 260
🏁 Script executed:
# Look for how psutil is imported across the codebase
rg "import psutil" --type py -A 2 -B 2
Repository: cortexlinux/cortex
Length of output: 444
🏁 Script executed:
# Check CI configuration files
fd -e yml -e yaml -e ini | grep -E "(workflow|circleci|tox|pytest)" | head -20
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Look at the update_metrics method specifically (lines 305-346)
sed -n '305,346p' cortex/dashboard.py
Repository: cortexlinux/cortex
Length of output: 1712
🏁 Script executed:
# Look for where PSUTIL_AVAILABLE is used as a guard
rg "PSUTIL_AVAILABLE" --type py -B 2 -A 2
Repository: cortexlinux/cortex
Length of output: 1060
🏁 Script executed:
# Search for CI configuration files more broadly
find . -type f \( -name "*.yml" -o -name "*.yaml" -o -name ".github" -o -name "Makefile" \) | head -20
Repository: cortexlinux/cortex
Length of output: 554
🏁 Script executed:
# Look for pytest.ini, setup.py, pyproject.toml or requirements files that specify dependencies
fd -e txt -e toml -e cfg -e ini | grep -E "(requirements|setup|pyproject|pytest)" | head -20
Repository: cortexlinux/cortex
Length of output: 114
🏁 Script executed:
# Check requirements.txt for psutil
cat requirements.txt
Repository: cortexlinux/cortex
Length of output: 544
🏁 Script executed:
# Check CI pipeline configuration
cat .github/workflows/ci.yml
Repository: cortexlinux/cortex
Length of output: 4037
🏁 Script executed:
# Search for any tests that might invoke update_metrics
rg "update_metrics" --type py -B 2 -A 2
Repository: cortexlinux/cortex
Length of output: 2890
🏁 Script executed:
# Look for how dashboard is tested or if there are specific test fixtures
fd -e py | xargs grep -l "Dashboard\|dashboard" | grep test
Repository: cortexlinux/cortex
Length of output: 85
🏁 Script executed:
# Check pyproject.toml to see how dependencies are defined
cat pyproject.toml
Repository: cortexlinux/cortex
Length of output: 5735
🏁 Script executed:
# Look at the test file to understand how update_metrics is tested
head -100 tests/test_dashboard.py
Repository: cortexlinux/cortex
Length of output: 3189
🏁 Script executed:
# Search for any conditional imports or dynamic dependency handling
rg "PSUTIL_AVAILABLE\|psutil\s*=" --type py -B 3 -A 3
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Let's see the full test file to understand how psutil is handled
wc -l tests/test_dashboard.py && head -150 tests/test_dashboard.py | tail -80
Repository: cortexlinux/cortex
Length of output: 2962
🏁 Script executed:
# Check if psutil is mocked in tests or if there's a conftest fixture
fd -name "conftest.py" | xargs cat 2>/dev/null || echo "No conftest.py found"
Repository: cortexlinux/cortex
Length of output: 292
🏁 Script executed:
# Double-check pyproject.toml for psutil in all sections
rg "psutil" pyproject.toml
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Look at the actual error from the review - search for any CI logs or test output
rg "NoneType.*cpu_percent" --type py --type txt
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Let me check the exact dependencies section in pyproject.toml more carefully
sed -n '/dependencies = \[/,/\]/p' pyproject.toml
Repository: cortexlinux/cortex
Length of output: 570
🏁 Script executed:
# Check if there's a setup.py that might have different dependencies
ls -la setup.py 2>/dev/null || echo "No setup.py found"
Repository: cortexlinux/cortex
Length of output: 79
🏁 Script executed:
# Look for any environment files or additional config that might install psutil
find . -name "*.txt" -o -name "*.cfg" -o -name "*.ini" | xargs grep -l "psutil" 2>/dev/null
Repository: cortexlinux/cortex
Length of output: 80
🏁 Script executed:
# Let's check how the tests actually run - look at conftest or test imports
head -50 tests/test_dashboard.py | grep -A 20 "^import\|^from"
Repository: cortexlinux/cortex
Length of output: 622
🏁 Script executed:
# Look for any documentation or install instructions that explain the dependency setup
find . -name "README*" -o -name "INSTALL*" -o -name "SETUP*" | xargs grep -l "psutil\|requirement" 2>/dev/null | head -5
Repository: cortexlinux/cortex
Length of output: 97
🏁 Script executed:
# Verify that tests would actually fail if psutil is not installed
# by checking what happens in test_update_metrics_when_enabled without psutil
grep -A 10 "test_update_metrics_when_enabled" tests/test_dashboard.py
Repository: cortexlinux/cortex
Length of output: 576
🏁 Script executed:
# Check if there's any pytest fixture or mock that handles psutil
grep -r "@patch\|@mock\|psutil" tests/test_dashboard.py | head -20
Repository: cortexlinux/cortex
Length of output: 193
🏁 Script executed:
# Let's check if perhaps the install command in CI uses requirements.txt
grep -A 20 "Install dependencies" .github/workflows/ci.yml
Repository: cortexlinux/cortex
Length of output: 705
🏁 Script executed:
# Check if update_metrics is called anywhere else that might not have the guard
rg "update_metrics" --type py -B 5 -A 2 | grep -v "def update_metrics"
Repository: cortexlinux/cortex
Length of output: 3925
🏁 Script executed:
# Verify psutil is not in any optional dependencies in pyproject.toml
grep -A 30 "optional-dependencies" pyproject.toml
Repository: cortexlinux/cortex
Length of output: 626
🏁 Script executed:
# Let's trace through the actual logic: when would update_metrics be called without PSUTIL_AVAILABLE check?
grep -B 10 "update_metrics()" cortex/dashboard.py | head -30
Repository: cortexlinux/cortex
Length of output: 1245
🏁 Script executed:
# Let me check if perhaps requirements.txt is also used somewhere in CI
grep -r "requirements.txt" .github/ 2>/dev/null || echo "No requirements.txt usage in CI"
Repository: cortexlinux/cortex
Length of output: 93
🏁 Script executed:
# Check if there's an extras_require or install_requires that might include psutil indirectly
grep -r "install_requires\|extras_require" . --include="*.py" --include="*.cfg" --include="*.toml" 2>/dev/null | head -20
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Let me verify the exact issue: check what would happen if we try to import and use psutil when it's None
python3 << 'EOF'
# Simulate the import pattern from dashboard.py
try:
    import nonexistent_module as psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    PSUTIL_AVAILABLE = False
    psutil = None
# Now try to call a method on psutil
if psutil is None:
    try:
        result = psutil.cpu_percent(interval=None)
    except AttributeError as e:
        print(f"Error: {e}")
        print(f"Error type: {type(e).__name__}")
EOF
Repository: cortexlinux/cortex
Length of output: 148
🏁 Script executed:
# Look at the exact error message from the review and search for it in logs
rg "NoneType" --type py | head -10
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Let me check the git history to understand why psutil is in requirements.txt but not pyproject.toml
git log --oneline -20 pyproject.toml 2>/dev/null | head -10
Repository: cortexlinux/cortex
Length of output: 104
🏁 Script executed:
# Check if maybe dashboard features are optional and psutil should be in optional dependencies
grep -B 5 -A 5 "dashboard\|system.monitoring\|optional" pyproject.toml | head -40
Repository: cortexlinux/cortex
Length of output: 302
🏁 Script executed:
# Let me trace the actual scenario: when a user runs cortex, what happens if psutil is not installed?
# First, check the CLI entry point
grep -A 20 "def main" cortex/cli.py 2>/dev/null | head -30
Repository: cortexlinux/cortex
Length of output: 976
🏁 Script executed:
# Check if there's a way psutil could be conditionally imported or if there's documentation
rg "psutil.*optional\|optional.*psutil" --type py --type md
Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Let me check if psutil is mocked in the test through a conftest or fixture
find tests -name "*.py" -exec grep -l "mock\|fixture\|psutil" {} \;
Repository: cortexlinux/cortex
Length of output: 1253
🏁 Script executed:
# Check if the test would actually run with psutil
grep -B 20 "class TestSystemMonitor" tests/test_dashboard.py | grep -E "mock|patch|@|import"
Repository: cortexlinux/cortex
Length of output: 92
🏁 Script executed:
# Verify one more time: does the current update_metrics have ANY guard?
sed -n '305,325p' cortex/dashboard.py
Repository: cortexlinux/cortex
Length of output: 807
🏁 Script executed:
# Check the full context around line 305 to understand the try-except
sed -n '300,350p' cortex/dashboard.py
Repository: cortexlinux/cortex
Length of output: 2047
Add guard for psutil unavailability and move to pyproject.toml.
The psutil library is listed in requirements.txt but missing from pyproject.toml dependencies. CI installs via pip install -e ".[dev]" which uses only pyproject.toml, leaving psutil unavailable. The update_metrics() method then crashes with AttributeError: 'NoneType' object has no attribute 'cpu_percent'.
Two fixes needed:
- Add `psutil>=5.9.0` and `nvidia-ml-py>=12.0.0` to the main dependencies in `pyproject.toml` (these are required for the dashboard feature, not optional).
- Guard `update_metrics()` against `psutil` being `None`:
Guard for update_metrics
  def update_metrics(self) -> None:
      """Update all metrics. Only collects data if monitoring is enabled."""
      if not self._monitoring_enabled:
          return
+     if not PSUTIL_AVAILABLE or psutil is None:
+         logger.error("psutil not available; skipping metrics update")
+         return

🧰 Tools
🪛 GitHub Actions: CI
[error] 345-345: Metrics error: 'NoneType' object has no attribute 'cpu_percent'. This caused TestSystemMonitor::test_update_metrics_when_enabled to fail. Command: pytest tests/ -v --tb=short
🪛 GitHub Actions: Cortex Automation
[error] 345-345: TestSystemMonitor.test_update_metrics_when_enabled failed. Metrics error: 'NoneType' object has no attribute 'cpu_percent'.
🤖 Prompt for AI Agents
In `@cortex/dashboard.py` around lines 305 - 346, pyproject.toml is missing
runtime deps psutil and nvidia-ml-py which causes update_metrics to crash when
psutil is unavailable; add "psutil>=5.9.0" and "nvidia-ml-py>=12.0.0" to the
main [project] dependencies in pyproject.toml, and modify update_metrics in
cortex.dashboard.py to guard against missing imports by checking if the module
objects are None or lack expected attributes (e.g., if psutil is None or not
hasattr(psutil, "cpu_percent") then log a debug/info message and return early),
and similarly ensure pynvml usage is only attempted when pynvml is present
(guard the gpu block with a check like if not pynvml or not hasattr(pynvml,
"nvmlDeviceGetCount") skip GPU metrics and log); keep existing exception
handling but avoid AttributeError by adding these presence checks around
cpu_percent, virtual_memory, and GPU calls.
| def _cancel_operation(self) -> None: | ||
| """Cancel any ongoing operation""" | ||
| with self.state_lock: | ||
| # Cancel installation | ||
| if self.installation_progress.state in [ | ||
| InstallationState.IN_PROGRESS, | ||
| InstallationState.PROCESSING, | ||
| InstallationState.WAITING_INPUT, | ||
| InstallationState.WAITING_CONFIRMATION, | ||
| ]: | ||
| self.installation_progress.state = InstallationState.FAILED | ||
| self.installation_progress.error_message = "Operation cancelled by user" | ||
| self.installation_progress.current_library = "" | ||
| # Clear pending commands | ||
| if hasattr(self, "_pending_commands"): | ||
| self._pending_commands = [] | ||
|
|
||
| # Cancel bench | ||
| if self.bench_running: | ||
| self.bench_running = False | ||
| self.bench_status = "Benchmark cancelled" | ||
|
|
||
| # Cancel doctor | ||
| if self.doctor_running: | ||
| self.doctor_running = False | ||
|
|
||
| # Reset input | ||
| self.input_active = False | ||
| self.input_text = "" | ||
|
|
||
| # Signal stop to threads | ||
| self.stop_event.set() | ||
|
|
||
| self.status_message = "Operation cancelled" | ||
|
|
||
| def _start_installation(self) -> None: | ||
| """Start installation process""" | ||
| with self.state_lock: | ||
| # Atomic check-and-set: verify conditions and update state atomically | ||
| if self.installation_progress.state in [ | ||
| InstallationState.IN_PROGRESS, | ||
| InstallationState.PROCESSING, | ||
| InstallationState.WAITING_INPUT, | ||
| InstallationState.WAITING_CONFIRMATION, | ||
| ]: | ||
| return | ||
|
|
||
| # Atomically set state before releasing lock | ||
| # Reset progress state for new installation | ||
| self.installation_progress = InstallationProgress() | ||
| self.installation_progress.state = InstallationState.WAITING_INPUT | ||
|
|
||
| # User explicitly requested install - enable monitoring | ||
| self._enable_monitoring() | ||
|
|
||
| self.input_active = True | ||
| self.input_text = "" | ||
| self._pending_commands = [] # Clear any pending commands | ||
| self.current_tab = DashboardTab.PROGRESS | ||
| self.doctor_results = [] | ||
| self.stop_event.clear() | ||
|
|
||
| def _submit_installation_input(self) -> None: | ||
| """Submit installation input""" | ||
| with self.state_lock: | ||
| package = self.input_text.strip() | ||
| if not package: | ||
| return | ||
|
|
||
| self.installation_progress.package = package | ||
| self.installation_progress.state = InstallationState.PROCESSING | ||
| self.input_active = False | ||
|
|
||
| if SIMULATION_MODE: | ||
| # TODO: Replace with actual CLI integration | ||
| # This simulation will be replaced with: | ||
| # from cortex.cli import CortexCLI | ||
| # cli = CortexCLI() | ||
| # cli.install(package, dry_run=False) | ||
| self._simulate_installation() | ||
| else: | ||
| # Run dry-run first to get commands, then show confirmation | ||
| self._run_dry_run_and_confirm() | ||
|
|
||
| def _run_dry_run_and_confirm(self) -> None: | ||
| """ | ||
| Run dry-run to get commands, then show confirmation dialog. | ||
| Executes in background thread with progress feedback. | ||
| """ | ||
| self.stop_event.clear() | ||
| threading.Thread(target=self._execute_dry_run, daemon=True).start() | ||
|
|
||
| def _execute_dry_run(self) -> None: | ||
| """Execute dry-run to get commands, then show confirmation""" | ||
| import contextlib | ||
| import io | ||
|
|
||
| from cortex.cli import CortexCLI | ||
|
|
||
| with self.state_lock: | ||
| progress = self.installation_progress | ||
| package_name = progress.package | ||
| progress.state = InstallationState.IN_PROGRESS | ||
| progress.start_time = time.time() | ||
| progress.total_steps = 3 # Check, Parse, Confirm | ||
| progress.libraries = [] | ||
|
|
||
| try: | ||
| # Step 1: Check prerequisites | ||
| with self.state_lock: | ||
| progress.current_step = 1 | ||
| progress.current_library = "Checking prerequisites..." | ||
| progress.update_elapsed() | ||
|
|
||
| # Check for API key first | ||
| api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY") | ||
| if not api_key: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = ( | ||
| "No API key found!\n" | ||
| "Set ANTHROPIC_API_KEY or OPENAI_API_KEY in your environment.\n" | ||
| "Run 'cortex wizard' to configure." | ||
| ) | ||
| return | ||
|
|
||
| with self.state_lock: | ||
| if self.stop_event.is_set() or progress.state == InstallationState.FAILED: | ||
| return | ||
|
|
||
| # Step 2: Initialize CLI and get commands | ||
| with self.state_lock: | ||
| progress.current_step = 2 | ||
| progress.current_library = "Planning installation..." | ||
| progress.update_elapsed() | ||
|
|
||
| cli = CortexCLI() | ||
|
|
||
| # Capture CLI output for dry-run | ||
| with io.StringIO() as stdout_capture, io.StringIO() as stderr_capture: | ||
| try: | ||
| with ( | ||
| contextlib.redirect_stdout(stdout_capture), | ||
| contextlib.redirect_stderr(stderr_capture), | ||
| ): | ||
| result = cli.install(package_name, dry_run=True, execute=False) | ||
| except Exception as e: | ||
| result = 1 | ||
| stderr_capture.write(str(e)) | ||
|
|
||
| stdout_output = stdout_capture.getvalue() | ||
| stderr_output = stderr_capture.getvalue() | ||
|
|
||
| with self.state_lock: | ||
| if self.stop_event.is_set() or progress.state == InstallationState.FAILED: | ||
| return | ||
|
|
||
| if result != 0: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| error_msg = stderr_output.strip() or stdout_output.strip() | ||
| import re | ||
|
|
||
| clean_msg = re.sub(r"\[.*?\]", "", error_msg) | ||
| clean_msg = clean_msg.strip() | ||
| if clean_msg: | ||
| lines = clean_msg.split("\n") | ||
| first_line = lines[0].strip()[:80] | ||
| progress.error_message = ( | ||
| first_line or f"Failed to plan install for '{package_name}'" | ||
| ) | ||
| else: | ||
| progress.error_message = f"Failed to plan install for '{package_name}'" | ||
| return | ||
|
|
||
| # Step 3: Extract commands and show confirmation | ||
| with self.state_lock: | ||
| progress.current_step = 3 | ||
| progress.current_library = "Ready for confirmation..." | ||
| progress.update_elapsed() | ||
|
|
||
| # Parse commands from output | ||
| commands = [] | ||
| in_commands_section = False | ||
| for line in stdout_output.split("\n"): | ||
| if line.strip().startswith("Generated commands:"): | ||
| in_commands_section = True | ||
| continue | ||
| if in_commands_section and line.strip(): | ||
| # Commands are formatted as " 1. <command>" | ||
| import re | ||
|
|
||
| match = re.match(r"\s*\d+\.\s*(.+)", line) | ||
| if match: | ||
| commands.append(match.group(1)) | ||
| elif line.startswith("("): | ||
| # End of commands section (dry run mode message) | ||
| break | ||
|
|
||
| with self.state_lock: | ||
| self._pending_commands = commands | ||
| progress.libraries = [f"Package: {package_name}"] | ||
| if commands: | ||
| progress.libraries.append(f"Commands: {len(commands)}") | ||
|
|
||
| # Show confirmation dialog | ||
| progress.state = InstallationState.WAITING_CONFIRMATION | ||
| progress.current_library = "" | ||
|
|
||
| except ImportError as e: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = f"Missing package: {e}" | ||
| except Exception as e: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = f"Error: {str(e)[:80]}" | ||
|
|
||
| def _confirm_installation(self) -> None: | ||
| """User confirmed installation - execute with --execute flag""" | ||
| with self.state_lock: | ||
| self.installation_progress.state = InstallationState.PROCESSING | ||
| self.stop_event.clear() | ||
|
|
||
| threading.Thread(target=self._execute_confirmed_install, daemon=True).start() | ||
|
|
||
| def _execute_confirmed_install(self) -> None: | ||
| """Execute the confirmed installation with execute=True""" | ||
| import contextlib | ||
| import io | ||
|
|
||
| from cortex.cli import CortexCLI | ||
|
|
||
| # Get package name with lock | ||
| with self.state_lock: | ||
| package_name = self.installation_progress.package | ||
|
|
||
| # Initialize progress with lock | ||
| with self.state_lock: | ||
| self.installation_progress.state = InstallationState.IN_PROGRESS | ||
| self.installation_progress.start_time = time.time() | ||
| self.installation_progress.total_steps = 3 # Init, Execute, Complete | ||
| self.installation_progress.current_step = 1 | ||
| self.installation_progress.current_library = "Starting installation..." | ||
| self.installation_progress.update_elapsed() | ||
|
|
||
| try: | ||
| if self.stop_event.is_set(): | ||
| return | ||
|
|
||
| # Step 2: Execute installation | ||
| with self.state_lock: | ||
| self.installation_progress.current_step = 2 | ||
| self.installation_progress.current_library = f"Installing {package_name}..." | ||
| self.installation_progress.update_elapsed() | ||
|
|
||
| cli = CortexCLI() | ||
|
|
||
| # Capture CLI output | ||
| with io.StringIO() as stdout_capture, io.StringIO() as stderr_capture: | ||
| try: | ||
| with ( | ||
| contextlib.redirect_stdout(stdout_capture), | ||
| contextlib.redirect_stderr(stderr_capture), | ||
| ): | ||
| result = cli.install(package_name, dry_run=False, execute=True) | ||
| except Exception as e: | ||
| result = 1 | ||
| stderr_capture.write(str(e)) | ||
|
|
||
| stdout_output = stdout_capture.getvalue() | ||
| stderr_output = stderr_capture.getvalue() | ||
|
|
||
| if self.stop_event.is_set(): | ||
| return | ||
|
|
||
| # Step 3: Complete | ||
| with self.state_lock: | ||
| self.installation_progress.current_step = 3 | ||
| self.installation_progress.current_library = "Finalizing..." | ||
| self.installation_progress.update_elapsed() | ||
|
|
||
| if result == 0: | ||
| self.installation_progress.state = InstallationState.COMPLETED | ||
| self.installation_progress.success_message = ( | ||
| f"✓ Successfully installed '{package_name}'!" | ||
| ) | ||
| else: | ||
| self.installation_progress.state = InstallationState.FAILED | ||
| error_msg = stderr_output.strip() or stdout_output.strip() | ||
| import re | ||
|
|
||
| clean_msg = re.sub(r"\[.*?\]", "", error_msg) | ||
| clean_msg = clean_msg.strip() | ||
| if clean_msg: | ||
| lines = clean_msg.split("\n") | ||
| first_line = lines[0].strip()[:80] | ||
| self.installation_progress.error_message = ( | ||
| first_line or f"Failed to install '{package_name}'" | ||
| ) | ||
| else: | ||
| self.installation_progress.error_message = ( | ||
| f"Installation failed for '{package_name}'" | ||
| ) | ||
|
|
||
| except ImportError as e: | ||
| with self.state_lock: | ||
| self.installation_progress.state = InstallationState.FAILED | ||
| self.installation_progress.error_message = f"Missing package: {e}" | ||
| except Exception as e: | ||
| with self.state_lock: | ||
| self.installation_progress.state = InstallationState.FAILED | ||
| self.installation_progress.error_message = f"Error: {str(e)[:80]}" | ||
| finally: | ||
| with self.state_lock: | ||
| self.installation_progress.current_library = "" | ||
| self._pending_commands = [] | ||
|
|
||
| def _run_real_installation(self) -> None: | ||
| """ | ||
| Run real installation using Cortex CLI. | ||
| Executes in background thread with progress feedback. | ||
| """ | ||
| self.stop_event.clear() | ||
| threading.Thread(target=self._execute_cli_install, daemon=True).start() | ||
|
|
||
| def _execute_cli_install(self) -> None: | ||
| """Execute actual CLI installation in background thread""" | ||
| import contextlib | ||
| import io | ||
|
|
||
| from cortex.cli import CortexCLI | ||
|
|
||
| with self.state_lock: | ||
| progress = self.installation_progress | ||
| package_name = progress.package | ||
| progress.state = InstallationState.IN_PROGRESS | ||
| progress.start_time = time.time() | ||
| progress.total_steps = 4 # Check, Parse, Plan, Complete | ||
| progress.libraries = [] | ||
|
|
||
| try: | ||
| # Step 1: Check prerequisites | ||
| with self.state_lock: | ||
| progress.current_step = 1 | ||
| progress.current_library = "Checking prerequisites..." | ||
| progress.update_elapsed() | ||
|
|
||
| # Check for API key first | ||
| api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY") | ||
| if not api_key: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = ( | ||
| "No API key found!\n" | ||
| "Set ANTHROPIC_API_KEY or OPENAI_API_KEY in your environment.\n" | ||
| "Run 'cortex wizard' to configure." | ||
| ) | ||
| return | ||
|
|
||
| with self.state_lock: | ||
| if self.stop_event.is_set() or progress.state == InstallationState.FAILED: | ||
| return | ||
|
|
||
| # Step 2: Initialize CLI | ||
| with self.state_lock: | ||
| progress.current_step = 2 | ||
| progress.current_library = "Initializing Cortex CLI..." | ||
| progress.update_elapsed() | ||
|
|
||
| cli = CortexCLI() | ||
|
|
||
| with self.state_lock: | ||
| if self.stop_event.is_set() or progress.state == InstallationState.FAILED: | ||
| return | ||
|
|
||
| # Step 3: Run installation (capture output) | ||
| with self.state_lock: | ||
| progress.current_step = 3 | ||
| progress.current_library = f"Planning install for: {package_name}" | ||
| progress.libraries.append(f"Package: {package_name}") | ||
| progress.update_elapsed() | ||
|
|
||
| # Capture CLI output | ||
| with io.StringIO() as stdout_capture, io.StringIO() as stderr_capture: | ||
| try: | ||
| with ( | ||
| contextlib.redirect_stdout(stdout_capture), | ||
| contextlib.redirect_stderr(stderr_capture), | ||
| ): | ||
| result = cli.install(package_name, dry_run=True, execute=False) | ||
| except Exception as e: | ||
| result = 1 | ||
| stderr_capture.write(str(e)) | ||
|
|
||
| stdout_output = stdout_capture.getvalue() | ||
| stderr_output = stderr_capture.getvalue() | ||
|
|
||
| with self.state_lock: | ||
| if self.stop_event.is_set() or progress.state == InstallationState.FAILED: | ||
| return | ||
|
|
||
| # Step 4: Complete | ||
| with self.state_lock: | ||
| progress.current_step = 4 | ||
| progress.current_library = "Finalizing..." | ||
| progress.update_elapsed() | ||
|
|
||
| if result == 0: | ||
| progress.state = InstallationState.COMPLETED | ||
| # Extract generated commands if available | ||
| commands_header = "Generated commands:" | ||
| has_commands_header = any( | ||
| line.strip().startswith(commands_header) | ||
| for line in stdout_output.splitlines() | ||
| ) | ||
| if has_commands_header: | ||
| progress.success_message = ( | ||
| f"✓ Plan ready for '{package_name}'!\n" | ||
| "Run in terminal: cortex install " + package_name + " --execute" | ||
| ) | ||
| else: | ||
| progress.success_message = ( | ||
| f"Dry-run complete for '{package_name}'!\n" | ||
| "Run 'cortex install <pkg> --execute' in terminal to apply." | ||
| ) | ||
| else: | ||
| progress.state = InstallationState.FAILED | ||
| # Try to extract meaningful error from output | ||
| error_msg = stderr_output.strip() or stdout_output.strip() | ||
| # Remove Rich formatting characters for cleaner display | ||
| import re | ||
|
|
||
| clean_msg = re.sub(r"\[.*?\]", "", error_msg) # Remove [color] tags | ||
| clean_msg = re.sub(r" CX.*?[│✗✓⠋]", "", clean_msg) # Remove CX prefix | ||
| clean_msg = clean_msg.strip() | ||
|
|
||
| if "doesn't look valid" in clean_msg or "wizard" in clean_msg.lower(): | ||
| progress.error_message = ( | ||
| "API key invalid. Run 'cortex wizard' to configure." | ||
| ) | ||
| elif "not installed" in clean_msg.lower() and "openai" in clean_msg.lower(): | ||
| progress.error_message = "OpenAI not installed. Run: pip install openai" | ||
| elif "not installed" in clean_msg.lower() and "anthropic" in clean_msg.lower(): | ||
| progress.error_message = ( | ||
| "Anthropic not installed. Run: pip install anthropic" | ||
| ) | ||
| elif "API key" in error_msg or "api_key" in error_msg.lower(): | ||
| progress.error_message = "API key not configured. Run 'cortex wizard'" | ||
| elif clean_msg: | ||
| # Show cleaned error, truncated | ||
| lines = clean_msg.split("\n") | ||
| first_line = lines[0].strip()[:80] | ||
| progress.error_message = first_line or f"Failed to install '{package_name}'" | ||
| else: | ||
| progress.error_message = f"Failed to plan install for '{package_name}'" | ||
|
|
||
| except ImportError as e: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = f"Missing package: {e}" | ||
| except Exception as e: | ||
| with self.state_lock: | ||
| progress.state = InstallationState.FAILED | ||
| progress.error_message = f"Error: {str(e)[:80]}" | ||
| finally: | ||
| with self.state_lock: | ||
| progress.current_library = "" | ||
|
|
Implement Firejail sandboxing and audit logging for installation operations.
Dashboard installs invoke CortexCLI.install() without Firejail sandboxing or audit logging. Searches of the codebase reveal no implementation of either required feature:
- No Firejail usage found anywhere in the repository
- No history.db audit logging mechanism implemented
- Installation calls proceed directly without security isolation
Per coding guidelines, add sandboxing via Firejail for untrusted code execution and implement audit logging to ~/.cortex/history.db. This must be implemented in the CortexCLI.install() method or as a wrapper around its invocation in dashboard.py.
🤖 Prompt for AI Agents
In `@cortex/dashboard.py` around lines 1217 - 1685, Dashboard calls to
CortexCLI.install (invoked from _execute_dry_run, _execute_confirmed_install,
and _execute_cli_install) run un-sandboxed and aren't audit-logged; wrap or
modify CortexCLI.install to run installs under Firejail and record each install
attempt into ~/.cortex/history.db (timestamp, user, package, commands run,
dry_run/execute flags, result code, truncated stdout/stderr, and whether
Firejail was used). Implement the sandboxing so when executing commands (real
execute=True) they are prefixed with a Firejail invocation (fall back gracefully
if firejail is unavailable), ensure CLI dry-run still returns the planned
commands unmodified, and add a small, thread-safe SQLite logger (use a single DB
file ~/.cortex/history.db with a simple installs table) that is called after
each install attempt (both success and failure) from the wrapper or inside
CortexCLI.install; ensure code paths in _execute_dry_run,
_execute_confirmed_install, and _execute_cli_install use the new wrapper/updated
CortexCLI.install so all install activity is sandboxed and logged.
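The prompt above can be sketched roughly as follows. Everything here is an assumption: `run_sandboxed`, `log_install_attempt`, and the `installs` schema are illustrative names, not existing Cortex APIs; the real integration point would be `CortexCLI.install()` or its dashboard call sites.

```python
# Hedged sketch: prefix commands with firejail when available, and append
# every install attempt to a small thread-safe SQLite audit table.
import json
import shutil
import sqlite3
import subprocess
import threading
import time
from pathlib import Path

_db_lock = threading.Lock()


def _audit_db(path=None):
    """Return the audit DB path, creating the table on first use."""
    db = path or (Path.home() / ".cortex" / "history.db")
    db.parent.mkdir(parents=True, exist_ok=True)
    with sqlite3.connect(db) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS installs (
                   ts REAL, package TEXT, commands TEXT,
                   execute INTEGER, result INTEGER, sandboxed INTEGER)"""
        )
    return db


def log_install_attempt(package, commands, execute, result, sandboxed, db_path=None):
    """Thread-safe append of one install attempt (success or failure)."""
    db = _audit_db(db_path)
    with _db_lock, sqlite3.connect(db) as conn:
        conn.execute(
            "INSERT INTO installs VALUES (?, ?, ?, ?, ?, ?)",
            (time.time(), package, json.dumps(commands), int(execute), result, int(sandboxed)),
        )


def run_sandboxed(command):
    """Run a command under firejail if present; fall back gracefully if not."""
    firejail = shutil.which("firejail")
    argv = ([firejail, "--quiet"] + command) if firejail else command
    proc = subprocess.run(argv, capture_output=True, text=True)
    return proc.returncode, firejail is not None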
| # Find active GPU - prefer vendor match for current mode first | ||
| # Map modes to preferred vendors | ||
| mode_vendor_map = { | ||
| GPUMode.NVIDIA: {GPUVendor.NVIDIA}, | ||
| GPUMode.INTEGRATED: {GPUVendor.INTEL, GPUVendor.AMD}, | ||
| } | ||
|
|
||
| preferred_vendors = mode_vendor_map.get(state.mode, set()) | ||
|
|
||
| # First pass: find vendor-matching device | ||
| for device in state.devices: | ||
| if device.is_active or (state.mode == GPUMode.NVIDIA and device.vendor == GPUVendor.NVIDIA): | ||
| state.active_gpu = device | ||
| break | ||
| elif state.mode == GPUMode.INTEGRATED and device.vendor in [GPUVendor.INTEL, GPUVendor.AMD]: | ||
| if device.vendor in preferred_vendors: | ||
| state.active_gpu = device | ||
| break | ||
|
|
||
| # Second pass: if no vendor match, fall back to any active device | ||
| if state.active_gpu is None: | ||
| for device in state.devices: | ||
| if device.is_active: | ||
| state.active_gpu = device | ||
| break |
Active GPU selection can pick an inactive device.
Line 283 picks a vendor-matching GPU without checking device.is_active, so a powered‑off GPU can be marked active and the fallback pass never runs. Consider restricting the first pass to active devices (or checking power_state) so the “active” GPU is truly active.
🐛 Suggested fix
- for device in state.devices:
- if device.vendor in preferred_vendors:
+ for device in state.devices:
+ if device.vendor in preferred_vendors and device.is_active:
state.active_gpu = device
break

🤖 Prompt for AI Agents
In `@cortex/gpu_manager.py` around lines 273 - 293, The first-pass vendor-matching
loop (where state.active_gpu is set) doesn't verify the device is actually
powered/active, so a vendor match can select an inactive GPU and prevent the
fallback; update the vendor match in the GPU selection logic to require
device.is_active (or equivalent power_state check) before assigning
state.active_gpu (referencing state.devices, device.vendor, device.is_active,
state.active_gpu, GPUMode and GPUVendor) so only active devices are considered
in the first pass.
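The suggested fix boils down to an ordering rule that is easy to isolate: a first pass over vendor-preferred devices that are also active, then a fallback over any active device. A minimal sketch with a stand-in `Device` dataclass (the real code uses `GPUDevice`/`GPUVendor` objects):

```python
# Illustrative selection order: vendor match AND active first, then any
# active device. Field names mirror those used in the review.
from dataclasses import dataclass


@dataclass
class Device:
    vendor: str
    is_active: bool


def pick_active_gpu(devices, preferred_vendors):
    # First pass: only devices that are both vendor-preferred and active.
    for device in devices:
        if device.vendor in preferred_vendors and device.is_active:
            return device
    # Second pass: fall back to any active device.
    for device in devices:
        if device.is_active:
            return device
    return None
```

This guarantees a powered-off vendor match can never shadow an active device from another vendor.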
| # Also write to audit database | ||
| try: | ||
| import sqlite3 | ||
| from pathlib import Path | ||
|
|
||
| audit_db_path = Path.home() / ".cortex" / "history.db" | ||
| audit_db_path.parent.mkdir(parents=True, exist_ok=True) | ||
|
|
||
| with sqlite3.connect(str(audit_db_path)) as conn: | ||
| cursor = conn.cursor() | ||
|
|
||
| # Create health_checks table if it doesn't exist | ||
| cursor.execute( | ||
| """ | ||
| CREATE TABLE IF NOT EXISTS health_checks ( | ||
| timestamp TEXT NOT NULL, | ||
| overall_score INTEGER NOT NULL, | ||
| factors TEXT NOT NULL | ||
| ) | ||
| """ | ||
| ) | ||
|
|
||
| # Insert health check record | ||
| cursor.execute( | ||
| """ | ||
| INSERT INTO health_checks VALUES (?, ?, ?) | ||
| """, | ||
| ( | ||
| entry["timestamp"], | ||
| entry["overall_score"], | ||
| json.dumps(entry["factors"]), | ||
| ), | ||
| ) | ||
|
|
||
| conn.commit() | ||
| except OSError: | ||
| pass |
Fix exception handling and remove redundant import.
Two issues in this audit logging block:
- Redundant import (Line 496): `Path` is already imported at module level (line 14).
- Narrow exception handling (Line 528): only `OSError` is caught, but `sqlite3` can raise `sqlite3.Error` and its subclasses (`OperationalError`, `IntegrityError`, `DatabaseError`). An unhandled exception here would cause the health check to fail even though the primary JSON history was saved successfully.
🐛 Proposed fix
# Also write to audit database
try:
import sqlite3
- from pathlib import Path
audit_db_path = Path.home() / ".cortex" / "history.db"
audit_db_path.parent.mkdir(parents=True, exist_ok=True)
with sqlite3.connect(str(audit_db_path)) as conn:
cursor = conn.cursor()
# Create health_checks table if it doesn't exist
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS health_checks (
timestamp TEXT NOT NULL,
overall_score INTEGER NOT NULL,
factors TEXT NOT NULL
)
"""
)
# Insert health check record
cursor.execute(
"""
INSERT INTO health_checks VALUES (?, ?, ?)
""",
(
entry["timestamp"],
entry["overall_score"],
json.dumps(entry["factors"]),
),
)
conn.commit()
- except OSError:
+ except (OSError, sqlite3.Error):
pass

🤖 Prompt for AI Agents
In `@cortex/health_score.py` around lines 493 - 529, Remove the redundant local
"from pathlib import Path" import and rely on the module-level Path import, and
broaden the exception handling around the audit DB block that uses
sqlite3.connect/health_checks/audit_db_path so it catches sqlite3.Error (and
subclasses) in addition to OSError (e.g., except (OSError, sqlite3.Error) as e)
to avoid crashing the health check when DB errors occur; keep the existing
swallow/quiet behavior (or log the exception) and do not let sqlite3 errors
propagate out of the audit logging code.
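A minimal sketch of the corrected error handling, with a hypothetical `write_health_check` helper standing in for the audit block; the table definition matches the snippet above, and the broadened `except (OSError, sqlite3.Error)` keeps the write best-effort:

```python
# Best-effort audit write: DB failures are swallowed, mirroring the
# review's intent that a failed audit never sinks the health check.
import json
import sqlite3
from pathlib import Path


def write_health_check(entry: dict, db_path: Path) -> bool:
    """Return True if the audit row was written, False on any DB/OS error."""
    try:
        db_path.parent.mkdir(parents=True, exist_ok=True)
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                """CREATE TABLE IF NOT EXISTS health_checks (
                       timestamp TEXT NOT NULL,
                       overall_score INTEGER NOT NULL,
                       factors TEXT NOT NULL)"""
            )
            conn.execute(
                "INSERT INTO health_checks VALUES (?, ?, ?)",
                (entry["timestamp"], entry["overall_score"], json.dumps(entry["factors"])),
            )
        return True
    except (OSError, sqlite3.Error):
        # Audit logging is best-effort; the JSON history was already saved.
        return False
```

Returning a boolean (or logging the exception) is slightly friendlier than a bare `pass`, since callers can surface a diagnostic without changing the swallow behavior.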
| from cortex.dashboard import ( | ||
| ACTION_MAP, | ||
| BAR_WIDTH, | ||
| BYTES_PER_GB, | ||
| CRITICAL_THRESHOLD, | ||
| CommandHistory, | ||
| DashboardApp, | ||
| DashboardTab, | ||
| InstallationProgress, | ||
| InstallationState, | ||
| ModelLister, | ||
| ProcessLister, | ||
| SystemMetrics, | ||
| SystemMonitor, | ||
| UIRenderer, | ||
| ) | ||
|
|
||
|
|
||
| class TestSystemMonitor(unittest.TestCase): | ||
| """Test SystemMonitor class with explicit-intent pattern.""" | ||
|
|
||
| def test_init_no_auto_collection(self): | ||
| """Metrics should be zero before enabling - no auto-collection.""" | ||
| monitor = SystemMonitor() | ||
| metrics = monitor.get_metrics() | ||
| self.assertEqual(metrics.cpu_percent, 0.0) | ||
| self.assertEqual(metrics.ram_percent, 0.0) | ||
| self.assertFalse(monitor._monitoring_enabled) | ||
|
|
||
| def test_enable_monitoring(self): | ||
| """Enabling monitoring should set the flag.""" | ||
| monitor = SystemMonitor() | ||
| monitor.enable_monitoring() | ||
| self.assertTrue(monitor._monitoring_enabled) | ||
|
|
||
| def test_update_metrics_when_enabled(self): | ||
| """Metrics should be populated after enabling and updating with actual system values.""" | ||
| monitor = SystemMonitor() | ||
| monitor.enable_monitoring() | ||
| monitor.update_metrics() | ||
| metrics = monitor.get_metrics() | ||
|
|
||
| # Verify metrics are populated with real system values | ||
| self.assertGreaterEqual(metrics.cpu_percent, 0) | ||
| self.assertGreaterEqual(metrics.ram_percent, 0) | ||
| self.assertGreater(metrics.ram_used_gb, 0) | ||
| self.assertGreater(metrics.ram_total_gb, 0) | ||
|
|
||
| # Verify RAM values are reasonable | ||
| self.assertLess(metrics.ram_used_gb, metrics.ram_total_gb) | ||
|
|
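Until the dependency question is settled, the failing test could also be guarded with a skip when `psutil` is absent. A hedged sketch (in the real test module, `PSUTIL_AVAILABLE` would come from `cortex.dashboard` rather than `find_spec`):

```python
# Skip-guard pattern for psutil-dependent tests; the test body here is a
# stand-in, not the actual TestSystemMonitor case.
import importlib.util
import unittest

PSUTIL_AVAILABLE = importlib.util.find_spec("psutil") is not None


class TestSystemMonitorMetrics(unittest.TestCase):
    @unittest.skipUnless(PSUTIL_AVAILABLE, "psutil not installed")
    def test_update_metrics_when_enabled(self):
        import psutil

        self.assertGreaterEqual(psutil.cpu_percent(interval=None), 0)
```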
🧩 Analysis chain
🏁 Script executed:
# Find the dashboard module and check if PSUTIL_AVAILABLE is exported
find . -type f -name "dashboard.py" | head -5

Repository: cortexlinux/cortex
Length of output: 83
🏁 Script executed:
# Search for PSUTIL_AVAILABLE in the codebase
rg "PSUTIL_AVAILABLE" --type py

Repository: cortexlinux/cortex
Length of output: 260
🏁 Script executed:
# Check the SystemMonitor implementation in dashboard
rg "class SystemMonitor" -A 30 --type py

Repository: cortexlinux/cortex
Length of output: 1956
🏁 Script executed:
# Check if PSUTIL_AVAILABLE is in __all__ or can be imported
rg "^__all__" -A 20 cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Check the update_metrics implementation
rg "def update_metrics" -A 20 cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 807
🏁 Script executed:
# Check the current imports in test_dashboard.py
head -30 tests/test_dashboard.py

Repository: cortexlinux/cortex
Length of output: 691
🏁 Script executed:
# Search for CI configuration and any test failures related to psutil
find . -type f \( -name "*.yml" -o -name "*.yaml" -o -name ".github" \) | head -10

Repository: cortexlinux/cortex
Length of output: 395
🏁 Script executed:
# Check what's exported from cortex/dashboard.py - look at the entire __all__ or top-level exports
head -100 cortex/dashboard.py | grep -E "^(import|from|__all__|PSUTIL_AVAILABLE)" -A 2

Repository: cortexlinux/cortex
Length of output: 285
🏁 Script executed:
# Check if PSUTIL_AVAILABLE can be imported by checking the exact context where it's defined
rg "PSUTIL_AVAILABLE\s*=" -B 5 -A 5 cortex/dashboard.py | head -20

Repository: cortexlinux/cortex
Length of output: 282
🏁 Script executed:
# Check the complete test file to see test structure
wc -l tests/test_dashboard.py

Repository: cortexlinux/cortex
Length of output: 89
🏁 Script executed:
# Look for actual CI failures or test skip patterns
find . -type f -name "*.md" | xargs grep -l "psutil\|PSUTIL" 2>/dev/null | head -5

Repository: cortexlinux/cortex
Length of output: 96
🏁 Script executed:
# Check if PSUTIL_AVAILABLE is in __all__ or just defined at module level
grep -n "__all__" cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# See the complete context of PSUTIL_AVAILABLE definition
rg "PSUTIL_AVAILABLE" -B 10 -A 2 cortex/dashboard.py | head -30

Repository: cortexlinux/cortex
Length of output: 936
🏁 Script executed:
# Check the current test around line 20-70 to see the exact code
sed -n '20,70p' tests/test_dashboard.py

Repository: cortexlinux/cortex
Length of output: 1720
🏁 Script executed:
# Search for any issues or PRs mentioning psutil failures
find . -type f -name "*.md" -o -name "*.txt" | xargs grep -l "psutil\|test.*fail" 2>/dev/null | head -5

Repository: cortexlinux/cortex
Length of output: 214
🏁 Script executed:
# Check if PSUTIL_AVAILABLE is actually accessible/importable from cortex.dashboard
python3 << 'EOF'
import sys
import os
sys.path.insert(0, '.')
try:
from cortex.dashboard import PSUTIL_AVAILABLE
print(f"PSUTIL_AVAILABLE can be imported: {PSUTIL_AVAILABLE}")
except ImportError as e:
print(f"Cannot import PSUTIL_AVAILABLE: {e}")
EOF

Repository: cortexlinux/cortex
Length of output: 148
🏁 Script executed:
# Check if there are other tests that skip when psutil is unavailable
rg "skipTest\|skip\|psutil" tests/test_dashboard.py

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Check the update_metrics method more completely to understand the psutil dependency
rg "def update_metrics" -A 40 cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 1711
🏁 Script executed:
# Check test coverage requirements and current test count
grep -r "coverage\|pytest" pyproject.toml setup.py setup.cfg 2>/dev/null | head -10

Repository: cortexlinux/cortex
Length of output: 382
🏁 Script executed:
# Verify PSUTIL_AVAILABLE is at module level and can be imported
sed -n '1,100p' cortex/dashboard.py | grep -n "PSUTIL_AVAILABLE"

Repository: cortexlinux/cortex
Length of output: 124
🏁 Script executed:
# Check if there's an __all__ definition that would restrict exports
rg "^__all__\s*=" cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Check the entire test file to see if there are any other conditional skips or psutil handling
grep -n "skipTest\|skip\|unittest.skip\|psutil\|PSUTIL" tests/test_dashboard.py

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
# Check if PSUTIL_AVAILABLE is a top-level export by looking at dashboard module structure
head -50 cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 1341
🏁 Script executed:
# Verify the exact lines around PSUTIL_AVAILABLE to confirm it's at module level
sed -n '35,45p' cortex/dashboard.py

Repository: cortexlinux/cortex
Length of output: 248
🏁 Script executed:
# Check the full test file structure to understand test count and coverage needs
grep -n "def test_" tests/test_dashboard.py | wc -l

Repository: cortexlinux/cortex
Length of output: 64
🏁 Script executed:
# Look for pytest marks or decorators that might handle skip conditions
grep -n "@\|def test_" tests/test_dashboard.py | head -20

Repository: cortexlinux/cortex
Length of output: 1042
🏁 Script executed:
# Verify that PSUTIL_AVAILABLE is importable from cortex.dashboard by checking module structure
python3 << 'EOF'
import sys
sys.path.insert(0, '.')
# Test if we can import the constant
try:
import cortex.dashboard as db
print(f"Module attributes containing PSUTIL: {[x for x in dir(db) if 'PSUTIL' in x]}")
print(f"PSUTIL_AVAILABLE value: {getattr(db, 'PSUTIL_AVAILABLE', 'NOT FOUND')}")
except Exception as e:
print(f"Error: {e}")
EOF
Length of output: 149
🏁 Script executed:
# Check what happens in update_metrics when psutil is None - look for the actual code
sed -n '55,100p' cortex/dashboard.py | head -50
Length of output: 1765
🏁 Script executed:
# Verify if there's any protection against None psutil in update_metrics
rg "if.*psutil\|if.*PSUTIL_AVAILABLE" -B 2 -A 2 cortex/dashboard.py | head -30
Length of output: 44
🏁 Script executed:
# Check if the proposed diff is syntactically correct by checking the imports section fully
head -20 tests/test_dashboard.py
Length of output: 496
🏁 Script executed:
# Verify if SystemMonitor.enable_monitoring and update_metrics call psutil safely
rg "def enable_monitoring|def __init__" -A 10 cortex/dashboard.py | grep -A 10 "class SystemMonitor" | head -20
Length of output: 44
🏁 Script executed:
# Check if there are CI/test runs that document psutil failures
find . -type f \( -name "*.log" -o -name "*.txt" \) -path "*test*" -o -path "*ci*" 2>/dev/null | head -5
Length of output: 88
🏁 Script executed:
# Verify the test coverage configuration to understand the 80% requirement
grep -A 10 "coverage.report" pyproject.toml
Length of output: 315
🏁 Script executed:
# Check the complete update_metrics implementation to see if it checks PSUTIL_AVAILABLE
rg "def update_metrics" -A 50 cortex/dashboard.py | head -60
Length of output: 1977
🏁 Script executed:
# Check if there's a check for PSUTIL_AVAILABLE at the start of update_metrics
sed -n '350,450p' cortex/dashboard.py 2>/dev/null | grep -n "def update_metrics\|PSUTIL_AVAILABLE\|psutil\."
Length of output: 197
🏁 Script executed:
# Look at the actual CI workflow to understand test environment
head -50 .github/workflows/ci.yml
Length of output: 1118
🏁 Script executed:
# Final verification - check the exact implementation of update_metrics to see if PSUTIL_AVAILABLE is checked
rg "def update_metrics" -B 5 -A 15 cortex/dashboard.py | head -25
Length of output: 801
🏁 Script executed:
# Verify if psutil is guarded anywhere before use in update_metrics
rg "if not PSUTIL_AVAILABLE\|if PSUTIL_AVAILABLE" cortex/dashboard.py
Length of output: 44
🏁 Script executed:
# Check the test coverage report configuration in detail
cat pyproject.toml | grep -A 20 "coverage"
Length of output: 596
🏁 Script executed:
# Verify test count and ensure 80% coverage is reasonable
wc -l tests/test_dashboard.py
Length of output: 89
🏁 Script executed:
# Final check - verify that the test actually would fail without the skip by looking at what happens when psutil is None
python3 << 'EOF'
# Simulate what happens if update_metrics is called without psutil
psutil = None
try:
# This is what update_metrics tries to do:
result = psutil.cpu_percent(interval=None)
except AttributeError as e:
print(f"Error that would occur: {e}")
print("Test would fail with AttributeError: 'NoneType' object has no attribute 'cpu_percent'")
EOF
Length of output: 220
Add conditional skip for test_update_metrics_when_enabled when psutil is unavailable.
The test calls update_metrics(), which directly uses psutil without checking PSUTIL_AVAILABLE. When psutil is not installed, this causes AttributeError: 'NoneType' object has no attribute 'cpu_percent'. Import PSUTIL_AVAILABLE from cortex.dashboard and skip the test when it's unavailable.
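A complementary fix is to have update_metrics itself honor the flag, so callers degrade gracefully instead of raising. This is a minimal sketch, assuming SystemMonitor keeps its metrics as plain attributes; the attribute names here are illustrative, not necessarily the module's actual ones:

```python
# Sketch of a guarded SystemMonitor.update_metrics (attribute names are
# hypothetical). The module-level try/except mirrors the PSUTIL_AVAILABLE
# pattern the review found in cortex/dashboard.py.
try:
    import psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    psutil = None
    PSUTIL_AVAILABLE = False


class SystemMonitor:
    def __init__(self):
        # Safe defaults so the dashboard can render even without psutil.
        self.cpu_percent = 0.0
        self.memory_percent = 0.0

    def update_metrics(self):
        if not PSUTIL_AVAILABLE:
            return  # keep defaults instead of raising AttributeError on None
        self.cpu_percent = psutil.cpu_percent(interval=None)
        self.memory_percent = psutil.virtual_memory().percent
```

With a guard like this in place, the test-side skip below becomes a belt-and-suspenders measure rather than a hard requirement.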
Proposed fix
from cortex.dashboard import (
ACTION_MAP,
BAR_WIDTH,
BYTES_PER_GB,
CRITICAL_THRESHOLD,
+ PSUTIL_AVAILABLE,
CommandHistory,
DashboardApp,
@@
def test_update_metrics_when_enabled(self):
"""Metrics should be populated after enabling and updating with actual system values."""
+ if not PSUTIL_AVAILABLE:
+ self.skipTest("psutil not available in test environment")
monitor = SystemMonitor()
monitor.enable_monitoring()
monitor.update_metrics()
🧰 Tools
🪛 GitHub Actions: Cortex Automation
[error] 65-65: TestSystemMonitor::test_update_metrics_when_enabled failed due to Metrics error in Cortex dashboard.
🤖 Prompt for AI Agents
In `@tests/test_dashboard.py` around lines 20 - 70, Import PSUTIL_AVAILABLE from
cortex.dashboard and modify the test_update_metrics_when_enabled method in
TestSystemMonitor to skip when PSUTIL_AVAILABLE is False; for example, use
unittest.skipUnless(PSUTIL_AVAILABLE, ...) or an early self.skipTest call at the
top of test_update_metrics_when_enabled so update_metrics() is only exercised
when psutil is available.


Summary
cortex install command to install any packages as intended.
bench and doctor command.
Type of Change
Checklist
Testing
Run the following command: cortex dashboard to test as a user, or run pytest to test everything (or just test_dashboard.py).
https://drive.google.com/file/d/1jTAn2CdTlGjoLdYTdKKVpSEvrVD1qwSj/view?usp=sharing
note
This is a good feature, but it is slow; it could be optimized with a lighter GUI and UX, but for now it works.
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests
Chores