feat: Interactive TUI Dashboard #620
base: main
Conversation
Removed redundant import of CortexCLI before instantiation.
Note: CodeRabbit detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a terminal-based Cortex Dashboard and related monitoring subsystems, introduces install JSON output and install/health audit logging to ~/.cortex/history.db, enhances GPU and hardware detection, updates dependencies and tests, and wires a new dashboard command into the CLI.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as cortex/cli.py
    participant Dashboard as cortex.dashboard:DashboardApp
    participant Monitor as SystemMonitor
    participant Lister as ProcessLister
    participant Model as ModelLister
    participant UI as UIRenderer
    User->>CLI: cortex dashboard
    CLI->>Dashboard: DashboardApp.run()
    Dashboard->>Monitor: enable_monitoring()
    Dashboard->>Lister: enable()
    Dashboard->>Model: enable()
    Dashboard->>UI: run()
    loop periodic update
        UI->>Monitor: get_metrics()
        Monitor-->>UI: SystemMetrics
        UI->>Lister: get_processes()
        Lister-->>UI: processes
        UI->>Model: get_models()
        Model-->>UI: models
        UI->>User: render frame
        User->>UI: keypress
        alt quit
            UI->>Monitor: shutdown_gpu()
            UI-->>CLI: exit code
        else action
            UI->>Dashboard: handle_action()
        end
    end
```

```mermaid
sequenceDiagram
    actor User
    participant Install as CortexCLI.install
    participant Config as ConfigManager
    participant Audit as ~/.cortex/history.db
    participant Exec as Command Executor
    Install->>Config: prepare install (sandbox/direct)
    Config->>Audit: _log_install_audit(initial)
    alt json_output and not executing
        Install-->>User: return JSON summary
    else execute
        Install->>Exec: run install commands
        Exec-->>Install: results
        Config->>Audit: _log_install_audit(final)
        Install-->>User: return result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
CLA Verification Passed: All contributors have signed the CLA.
Summary of Changes

Hello @Sahilbhatane, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a significant new feature: an interactive Terminal User Interface (TUI) dashboard for the Cortex CLI.
Pull request overview
This PR adds an interactive TUI dashboard for real-time system monitoring with keyboard navigation, integrated cortex install workflow, and health/benchmark checks accessible via the cortex dashboard command.
Changes:
- New interactive terminal UI dashboard with real-time system metrics (CPU, RAM, GPU), process monitoring, and command history
- Integration with existing `cortex install`, `bench`, and `doctor` commands for package management and system diagnostics
- Privacy-focused explicit-intent pattern: no auto-collection of system data until the user explicitly enables monitoring
Reviewed changes
Copilot reviewed 14 out of 16 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| cortex/dashboard.py | Core dashboard implementation with SystemMonitor, ProcessLister, ModelLister, UIRenderer, and DashboardApp classes |
| cortex/cli.py | Added dashboard() method and CLI argument parser integration |
| tests/test_dashboard.py | Comprehensive test suite covering all dashboard components and explicit-intent patterns |
| cortex/hardware_detection.py | Refactored exception handling from bare except: to except Exception: and extracted GPU parsing logic |
| cortex/gpu_manager.py | Improved GPU detection with vendor-matching logic for hybrid GPU systems |
| cortex/health_score.py | Added SQLite audit logging for health check history |
| cortex/config_manager.py | Enhanced install audit logging and added fallback logic for missing sandbox executor |
| requirements.txt | Added psutil and nvidia-ml-py dependencies for system monitoring |
| requirements-dev.txt | New development dependencies file |
| pyproject.toml | Excluded myenv virtual environment from linting |
| .gitignore | Added .cursor/ IDE directory |
| .github/workflows/ci.yml | Updated black exclusion pattern to include myenv |
| .github/workflows/automation.yml | Added fail-fast: false, pip caching, Python version updates, and formatting improvements |
| docs/DASHBOARD_IMPLEMENTATION.md | Comprehensive implementation guide and testing documentation |
Code Review
This pull request introduces a significant new feature: an interactive TUI dashboard for system monitoring. The implementation is extensive and well-structured, including new tests and documentation. Additionally, it adds audit logging for installations and health checks, and refactors hardware detection logic for better robustness. While the new dashboard is impressive, there are a few areas for improvement. I've identified some dead code, fragile stdout parsing that could be made more robust, and repeated imports within the new dashboard module. I've also noted a couple of places where exceptions are silently ignored, which could hide bugs, and a duplicated dependency in requirements.txt. My feedback aims to enhance the maintainability and reliability of this new feature.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/config_manager.py (2)
839-855: Sandboxed installs aren't being audited. When sandboxing is available, installs are executed without an audit record. Add an audit entry before executing so successful sandboxed installs are captured. As per coding guidelines, ensure all operations are logged to `~/.cortex/history.db` for audit purposes.

📝 Suggested addition:

```diff
 elif source == self.SOURCE_NPM:
     command = (
         f"npm install -g {name}@{version}"
         if version
         else f"npm install -g {name}"
     )
 else:
     return False
+self._log_install_audit(
+    package_name=name,
+    version=version,
+    source=source,
+    is_dry_run=False,
+    is_sandboxed=True,
+    is_direct=False,
+    escalation_consent=False,
+)
 result = self.sandbox_executor.execute(command)
 return result.success
```
947-974: Direct installs run sudo without explicit confirmation or dry-run default. This executes privileged operations immediately, which conflicts with the default dry-run requirement and explicit confirmation for privilege escalation. Consider adding an `execute`/`confirmed` gate and defaulting to dry-run/log-only unless the user explicitly confirms. As per coding guidelines, require explicit user confirmation for sudo and keep install operations dry-run by default.
🤖 Fix all issues with AI agents
In `@cortex/dashboard.py`:
- Around line 311-355: The update_metrics and update_processes methods currently
call psutil unguarded; add an early return at the top of each (similar to main()
and run()) that checks PSUTIL_AVAILABLE and returns immediately if false to
avoid AttributeError when psutil failed to import; update_metrics should check
PSUTIL_AVAILABLE before using psutil.cpu_percent/virtual_memory and
update_processes should check it before calling any psutil functions, keeping
existing monitoring-enabled checks and locking logic intact.
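The guard described above can be sketched as follows. This is a simplified illustration, not the PR's actual class: `PSUTIL_AVAILABLE` and `update_metrics` come from the review context, while the attribute names and class body here are assumptions.

```python
import threading

# Import guard: if psutil is missing, monitoring degrades gracefully
# instead of raising AttributeError on psutil.cpu_percent.
try:
    import psutil
    PSUTIL_AVAILABLE = True
except ImportError:
    psutil = None
    PSUTIL_AVAILABLE = False


class SystemMonitor:
    """Simplified sketch of a monitor with an availability guard."""

    def __init__(self) -> None:
        self.monitoring_enabled = False
        self.state_lock = threading.Lock()
        self.cpu_percent = 0.0
        self.ram_percent = 0.0

    def update_metrics(self) -> None:
        # Early return prevents touching psutil when it failed to import,
        # and preserves the explicit-intent check.
        if not PSUTIL_AVAILABLE or not self.monitoring_enabled:
            return
        cpu = psutil.cpu_percent(interval=None)
        mem = psutil.virtual_memory()
        with self.state_lock:
            self.cpu_percent = cpu
            self.ram_percent = mem.percent
```

The same early-return shape would apply to `update_processes`.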
In `@cortex/health_score.py`:
- Around line 493-529: The current block catches only OSError so
sqlite3-specific exceptions will propagate; move the sqlite3 import to the file
top (per PEP8) and change the exception handling around the audit DB block to
catch sqlite3.Error as well (e.g., except (OSError, sqlite3.Error) as e) so
connection/execute/commit failures (OperationalError, DatabaseError, etc.) are
handled; keep the existing behavior (silently skipping or logging) but ensure
sqlite3 is referenced by its top-level import and that
audit_db_path/health_checks insert logic remains unchanged.
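A minimal sketch of the widened exception handling, with `sqlite3` imported at module level per PEP 8. The function name and table shape are illustrative, not the actual `health_score.py` code.

```python
import sqlite3  # module-level import per PEP 8
from pathlib import Path


def log_health_check(db_path: Path, score: int) -> bool:
    """Append a health-check record; tolerate both I/O and DB errors."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS health_checks "
                "(ts TEXT DEFAULT CURRENT_TIMESTAMP, score INTEGER)"
            )
            conn.execute(
                "INSERT INTO health_checks (score) VALUES (?)", (score,)
            )
            conn.commit()
        finally:
            conn.close()
        return True
    except (OSError, sqlite3.Error):
        # Catching sqlite3.Error covers OperationalError, DatabaseError,
        # etc.; audit logging is best-effort and must not abort the check.
        return False
```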
In `@docs/DASHBOARD_IMPLEMENTATION.md`:
- Around line 51-544: Update the fenced code blocks in
DASHBOARD_IMPLEMENTATION.md to include language identifiers (e.g., ```bash,
```python, ```text) so markdownlint MD040 is satisfied; specifically add
languages to the file-structure block, dependency lists, SystemMetrics dataclass
block, Core Components method examples
(start/stop/get_metrics/_collect_metrics), all code examples under Testing
Strategy (test functions), installation/usage commands, and API examples
(monitor = SystemMonitor(...), lister = ProcessLister(...), renderer =
UIRenderer(...), app = DashboardApp(...)); run the linter to confirm no
remaining MD040 violations and keep language choices consistent (bash for shell
commands, python for code, text for plain output).
- Around line 91-115: Docs reference functions and return values that don't
exist; update the documentation to match the current implementation by:
replacing references to SystemMonitor.start/stop with the actual methods
enable_monitoring() and update_metrics() (or whichever SystemMonitor methods are
present), change UIRenderer.render to UIRenderer.run, and correct the
CommandHistory.load_history description to state that load_history() returns
None (or update the implementation if you intend it to return a list) and adjust
examples accordingly; also correct the test path from test/test_dashboard.py to
tests/test_dashboard.py and apply the same name/behavior fixes in the other
sections called out (150-227, 252-367, 480-543) so all API names (SystemMonitor,
UIRenderer, CommandHistory, DashboardApp) and return contracts in the docs match
the code.
- Around line 731-733: Update the three bullet lines for "Rich Library",
"psutil", and "NVIDIA NVML (nvidia-ml-py)" to use Markdown link syntax instead
of bare URLs; e.g., replace the raw URL after each library name with a bracketed
link text and parenthesized URL (e.g., change "Rich Library:
https://rich.readthedocs.io/" to "Rich Library:
[https://rich.readthedocs.io/](https://rich.readthedocs.io/)") so markdownlint
MD034 no longer flags bare URLs.
In `@requirements.txt`:
- Around line 8-9: The requirements file contains duplicate entries for the same
package with conflicting constraints; remove the redundant lines so only one
canonical PyYAML entry remains (normalize casing to "PyYAML") and choose a
single version specifier (either a strict "==6.0.3" or a range ">=6.0.0") —
delete the other two entries (the lowercase "pyyaml" and the other "PyYAML"
line) so only the chosen PyYAML specification is present and there are no
conflicting constraints.
- Around line 27-29: Add psutil>=5.9.0 and nvidia-ml-py>=12.0.0 to
pyproject.toml so installs via pip include the dashboard dependencies used in
cortex/dashboard.py (psutil for CPU/memory and pynvml from nvidia-ml-py for
GPU). Update the [project.optional-dependencies] section (or
[project.dependencies] if you decide dashboard is core) to include
"psutil>=5.9.0" and "nvidia-ml-py>=12.0.0" (e.g., optional group named
dashboard) and ensure the package metadata is valid so pip install cortex-linux
pulls them when requested.
In `@tests/test_dashboard.py`:
- Around line 321-329: Move the local import inside the test to the module
level: remove the in-function "import time" from
test_installation_progress_update_elapsed and add a top-of-file "import time"
alongside the other imports so the test continues to call
InstallationProgress().start_time and update_elapsed() unchanged; reference the
test method name test_installation_progress_update_elapsed and the
InstallationProgress class and update_elapsed() method to locate the change.
- Around line 55-69: The test test_update_metrics_when_enabled is flaky because
it relies on real psutil calls; change it to patch psutil.cpu_percent and
psutil.virtual_memory so SystemMonitor.update_metrics() receives deterministic
non-zero values; using unittest.mock.patch (or patch.object) around the call to
monitor.update_metrics(), return a positive cpu_percent and a
virtual_memory-like object with used/total and percent non-zero, then call
monitor.get_metrics() and run the same assertions—this ensures
SystemMonitor.update_metrics() and get_metrics() are exercised deterministically
without hitting the host environment.
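The deterministic-mocking pattern suggested above can be sketched like this. The `SystemMonitor` and `psutil` stand-ins below are minimal stubs so the example is self-contained; in the real suite the patches would target `cortex.dashboard` instead.

```python
import types
import unittest
from unittest.mock import MagicMock, patch

# Stand-in for the module-level psutil reference in cortex.dashboard.
psutil = types.SimpleNamespace(
    cpu_percent=lambda interval=None: 0.0,
    virtual_memory=lambda: types.SimpleNamespace(used=0, percent=0.0),
)


class SystemMonitor:
    """Minimal stub mirroring the update/get flow under test."""

    def __init__(self):
        self.enabled = True
        self.cpu = 0.0
        self.ram_used_gb = 0.0

    def update_metrics(self):
        if not self.enabled:
            return
        self.cpu = psutil.cpu_percent(interval=None)
        self.ram_used_gb = psutil.virtual_memory().used / (1024 ** 3)


class TestSystemMonitor(unittest.TestCase):
    def test_update_metrics_when_enabled(self):
        fake_mem = MagicMock(used=8 * 1024 ** 3, percent=50.0)
        with patch.object(psutil, "cpu_percent", return_value=12.5), \
             patch.object(psutil, "virtual_memory", return_value=fake_mem):
            m = SystemMonitor()
            m.update_metrics()
        # Deterministic: no longer depends on host load or free RAM.
        self.assertGreater(m.cpu, 0)
        self.assertGreater(m.ram_used_gb, 0)
```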
In `@tests/test_stdin_handler.py`:
- Line 9: Remove the unused MagicMock import from the tests import line: update
the import that currently reads "from unittest.mock import MagicMock, patch" to
only import patch so only the used symbol (patch) remains; ensure no other
references to MagicMock exist in the file (e.g., in tests using patch) before
committing the change.
♻️ Duplicate comments (1)
cortex/config_manager.py (1)
821-828: Audit entry should not mark this as dry-run. This is a real install attempt (even if blocked), so `is_dry_run` should be false to keep audit records accurate.

🐛 Proposed fix:

```diff
-    is_dry_run=True,
+    is_dry_run=False,
```
🧹 Nitpick comments (4)
requirements-dev.txt (1)
1-12: LGTM with a minor note on version alignment. The development dependencies file is properly structured and references the base requirements. There are minor version specification differences from `pyproject.toml` (e.g., `pytest-timeout>=2.3.1` here vs `>=2.3.0`, `isort>=5.13.0` vs `>=5.0.0`), but these are compatible and won't cause issues. Consider keeping version constraints consistent between `requirements-dev.txt` and `pyproject.toml` for maintainability.

cortex/dashboard.py (1)

1696-1735: Consider locking around simulated progress updates. The simulation thread mutates shared state without `state_lock` while the UI reads it; wrapping updates with the lock would align with the thread-safety design.

tests/test_dashboard.py (2)
135-145: Ensure mock is exercised by verifying call. If `REQUESTS_AVAILABLE` is `False` in the test environment, `check_ollama()` returns `False` early without calling `requests.get`, making the mock ineffective. Add an assertion that the mock was called, or patch `REQUESTS_AVAILABLE`.

♻️ Proposed fix:

```diff
     @patch("cortex.dashboard.requests.get")
-    def test_check_ollama_available(self, mock_get):
+    @patch("cortex.dashboard.REQUESTS_AVAILABLE", True)
+    def test_check_ollama_available(self, mock_get):
         """Should detect when Ollama is running."""
         mock_response = MagicMock()
         mock_response.status_code = 200
         mock_get.return_value = mock_response
         lister = ModelLister()
         result = lister.check_ollama()
         self.assertTrue(result)
         self.assertTrue(lister.ollama_available)
+        mock_get.assert_called_once()
```

157-177: Same concern: patch `REQUESTS_AVAILABLE` for reliable mock testing. Apply the same fix to ensure the mock is exercised.

♻️ Proposed fix:

```diff
     @patch("cortex.dashboard.requests.get")
-    def test_update_models_success(self, mock_get):
+    @patch("cortex.dashboard.REQUESTS_AVAILABLE", True)
+    def test_update_models_success(self, mock_get):
         """Should parse Ollama API response correctly."""
         ...
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- .github/workflows/automation.yml
- .github/workflows/ci.yml
- .gitignore
- cortex/cli.py
- cortex/config_manager.py
- cortex/dashboard.py
- cortex/gpu_manager.py
- cortex/hardware_detection.py
- cortex/health_score.py
- docs/DASHBOARD_IMPLEMENTATION.md
- pyproject.toml
- requirements-dev.txt
- requirements.txt
- tests/test_dashboard.py
- tests/test_interpreter.py
- tests/test_stdin_handler.py
💤 Files with no reviewable changes (1)
- tests/test_interpreter.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/health_score.py, cortex/hardware_detection.py, cortex/gpu_manager.py, tests/test_stdin_handler.py, tests/test_dashboard.py, cortex/config_manager.py, cortex/dashboard.py, cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/test_stdin_handler.py, tests/test_dashboard.py
🧠 Learnings (3)
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Add documentation for all new features
Applied to files:
docs/DASHBOARD_IMPLEMENTATION.md
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Log all operations to ~/.cortex/history.db for audit purposes
Applied to files:
cortex/health_score.py, cortex/config_manager.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Use dry-run mode by default for all installation operations
Applied to files:
cortex/config_manager.py
🧬 Code graph analysis (4)
tests/test_dashboard.py (1)
cortex/dashboard.py (23)
CommandHistory (552-612), DashboardApp (1859-1910), DashboardTab (135-139), InstallationProgress (201-227), ModelLister (450-544), ProcessLister (363-442), SystemMetrics (184-197), SystemMonitor (247-355), UIRenderer (620-1851), get_metrics (306-309), enable_monitoring (280-282), update_metrics (311-355), enable (402-404), enable (466-468), update_processes (406-437), check_ollama (470-480), update_models (482-510), get_history (609-612), add_command (603-607), _render_header (679-693), _render_resources (695-713), update_elapsed (216-227), main (1913-1921)
cortex/config_manager.py (1)
cortex/sandbox/sandbox_executor.py (1)
execute(501-633)
cortex/dashboard.py (2)
cortex/progress_indicators.py (1)
total_steps (99-100)
cortex/progress_tracker.py (1)
elapsed_time (78-83)
cortex/cli.py (1)
cortex/dashboard.py (3)
DashboardApp (1859-1910), run (1770-1851), run (1884-1910)
🪛 GitHub Actions: CI
cortex/dashboard.py
[error] 319-355: TestSystemMonitor::test_update_metrics_when_enabled failed. Runtime error during metrics collection: AttributeError: 'NoneType' object has no attribute 'cpu_percent'. The log indicates 'Metrics error' at cortex.dashboard:355 and the traceback shows psutil.cpu_percent(...) access failing likely due to an uninitialized/None psutil reference.
🪛 GitHub Actions: Cortex Automation
tests/test_dashboard.py
[error] 65-65: TestSystemMonitor.test_update_metrics_when_enabled failed: Assertion 'metrics.ram_used_gb > 0' is False (ram_used_gb = 0.0).
cortex/dashboard.py
[error] 319-355: Metrics error: 'NoneType' object has no attribute 'cpu_percent'. Traceback indicates psutil.cpu_percent call in update_metrics raised AttributeError; likely psutil module is None or incorrectly initialized.
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~757-~757: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
275-275: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
296-296: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
316-316: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
338-338: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
357-357: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
706-706: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
731-731: Bare URL used
(MD034, no-bare-urls)
732-732: Bare URL used
(MD034, no-bare-urls)
733-733: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (24)
pyproject.toml (1)
120-120: LGTM! Adding `myenv` to the exclusion patterns for both Black and Ruff is consistent with the existing `venv` exclusions and aligns with the `.gitignore` entry. This ensures virtual environment files are properly ignored by code formatters/linters.

Also applies to: 151-151
.gitignore (1)
121-121: LGTM! Adding `.cursor/` to the IDE exclusions is appropriate. Cursor is an AI-powered code editor, and its configuration directory should be excluded from version control like other editor-specific directories.

.github/workflows/ci.yml (1)

34-35: No additional feedback.

.github/workflows/automation.yml (3)

16-55: No additional feedback.

60-84: No additional feedback.

90-109: No additional feedback.

cortex/hardware_detection.py (2)
398-444: Nice refactor: centralized lspci GPU parsing.
Pulling GPU parsing into a helper improves readability and makes vendor/detail extraction easier to extend.
320-346: Resilience tweaks look good.
The broadened exception handling in detection paths keeps the detector robust across diverse environments.

Also applies to: 386-386, 581-588, 606-606, 626-626, 635-635, 649-649
cortex/gpu_manager.py (1)
273-294: The `active_gpu` field is properly typed (`GPUDevice | None = None`) and the current logic correctly leaves it as `None` when no vendor-matching or active device is found. However, the claim about this "leaking into UI/logic that expects a non-null active GPU" is not supported by the codebase. A comprehensive search reveals that `active_gpu` is never dereferenced or accessed anywhere: not in `switch_mode()`, `get_app_launch_command()`, `display_status()`, `display_app_recommendations()`, or any test. The test suite explicitly expects `active_gpu` to be `None` in certain cases, confirming this is intended behavior. The proposed fallback is unnecessary since the field is unused in practice.

Likely an incorrect or invalid review comment.
cortex/config_manager.py (3)
77-81: POSIX guard for ownership enforcement looks solid. This avoids runtime errors on platforms without ownership APIs.

331-336: Explicit config type annotation is clear. Nice readability win without changing behavior.

464-466: Missing/invalid version handling as upgrade is reasonable. Keeps package diffs actionable when versions are omitted.
cortex/dashboard.py (4)
74-229: Constants, enums, and dataclasses are cleanly organized. Centralized configuration and typed data containers make the module easy to reason about.

450-545: Ollama model listing and history loading are privacy-conscious. Explicit enable flags and minimal data collection are good safeguards.

Also applies to: 552-613

1289-1452: Dry-run + confirmation flow for installs is well-structured. The staged planning and explicit confirmation before execution is a strong safety UX.

1859-1911: DashboardApp lifecycle handling looks solid. Clear startup messaging, graceful shutdown, and GPU cleanup are handled well.
cortex/cli.py (1)
2830-2848: Dashboard command integration is clean and consistent. Method, parser, and routing are wired together coherently.
Also applies to: 2987-2991, 3593-3595
tests/test_dashboard.py (7)
1-36: LGTM! Good module docstring documenting test coverage areas, and imports are properly organized.

80-117: LGTM! Good coverage of the explicit-intent pattern and privacy protections. The `test_no_cmdline_collected` test properly validates that command-line arguments are never collected.

180-204: LGTM! Good coverage of the explicit-intent pattern and edge cases. Consider adding a test for `load_history()` with mocked history files to verify the file reading logic.

206-274: LGTM! Good smoke tests for UI rendering methods. The tests verify that rendering doesn't crash under various states, which is appropriate for initial coverage.

276-297: LGTM! Good verification that `DashboardApp` initializes all components without triggering auto-collection, aligning with the privacy-first design.

331-363: LGTM! Good validation of constants. Optionally, you could enhance `test_action_map_structure` to verify that `action_type` is an `ActionType` enum member for stronger type safety.

365-366: LGTM! Standard test runner invocation.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cortex/config_manager.py (1)
845-864: Missing audit logging for sandboxed install path. When `sandbox_executor` is available and executes an install (lines 845-861), no audit entry is logged. Per coding guidelines, all operations should be logged to `~/.cortex/history.db`. Currently, only the fallback path (when `sandbox_executor is None`) logs audit entries.

Proposed fix to add audit logging for sandboxed installs:

```diff
     result = self.sandbox_executor.execute(command)
+    self._log_install_audit(
+        package_name=name,
+        version=version,
+        source=source,
+        is_dry_run=False,
+        is_sandboxed=True,
+        is_direct=False,
+        escalation_consent=source == self.SOURCE_APT,  # APT uses sudo
+        error=result.stderr if result.failed else None,
+    )
     return result.success
 except Exception as e:
     logger.error(f"Sandboxed install failed for {name}: {e}", exc_info=True)
+    self._log_install_audit(
+        package_name=name,
+        version=version,
+        source=source,
+        is_dry_run=False,
+        is_sandboxed=True,
+        is_direct=False,
+        escalation_consent=source == self.SOURCE_APT,
+        error=str(e),
+    )
     return False
```

Based on learnings, all operations should be logged to `~/.cortex/history.db` for audit purposes.

cortex/hardware_detection.py (1)
473-483: NVIDIA detail mapping breaks on mixed-GPU systems (lines 473-483). `info.gpu` now includes non-NVIDIA entries; indexing `nvidia-smi` output by position can skip NVIDIA GPUs (e.g., Intel iGPU first). Map details to the NVIDIA-only list instead.

🛠️ Proposed fix:

```diff
-for i, line in enumerate(result.stdout.strip().split("\n")):
-    parts = [p.strip() for p in line.split(",")]
-    if len(parts) >= 4 and i < len(info.gpu):
-        if info.gpu[i].vendor == GPUVendor.NVIDIA:
-            info.gpu[i].model = parts[0]
-            info.gpu[i].memory_mb = int(parts[1])
-            info.gpu[i].driver_version = parts[2]
-            info.gpu[i].compute_capability = parts[3]
+nvidia_gpus = [gpu for gpu in info.gpu if gpu.vendor == GPUVendor.NVIDIA]
+for i, line in enumerate(result.stdout.strip().split("\n")):
+    parts = [p.strip() for p in line.split(",")]
+    if len(parts) >= 4 and i < len(nvidia_gpus):
+        gpu = nvidia_gpus[i]
+        gpu.model = parts[0]
+        gpu.memory_mb = int(parts[1])
+        gpu.driver_version = parts[2]
+        gpu.compute_capability = parts[3]
```
🤖 Fix all issues with AI agents
In `@cortex/config_manager.py`:
- Around line 954-963: The audit is currently recorded before the install
outcome is known in the direct install path; update the logic around
self._log_install_audit in cortex/config_manager.py so it records the attempt
and then records the final result after the subprocess executes (or
alternatively move the existing self._log_install_audit call to after the
subprocess call that performs the install), by: 1) invoking
self._log_install_audit with an "attempt" entry (is_direct=True,
escalation_consent=True, is_dry_run=False, is_sandboxed=False) prior to running
the subprocess, and then 2) calling self._log_install_audit again after the
subprocess completes with the actual outcome (set success/failure via a new flag
or by adjusting parameters), or simply moving the single call to after the
subprocess so it logs the real result; ensure the call sites reference the same
helper method name (_log_install_audit) and capture subprocess return
code/exception to set the final audit status.
- Around line 964-965: The apt install command for SOURCE_APT currently includes
the "-y" flag which bypasses per-operation confirmation; update the logic in
config_manager.py where SOURCE_APT and cmd are constructed so that either (a)
the "-y" flag is removed and the code prompts interactively before running the
sudo command using the same console.input pattern used in cortex/cli.py (lines
similar to the Reclaim ownership prompt), or (b) if keeping non-interactive
mode, explicitly display a clear privileged-operation warning and require
explicit per-operation consent (e.g., a yes/no prompt) unless
CORTEX_ALLOW_DIRECT_INSTALL is set and confirmed interactively; ensure the
change is applied around the code that builds cmd (the
["sudo","apt-get","install", ...] list) and that user choice controls whether
the sudo command is executed.
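The consent gate suggested above could be sketched as follows. These helper names (`build_apt_command`, `confirm_install`) and the prompt wording are illustrative assumptions, not the actual `config_manager.py` code; the point is that `-y` is only added after an explicit per-operation yes.

```python
def build_apt_command(package: str, *, assume_yes: bool) -> list[str]:
    """Build the apt-get invocation; "-y" only when consent was given."""
    cmd = ["sudo", "apt-get", "install"]
    if assume_yes:
        cmd.append("-y")
    cmd.append(package)
    return cmd


def confirm_install(package: str, ask=input) -> bool:
    """Require an explicit per-operation yes before any sudo command runs.

    `ask` is injectable so tests (and non-TTY callers) can supply answers.
    """
    answer = ask(f"Run 'sudo apt-get install {package}'? [y/N] ")
    return answer.strip().lower() in ("y", "yes")
```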
In `@cortex/dashboard.py`:
- Around line 1022-1046: When starting a new Bench or Doctor run, clear the
cancellation flag so prior calls to _cancel_operation() don't permanently
prevent workers from running; in _start_bench (and likewise in _start_doctor)
call self.stop_event.clear() while holding the state lock (before launching
workers/setting bench_running/doctor_running) so the worker loops will not
immediately exit due to a stale stop_event; place the clear near the top of the
method (e.g., just after acquiring self.state_lock and before setting
self.bench_running/self.doctor_running and enabling monitoring).
- Around line 1292-1311: The _start_installation method currently calls
_enable_monitoring which triggers shell history/process collection without
explicit user consent; remove that implicit call so installation only resets
installation_progress (create new InstallationProgress and set state to
InstallationState.WAITING_INPUT) and does not enable monitoring. Ensure any
logic that should start monitoring is moved behind an explicit opt-in path
(e.g., the Bench/Doctor UI event handler) rather than in _start_installation,
and update references/tests that relied on _start_installation to no longer
assume monitoring is enabled.
- Around line 977-1021: Add audit logging to the dashboard handler entry points
by creating and calling a small helper (e.g., _append_history_entry or
_audit_log) that writes JSON records to ~/.cortex/history.db; for each handler
named _start_installation, _start_bench, _start_doctor, _confirm_installation,
and _cancel_operation, call this helper at the start (and on completion/failure
where appropriate) with the action type pulled from ACTION_MAP (lookup by the
key used to trigger the handler), the target package name (from installation
context or self.input_text), a UTC timestamp, and an outcome status ("started",
"succeeded", "failed" or "cancelled"); ensure the helper handles file
open/append, atomic write (e.g., write+flush+fsync or temp-rename) and
exceptions so failures don't crash the UI, and reference these handler names and
ACTION_MAP when adding the calls.
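One shape the suggested helper could take: append one JSON record per line and flush/fsync so a UI crash never drops the write. The path and field names below are illustrative only, not the real history.db schema:

```python
import json
import os
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def audit_log(path: Path, action: str, target: str, status: str) -> None:
    """Append one JSON audit record; never let audit failures crash the UI."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": target,
        "status": status,
    }
    try:
        path.parent.mkdir(parents=True, exist_ok=True)
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()
            os.fsync(f.fileno())  # durable even if the dashboard dies next frame
    except OSError:
        pass  # swallow: the dashboard must keep rendering

# Demo against a temp file standing in for ~/.cortex/history.db.
log_path = Path(tempfile.mkdtemp()) / "history.jsonl"
audit_log(log_path, "install", "curl", "started")
audit_log(log_path, "install", "curl", "succeeded")
records = [json.loads(line) for line in log_path.read_text().splitlines()]
```

Append-only JSON lines keep each write atomic enough for a single process; concurrent writers would additionally need a lock, as a later comment on this PR notes.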
- Around line 525-552: get_available_models currently performs a synchronous
HTTP GET on every call (and _render_models calls it ~10x/sec), causing excessive
requests and render stalls; instead cache the parsed models list and a
last_fetched timestamp on the Dashboard instance, update that cache from the
existing background monitor/update_models loop (or a new background task) no
more frequently than a TTL (e.g., 5 seconds), and change get_available_models to
return the cached data under self.lock without making network calls; ensure the
monitor/update_models path uses the same parsing/rounding logic (the mapping of
"name" and "size_gb") and updates the cache and timestamp under the lock so
reads from get_available_models remain thread-safe and immediate.
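The cache-plus-TTL shape being asked for can be sketched like this; `fetch` stands in for the Ollama `/api/tags` call, `refresh()` runs in the background monitor loop, and the render path (`get()`) never touches the network:

```python
import threading
import time

class ModelCache:
    """TTL-cached model list: refresh() in the monitor loop, get() in render."""

    def __init__(self, fetch, ttl=5.0):
        self._fetch = fetch
        self._ttl = ttl
        self._lock = threading.Lock()
        self._models = []
        self._last_fetched = 0.0

    def refresh(self, now=None):
        """Fetch at most once per TTL window; returns True if a fetch happened."""
        now = time.monotonic() if now is None else now
        with self._lock:
            if now - self._last_fetched < self._ttl:
                return False
        models = self._fetch()  # network call happens outside the lock
        with self._lock:
            self._models = models
            self._last_fetched = now
        return True

    def get(self):
        """Render path: cached data only, immediate and thread-safe."""
        with self._lock:
            return list(self._models)

calls = []
cache = ModelCache(lambda: calls.append(1) or [{"name": "llama3", "size_gb": 4.7}])
first = cache.refresh(now=100.0)   # fetches
second = cache.refresh(now=102.0)  # within TTL: skipped
models = cache.get()
```

Passing `now` explicitly makes the TTL logic testable; the real code would just call `refresh()` from the existing update loop.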
- Around line 1474-1514: The confirmed-install path currently calls
CortexCLI.install (in _execute_confirmed_install) which ultimately runs
untrusted shell commands via _execute_command or run_parallel_install without
Firejail; update the flow to invoke SandboxExecutor for executing generated
commands: either (A) refactor CortexCLI.install to accept an executor parameter
(defaulting to existing behaviour) and pass an instance of SandboxExecutor from
_execute_confirmed_install, or (B) wrap the CLI-generated command list in
_execute_confirmed_install and call SandboxExecutor.run()/execute() instead of
letting _execute_command or run_parallel_install call subprocess.run directly;
ensure you reference and reuse the SandboxExecutor class and preserve the
existing dry_run/execute flags and error handling semantics used in
ConfigManager._install_with_sandbox.
In `@docs/DASHBOARD_IMPLEMENTATION.md`:
- Around line 415-418: Update the documentation line that currently states
"psutil>=5.0.0" to match the actual dependency requirement by changing it to
"psutil>=5.9.0"; ensure the text in DASHBOARD_IMPLEMENTATION.md exactly reflects
the version constraint used in requirements and pyproject (psutil>=5.9.0) so all
references to the psutil requirement are consistent.
♻️ Duplicate comments (1)
tests/test_dashboard.py (1)
57-71: Mock psutil to prevent flaky metrics test. This test relies on real `psutil` values; in CI/sandboxed environments, `update_metrics()` can leave defaults and fail. Mock `psutil` for deterministic assertions.

🔧 Proposed fix

```diff
+    @patch("cortex.dashboard.psutil")
-    def test_update_metrics_when_enabled(self):
+    def test_update_metrics_when_enabled(self, mock_psutil):
         """Metrics should be populated after enabling and updating with actual system values."""
+        mock_psutil.cpu_percent.return_value = 25.0
+        mock_vm = MagicMock()
+        mock_vm.percent = 60.0
+        mock_vm.used = 8 * (1024**3)
+        mock_vm.total = 16 * (1024**3)
+        mock_psutil.virtual_memory.return_value = mock_vm
+
         monitor = SystemMonitor()
         monitor.enable_monitoring()
         monitor.update_metrics()

         metrics = monitor.get_metrics()

-        # Verify metrics are populated with real system values
-        self.assertGreaterEqual(metrics.cpu_percent, 0)
-        self.assertGreaterEqual(metrics.ram_percent, 0)
-        self.assertGreater(metrics.ram_used_gb, 0)
-        self.assertGreater(metrics.ram_total_gb, 0)
-
-        # Verify RAM values are reasonable
-        self.assertLess(metrics.ram_used_gb, metrics.ram_total_gb)
+        self.assertEqual(metrics.cpu_percent, 25.0)
+        self.assertEqual(metrics.ram_percent, 60.0)
+        self.assertEqual(metrics.ram_used_gb, 8.0)
+        self.assertEqual(metrics.ram_total_gb, 16.0)
```
🧹 Nitpick comments (2)
cortex/health_score.py (1)
496-529: Use explicit column list (and optional busy timeout) for SQLite insert robustness.
This avoids future schema-change breakage and reduces "database is locked" failures under concurrent runs.

♻️ Proposed update

```diff
-        with sqlite3.connect(str(audit_db_path)) as conn:
+        with sqlite3.connect(str(audit_db_path)) as conn:
+            conn.execute("PRAGMA busy_timeout = 5000")
             cursor = conn.cursor()
@@
-            cursor.execute(
-                """
-                INSERT INTO health_checks VALUES (?, ?, ?)
-                """,
+            cursor.execute(
+                """
+                INSERT INTO health_checks (timestamp, overall_score, factors)
+                VALUES (?, ?, ?)
+                """,
                 (
                     entry["timestamp"],
                     entry["overall_score"],
                     json.dumps(entry["factors"]),
                 ),
             )
```

cortex/config_manager.py (1)
901-916: Consider caching table creation to improve performance. The `CREATE TABLE IF NOT EXISTS` statement is executed on every audit log call. While SQLite handles this efficiently, it adds overhead on every install operation. Consider creating the table once during `__init__` or using a class-level flag to track initialization.

Optional optimization

```diff
 class ConfigManager:
     # ... existing code ...
+    _audit_table_initialized = False

     def _log_install_audit(
         self,
         # ... params ...
     ) -> None:
         # ... docstring ...
         try:
             import sqlite3
             from datetime import datetime

             audit_db_path = Path.home() / ".cortex" / "history.db"
             audit_db_path.parent.mkdir(parents=True, exist_ok=True)

             with sqlite3.connect(str(audit_db_path)) as conn:
                 cursor = conn.cursor()
-                # Create audit table if it doesn't exist
-                cursor.execute(
-                    """
-                    CREATE TABLE IF NOT EXISTS install_audit (
-                        ...
-                    )
-                    """
-                )
+                # Create audit table if it doesn't exist (once per process)
+                if not ConfigManager._audit_table_initialized:
+                    cursor.execute(
+                        """
+                        CREATE TABLE IF NOT EXISTS install_audit (
+                            ...
+                        )
+                        """
+                    )
+                    ConfigManager._audit_table_initialized = True
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
cortex/config_manager.py
cortex/dashboard.py
cortex/hardware_detection.py
cortex/health_score.py
docs/DASHBOARD_IMPLEMENTATION.md
pyproject.toml
requirements.txt
tests/test_dashboard.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
tests/test_dashboard.py
cortex/hardware_detection.py
cortex/health_score.py
cortex/config_manager.py
cortex/dashboard.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/test_dashboard.py
🧠 Learnings (3)
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Log all operations to ~/.cortex/history.db for audit purposes
Applied to files:
cortex/health_score.py
cortex/config_manager.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Use dry-run mode by default for all installation operations
Applied to files:
cortex/config_manager.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Add documentation for all new features
Applied to files:
docs/DASHBOARD_IMPLEMENTATION.md
🧬 Code graph analysis (4)
tests/test_dashboard.py (1)
cortex/dashboard.py (29)
CommandHistory (560-620), DashboardApp (1719-1770), DashboardTab (136-140), InstallationProgress (202-228), InstallationState (143-152), ModelLister (458-552), ProcessLister (369-450), SystemMetrics (185-198), SystemMonitor (248-361), UIRenderer (628-1711), enable_monitoring (281-283), get_processes (447-450), enable (408-410), enable (474-476), update_processes (412-445), get_models (520-523), check_ollama (478-488), update_models (490-518), _render_header (687-701), _render_resources (703-721), _render_processes (723-739), _render_models (741-769), _render_history (771-783), _render_actions (785-800), _render_footer (946-951), _render_screen (953-962), _render_progress_tab (935-944), update_elapsed (217-228), main (1773-1781)
cortex/health_score.py (1)
cortex/logging_system.py (2)
debug (196-198), warning (204-206)
cortex/config_manager.py (2)
cortex/logging_system.py (3)
debug (196-198), warning (204-206), error (208-210)
cortex/sandbox/sandbox_executor.py (1)
execute(501-633)
cortex/dashboard.py (2)
cortex/progress_indicators.py (1)
total_steps (99-100)
cortex/progress_tracker.py (1)
elapsed_time (78-83)
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~760-~760: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
14-14: Link fragments should be valid
(MD051, link-fragments)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Test (Python 3.11)
- GitHub Check: Test (Python 3.12)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (11)
cortex/health_score.py (3)
9-23: Module-level logger setup looks good.
173-176: Nice defensive logging around parsing failures. Also applies to: 213-219, 305-306, 386-399, 411-418
490-495: Good to surface history write failures via logger.
cortex/config_manager.py (8)
9-21: LGTM! Standard Python logging setup using `__name__` for the logger name, which correctly namespaces log messages to the module.
277-279: LGTM! Appropriate use of debug-level logging with stack trace for OS detection failures, with a safe fallback.
293-294: LGTM! Warning-level logging is appropriate for preference loading failures since it affects user experience but isn't critical.
336-340: LGTM! Good addition of a type annotation for the `config` variable, improving code clarity and IDE support.
420-422: LGTM! Debug-level logging for version parsing failures with lenient fallback behavior is appropriate.
468-482: LGTM! Good defensive handling for missing or invalid versions by treating them as upgrades, with appropriate debug logging for comparison failures.
560-563: LGTM! Appropriate debug-level logging when falling back to simple version comparison.
937-939: LGTM! Exception handling now logs a warning with stack trace, addressing the previous review feedback about silent exception handling.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cortex/config_manager.py (2)
845-861: Add audit logging for sandboxed installs. Sandboxed installs that succeed or fail via `sandbox_executor.execute` are not written to history.db, so those operations are missing from the audit trail. Log an attempt and outcome around the execute call. As per coding guidelines, all operations should be audited in ~/.cortex/history.db.

🧾 Suggested audit hooks around sandbox execution

```diff
-        result = self.sandbox_executor.execute(command)
-        return result.success
+        self._log_install_audit(
+            package_name=name,
+            version=version,
+            source=source,
+            is_dry_run=False,
+            is_sandboxed=True,
+            is_direct=False,
+            escalation_consent=False,
+        )
+        result = self.sandbox_executor.execute(command)
+        self._log_install_audit(
+            package_name=name,
+            version=version,
+            source=source,
+            is_dry_run=False,
+            is_sandboxed=True,
+            is_direct=False,
+            escalation_consent=False,
+            error=None if result.success else "Sandboxed install failed",
+        )
+        return result.success
```
954-1035: Audit entry records escalation consent before it is known. The initial audit log sets `escalation_consent=True` before any user decision and even for non-sudo sources, which can make the audit trail inaccurate. Compute consent first (or move the attempt log after the prompt) and pass the actual value into the attempt and outcome logs. As per coding guidelines, audit records should accurately reflect privileged-operation consent.

✅ Example fix (derive consent before logging)

```diff
-        # Log initial audit entry for direct install attempt
-        self._log_install_audit(
-            package_name=name,
-            version=version,
-            source=source,
-            is_dry_run=False,
-            is_sandboxed=False,
-            is_direct=True,
-            escalation_consent=True,
-        )
+        escalation_consent = source != self.SOURCE_APT
 ...
         if source == self.SOURCE_APT:
             # Remove -y flag to require explicit confirmation
             from cortex.branding import console
@@
             allow_direct = os.environ.get("CORTEX_ALLOW_DIRECT_INSTALL", "").lower() == "true"
             if not allow_direct:
                 response = console.input("\nProceed with installation? [y/N]: ")
-                if response.lower() != "y":
+                escalation_consent = response.lower() == "y"
+                if not escalation_consent:
                     self._log_install_audit(
                         package_name=name,
                         version=version,
                         source=source,
                         is_dry_run=False,
                         is_sandboxed=False,
                         is_direct=True,
                         escalation_consent=False,
                         error="User declined consent",
                     )
                     return False
+
+        # Log initial audit entry after consent is known
+        self._log_install_audit(
+            package_name=name,
+            version=version,
+            source=source,
+            is_dry_run=False,
+            is_sandboxed=False,
+            is_direct=True,
+            escalation_consent=escalation_consent,
+        )
```
855-894: `json_output` must be clean JSON and close history entries.
Even with `json_output=True`, the function still emits status/spinner output to stdout before printing JSON, which breaks the dashboard parser. Also, returning early skips updating `InstallationHistory`, leaving a pending record. Please gate human-readable output behind `if not json_output` (or send it to stderr) and mark the history entry as SUCCESS before returning JSON.

💡 Example adjustment

```diff
-        self._print_status("🧠", "Understanding request...")
+        if not json_output:
+            self._print_status("🧠", "Understanding request...")
@@
-        for _ in range(10):
-            self._animate_spinner("Analyzing system requirements...")
-        self._clear_line()
+        if not json_output:
+            for _ in range(10):
+                self._animate_spinner("Analyzing system requirements...")
+            self._clear_line()
@@
-        if json_output:
+        if json_output:
+            if install_id:
+                history.update_installation(install_id, InstallationStatus.SUCCESS)
             import json

             output = {
                 "success": True,
                 "commands": commands,
                 "packages": packages,
                 "install_id": install_id,
             }
             print(json.dumps(output, indent=2))
             return 0
```
🤖 Fix all issues with AI agents
In `@cortex/cli.py`:
- Around line 3016-3020: show_rich_help() currently omits the newly added
"dashboard" CLI command, making it undiscoverable; update the show_rich_help()
implementation to add a help row for "dashboard" (the same way other commands
are added) so it appears in the Rich-formatted help table—locate where
show_rich_help() builds the commands/table and add a row entry for "dashboard"
with a short description like "Real-time system monitoring dashboard"
referencing the dashboard_parser command name.
- Around line 2867-2871: Update the install hint printed in the ImportError
handler so it matches the declared dashboard dependency names and versions:
replace the current "pip install psutil pynvml" message in the except
ImportError block (inside the CLI class/method handling dashboard imports in
cortex/cli.py) with a message that suggests "pip install 'psutil>=5.9.0'
'nvidia-ml-py>=12.0.0'" (keep the existing error logging and cx_print calls).
In `@cortex/config_manager.py`:
- Around line 821-844: The current logic in the block that checks
self.sandbox_executor allows falling back to direct installs (using
_install_direct) when CORTEX_ALLOW_DIRECT_INSTALL is true; change this so
privileged installs never run unsandboxed: after logging the audit via
_log_install_audit, do not call _install_direct or honor
CORTEX_ALLOW_DIRECT_INSTALL—either return False immediately (fail fast) or
attempt to instantiate a Firejail-backed executor (e.g., call a new helper like
_create_firejail_executor or ensure_sandbox_executor) and only proceed with
installation if that sandboxed executor is available; remove the direct-install
path that uses _install_direct and ensure the audit still records the refusal.
In `@cortex/dashboard.py`:
- Around line 974-1009: The handler currently treats any '\x1b' as cancel which
makes arrow-key escape sequences like '\x1b[A' be interpreted as Escape; update
_handle_key_press (and the producer _check_keyboard_input if needed) to parse
multi-byte ANSI escape sequences: when seeing '\x1b' peek/read the next byte(s)
and if they form a CSI sequence ('[' followed by 'A','B','C','D') map those to
explicit navigation actions (e.g., up/down/left/right tab/selection) or ignore
them, and only treat a lone '\x1b' (no following '['+code) as cancel; ensure
input_active handling still accepts backspace/newline/printable chars and use
last_pressed_key to reflect arrow names when appropriate.
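The distinction this fix asks for (lone ESC versus a CSI arrow sequence) can be sketched as a small pure function; `read_pending` is a hypothetical stand-in for the `select()`-guarded non-blocking stdin read in `_check_keyboard_input`:

```python
ARROWS = {"[A": "up", "[B": "down", "[C": "right", "[D": "left"}

def decode_key(first, read_pending):
    """Decode one keypress.

    `first` is the char already read; `read_pending(n)` returns up to n more
    chars without blocking (empty string if nothing is buffered).
    """
    if first != "\x1b":
        return first                 # plain key, pass through
    rest = read_pending(2)
    if rest in ARROWS:
        return ARROWS[rest]          # CSI arrow sequence, not a cancel
    if not rest:
        return "escape"              # lone ESC: this is the real cancel
    return "ignore"                  # some other escape sequence

plain = decode_key("q", lambda n: "")
lone_esc = decode_key("\x1b", lambda n: "")
arrow = decode_key("\x1b", lambda n: "[A")
```

Keeping the decoder separate from the terminal I/O lets the ESC-vs-arrow ambiguity be unit-tested without a TTY.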
- Around line 223-234: The method InstallationProgress.update_elapsed currently
lacks a return type annotation; update its signature from def
update_elapsed(self): to def update_elapsed(self) -> None: so it matches other
public methods and coding guidelines, ensuring no change to the method body or
behavior while keeping type consistency across the module.
- Around line 1872-1876: The error messages that print when PSUTIL_AVAILABLE is
false currently recommend installing "psutil>=5.0.0"; update those messages to
match the project's dependency and recommend "psutil>=5.9.0". Locate the two
print calls that use sys.stderr (the block guarded by PSUTIL_AVAILABLE in the
function that starts with the docstring "Run the app and return exit code" and
the second similar block later around lines ~1902–1906), and change the install
suggestion text to "pip install psutil>=5.9.0" so both occurrences and any
identical strings referencing >=5.0.0 are replaced with >=5.9.0.
In `@docs/DASHBOARD_IMPLEMENTATION.md`:
- Around line 10-18: The TOC contains invalid link fragments triggering MD051
because some link targets (e.g., "Installation & Usage", "Implementation
Details", "Component Reference") don't match their actual heading slugs;
regenerate or fix the fragments so each [text](`#fragment`) matches the heading
slug exactly (or adjust the heading text to produce the expected slug), ensuring
MD051 passes—update the TOC entries for Overview, Architecture, Implementation
Details, Testing Strategy, Installation & Usage, Component Reference, and
Troubleshooting to use the correct slugified anchors.
♻️ Duplicate comments (3)
cortex/config_manager.py (2)
845-850: Remove `-y` / add explicit consent for sandboxed apt installs. The sandboxed apt command still uses `-y`, which bypasses per-operation confirmation. Either remove `-y` and prompt, or avoid sudo inside the sandbox. As per coding guidelines, privileged operations require explicit confirmation.

🛠️ Minimal fix (remove `-y`)

```diff
-            f"sudo apt-get install -y {name}={version}"
+            f"sudo apt-get install {name}={version}"
             if version
-            else f"sudo apt-get install -y {name}"
+            else f"sudo apt-get install {name}"
```
975-979: Do not let `CORTEX_ALLOW_DIRECT_INSTALL` skip per-operation confirmation. When the env var is true, the prompt is skipped entirely. The guidelines require explicit user confirmation for each privileged operation; consider still prompting (or gating with a CLI flag per invocation) before sudo. As per coding guidelines, privileged operations require explicit confirmation.
cortex/dashboard.py (1)
500-549: Available-models cache never refreshes.
`get_available_models()` only returns cached data, but `_models_cache` / `_models_last_fetched` are never updated. After ~5s it will always return `[]`, so the "Available" section never populates. Refresh the cache in a background update path (e.g., in `update_models()` or a dedicated `update_available_models()`), and include `size_gb` to match the renderer.

💡 One possible fix (refresh cache in update loop)

```diff
     def update_models(self) -> None:
         """Update list of loaded models from Ollama."""
         if not self._enabled or not REQUESTS_AVAILABLE:
             return
@@
         with self.lock:
             self.models = models[:MAX_MODELS_DISPLAYED]
             self.ollama_available = True
+
+        # Refresh available-models cache at a controlled interval
+        if time.time() - self._models_last_fetched >= 5.0:
+            tags_resp = requests.get(f"{OLLAMA_API_BASE}/api/tags", timeout=OLLAMA_API_TIMEOUT)
+            if tags_resp.status_code == 200:
+                tags_data = tags_resp.json()
+                available = []
+                for m in tags_data.get("models", []):
+                    size_gb = (m.get("size", 0) / BYTES_PER_GB) if m.get("size") else 0
+                    available.append({"name": m.get("name", "unknown"), "size_gb": round(size_gb, 1)})
+                with self.lock:
+                    self._models_cache = available[:MAX_MODELS_DISPLAYED]
+                    self._models_last_fetched = time.time()
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
cortex/cli.py
cortex/config_manager.py
cortex/dashboard.py
docs/DASHBOARD_IMPLEMENTATION.md
tests/test_dashboard.py
✅ Files skipped from review due to trivial changes (1)
- tests/test_dashboard.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/config_manager.py
cortex/cli.py
cortex/dashboard.py
🧠 Learnings (5)
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Log all operations to ~/.cortex/history.db for audit purposes
Applied to files:
cortex/config_manager.py
cortex/dashboard.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Use dry-run mode by default for all installation operations
Applied to files:
cortex/config_manager.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Do not use silent sudo - require explicit user confirmation for privilege escalation
Applied to files:
cortex/config_manager.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Implement Firejail sandboxing for execution of untrusted code
Applied to files:
cortex/dashboard.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Add documentation for all new features
Applied to files:
docs/DASHBOARD_IMPLEMENTATION.md
🪛 LanguageTool
docs/DASHBOARD_IMPLEMENTATION.md
[style] ~760-~760: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...g | --- Last Updated: December 8, 2025 Status: ✅ Complete and Tested *...
(MISSING_COMMA_AFTER_YEAR)
🪛 markdownlint-cli2 (0.18.1)
docs/DASHBOARD_IMPLEMENTATION.md
14-14: Link fragments should be valid
(MD051, link-fragments)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (9)
cortex/config_manager.py (9)
8-21: Module logger setup looks good. Centralizing the logger at module scope keeps logging consistent across the file.
80-83: POSIX-only enforcement note is clear. The guard comment makes the platform-specific behavior explicit.
277-279: Debug logging for OS detection failures is helpful.
293-294: Preference-load warning avoids silent failures.
336-336: Typed config dict improves readability.
420-422: Graceful version-parse fallback is well-logged.
468-481: Version-missing handling and fallback logging look fine.
560-563: Fallback version comparison logging is good.
866-940: Audit logging helper is solid and resilient. Creating the audit table on demand and warning on failures is a good balance.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cortex/dashboard.py`:
- Around line 482-557: get_available_models never sees data because
_models_cache and _models_last_fetched are never updated; populate and read the
cache atomically. Modify update_models (and check_ollama flow if needed) to set
self._models_cache = models[:MAX_MODELS_DISPLAYED] and self._models_last_fetched
= time.time() under self.lock when a successful response is parsed (and set
_models_cache = [] / _models_last_fetched = time.time() on failure), and change
get_available_models to return the cached list when the TTL
(self._models_last_fetched) is fresh while holding the same lock so no network
calls are needed.
- Around line 652-655: The _audit_log function is not protected against
concurrent writes and can overwrite/drops entries; wrap its file write/rewrites
with a dedicated lock (e.g., add self.audit_lock = threading.Lock() alongside
state_lock/stop_event and acquire it in _audit_log) so all read-modify-write
sequences are atomic; update every other audit-related method (the other audit
code region referenced by the reviewer) to use self.audit_lock when
appending/rewriting the audit file to ensure thread-safety.
♻️ Duplicate comments (4)
cortex/dashboard.py (4)
231-242: Add explicit return type on `update_elapsed()`.
The missing `-> None` breaks the project's type-hinting standard.

♻️ Proposed fix

```diff
-    def update_elapsed(self):
+    def update_elapsed(self) -> None:
```

As per coding guidelines, public APIs should include type hints.
982-1017: Arrow keys are interpreted as cancel (ESC), breaking navigation.
Arrow presses emit escape sequences (e.g., `\x1b[A`) and get treated as cancel, which violates the keyboard-navigation requirement.

🧭 Suggested approach (parse escape sequences)

```diff
     def _check_keyboard_input(self) -> str | None:
@@
         else:
             if select.select([sys.stdin], [], [], 0)[0]:
-                key = sys.stdin.read(1)
-                return key
+                key = sys.stdin.read(1)
+                if key == "\x1b" and select.select([sys.stdin], [], [], 0)[0]:
+                    # Read the rest of a CSI sequence if present
+                    key += sys.stdin.read(2)
+                return key
```

```diff
     def _handle_key_press(self, key: str) -> None:
@@
-        elif key == "\x1b":  # Escape
+        elif key == "\x1b":  # Escape (bare)
             self._cancel_operation()
             return
+
+        # Arrow keys (CSI sequences)
+        if key in ("\x1b[A", "\x1b[B", "\x1b[C", "\x1b[D"):
+            self.last_pressed_key = {
+                "\x1b[A": "Up",
+                "\x1b[B": "Down",
+                "\x1b[C": "Right",
+                "\x1b[D": "Left",
+            }[key]
+            return
```

Also applies to: 1748-1758
1881-1884: psutil install hint doesn't match declared dependency.
The user-facing message should match the project's required psutil version (both occurrences).

🛠️ Suggested fix

```diff
-    print("Please install it with: pip install psutil>=5.0.0", file=sys.stderr)
+    print("Please install it with: pip install psutil>=5.9.0", file=sys.stderr)
```

Run this to confirm the required version in dependency files:

```shell
#!/bin/bash
rg -n "psutil" pyproject.toml requirements.txt
```

Also applies to: 1910-1913
1094-1110: Log bench/doctor outcomes, not just start/cancel.
Audit trails are incomplete without a success/failure entry on completion.

🧾 Suggested fix (log outcome on completion)

```diff
         with self.state_lock:
             self.doctor_results = bench_results
@@
             self.installation_progress.current_library = ""
             self.bench_running = False
+
+        # Log outcome
+        outcome = "cancelled" if self.stop_event.is_set() else ("succeeded" if all_passed else "failed")
+        self._audit_log("bench", "System Benchmark", outcome)
```

```diff
         with self.state_lock:
             # Only mark completed if not cancelled/failed
             if self.installation_progress.state != InstallationState.FAILED:
@@
             self.installation_progress.current_library = ""
             self.doctor_running = False
+
+        # Log outcome
+        outcome = "cancelled" if self.stop_event.is_set() else ("succeeded" if all_passed else "failed")
+        self._audit_log("doctor", "System Doctor", outcome)
```

As per coding guidelines, log all operations to `~/.cortex/history.db`.
Also applies to: 1236-1252
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/dashboard.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
cortex/dashboard.py
🧠 Learnings (2)
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Log all operations to ~/.cortex/history.db for audit purposes
Applied to files:
cortex/dashboard.py
📚 Learning: 2026-01-12T20:51:13.828Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-12T20:51:13.828Z
Learning: Applies to **/*.py : Implement Firejail sandboxing for execution of untrusted code
Applied to files:
cortex/dashboard.py
🧬 Code graph analysis (1)
cortex/dashboard.py (1)
cortex/sandbox/sandbox_executor.py (1)
SandboxExecutor(99-662)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (1)
cortex/dashboard.py (1)
1566-1667: Nice: confirmed install uses SandboxExecutor + dry‑run planning
Good security posture and separation between planning and execution.
Summary
- `cortex install` command to install any packages as intended.
- `bench` and `doctor` command.

Type of Change
Checklist
Testing
Run the following command: `cortex dashboard` to test as a user, or run pytest to test everything (or just `test_dashboard.py`).
https://drive.google.com/file/d/1jTAn2CdTlGjoLdYTdKKVpSEvrVD1qwSj/view?usp=sharing
note
This is a good feature but it is slow; it can be optimized with a lighter GUI and UX, but for now it works.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Chores