feat(monitor): real-time system resource monitoring during installations #617
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Introduces comprehensive system resource monitoring capabilities to Cortex, adding a new cortex/monitor package.

Changes
Sequence Diagram

sequenceDiagram
actor User
participant CLI as cortex/cli.py
participant Monitor as ResourceMonitor
participant UI as LiveMonitorUI/MonitorUI
participant Exporter as exporter.py
participant File as Output File
User->>CLI: cortex monitor --export json --output metrics.json
CLI->>Monitor: sample() / monitor(duration)
activate Monitor
Monitor->>Monitor: collect_metrics()
Monitor->>Monitor: check_alerts()
Monitor->>Monitor: update() + history
deactivate Monitor
Monitor->>UI: get_history()
activate UI
UI->>UI: format_system_health()
UI->>User: Display live metrics
deactivate UI
Monitor->>Monitor: get_peak_usage()
UI->>UI: format_peak_usage()
UI->>User: Display peak usage & recommendations
CLI->>Exporter: export_monitoring_data(monitor, "json")
activate Exporter
Exporter->>Monitor: get_history(), get_peak_usage()
Exporter->>Exporter: export_to_json()
Exporter->>File: Write JSON with metadata
deactivate Exporter
File->>User: metrics.json
sequenceDiagram
actor User
participant CLI as cortex/cli.py
participant Installer as Installer
participant Monitor as ResourceMonitor
participant UI as LiveMonitorUI
participant Exporter as exporter.py
User->>CLI: cortex install package --monitor --export json
CLI->>Monitor: ResourceMonitor(...)
CLI->>UI: LiveMonitorUI(monitor)
UI->>UI: start() spawns thread
CLI->>Installer: install(progress_callback)
activate Installer
loop Installation Progress
Installer->>Monitor: sample()
Monitor->>Monitor: collect_metrics()
Installer->>UI: _render() updates display
Installer->>Installer: progress_callback
UI->>User: Live CPU/Memory/Disk
end
deactivate Installer
UI->>UI: stop()
Monitor->>Monitor: get_peak_usage()
Monitor->>Monitor: get_recommendations()
UI->>User: Display peak usage & tips
CLI->>Exporter: export_monitoring_data(monitor, "json")
Exporter->>Monitor: get_history(), get_peak_usage()
Exporter->>Exporter: Build JSON payload
User->>User: View exported metrics
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:
How to Sign
Verified Signers

This check runs automatically. Maintainers can update it.
Summary of Changes

Hello @ThankaBharathi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Cortex's diagnostic capabilities by adding a real-time system resource monitoring feature. Users can now gain immediate insight into their system's CPU, memory, disk, and network performance, particularly during resource-intensive operations such as software installations. This not only helps identify potential performance bottlenecks but also provides actionable recommendations and the ability to export detailed usage data for in-depth analysis, ultimately improving the user experience and system stability.

Highlights
Code Review
This pull request introduces a significant and well-implemented feature for real-time system resource monitoring. The code is well-structured, particularly with the separation of the monitor, UI, and exporter components. The addition of comprehensive unit tests is also a great practice. My feedback focuses on a few areas to further improve code quality: fixing a bug in alert display, refactoring to reduce code duplication, improving exception handling, and enhancing the separation of concerns between data collection and UI rendering. Overall, this is a strong contribution to the project.
cortex/monitor/resource_monitor.py
Outdated
| "current": { | ||
| # Raw numeric values (for calculations) | ||
| "cpu_percent": latest["cpu_percent"], | ||
| "memory_used_gb": latest["memory_used_gb"], | ||
| "memory_total_gb": latest["memory_total_gb"], | ||
| "memory_percent": latest["memory_percent"], | ||
| "disk_used_gb": latest["disk_used_gb"], | ||
| "disk_total_gb": latest["disk_total_gb"], | ||
| "disk_percent": latest["disk_percent"], | ||
| "network_down_mb": latest["network_down_mb"], | ||
| "network_up_mb": latest["network_up_mb"], | ||
| "disk_read_mb": latest["disk_read_mb"], | ||
| "disk_write_mb": latest["disk_write_mb"], | ||
| # Formatted strings (for display) | ||
| "cpu": f"{latest['cpu_percent']:.0f}%", | ||
| "memory": f"{latest['memory_used_gb']:.1f}/{latest['memory_total_gb']:.1f} GB ({latest['memory_percent']:.0f}%)", | ||
| "disk": f"{latest['disk_used_gb']:.0f}/{latest['disk_total_gb']:.0f} GB ({latest['disk_percent']:.0f}%)", | ||
| "network": f"{latest['network_down_mb']:.1f} MB/s ↓ {latest['network_up_mb']:.1f} MB/s ↑", | ||
| }, |
The current dictionary in the summary does not include the alerts from the latest sample. This causes a bug in cortex/cli.py where the monitor command fails to display alerts when a duration is provided. Please include the alerts in the current dictionary to ensure consumers of this method have all the necessary data.
| "current": { | |
| # Raw numeric values (for calculations) | |
| "cpu_percent": latest["cpu_percent"], | |
| "memory_used_gb": latest["memory_used_gb"], | |
| "memory_total_gb": latest["memory_total_gb"], | |
| "memory_percent": latest["memory_percent"], | |
| "disk_used_gb": latest["disk_used_gb"], | |
| "disk_total_gb": latest["disk_total_gb"], | |
| "disk_percent": latest["disk_percent"], | |
| "network_down_mb": latest["network_down_mb"], | |
| "network_up_mb": latest["network_up_mb"], | |
| "disk_read_mb": latest["disk_read_mb"], | |
| "disk_write_mb": latest["disk_write_mb"], | |
| # Formatted strings (for display) | |
| "cpu": f"{latest['cpu_percent']:.0f}%", | |
| "memory": f"{latest['memory_used_gb']:.1f}/{latest['memory_total_gb']:.1f} GB ({latest['memory_percent']:.0f}%)", | |
| "disk": f"{latest['disk_used_gb']:.0f}/{latest['disk_total_gb']:.0f} GB ({latest['disk_percent']:.0f}%)", | |
| "network": f"{latest['network_down_mb']:.1f} MB/s ↓ {latest['network_up_mb']:.1f} MB/s ↑", | |
| }, | |
| "current": { | |
| # Raw numeric values (for calculations) | |
| "cpu_percent": latest["cpu_percent"], | |
| "memory_used_gb": latest["memory_used_gb"], | |
| "memory_total_gb": latest["memory_total_gb"], | |
| "memory_percent": latest["memory_percent"], | |
| "disk_used_gb": latest["disk_used_gb"], | |
| "disk_total_gb": latest["disk_total_gb"], | |
| "disk_percent": latest["disk_percent"], | |
| "network_down_mb": latest["network_down_mb"], | |
| "network_up_mb": latest["network_up_mb"], | |
| "disk_read_mb": latest["disk_read_mb"], | |
| "disk_write_mb": latest["disk_write_mb"], | |
| "alerts": latest.get("alerts", []), | |
| # Formatted strings (for display) | |
| "cpu": f"{latest['cpu_percent']:.0f}%", | |
| "memory": f"{latest['memory_used_gb']:.1f}/{latest['memory_total_gb']:.1f} GB ({latest['memory_percent']:.0f}%)", | |
| "disk": f"{latest['disk_used_gb']:.0f}/{latest['disk_total_gb']:.0f} GB ({latest['disk_percent']:.0f}%)", | |
| "network": f"{latest['network_down_mb']:.1f} MB/s ↓ {latest['network_up_mb']:.1f} MB/s ↑", | |
| }, |
cortex/cli.py
Outdated
if getattr(args, "export", None):
    success = self._export_monitor_data(
        monitor=resource_monitor,
        export=args.export,
        output=args.output,
    )
    if success:
        cx_print("✓ Monitoring data exported", "success")
    else:
        self._print_error("Failed to export monitoring data")
        return 1
Following the suggested refactoring of _export_monitor_data, this call site should be updated to handle the new return value (filename or None). This will allow you to provide a more specific success message to the user, including the name of the file that was created.
Suggested change:

if getattr(args, "export", None):
    filename = self._export_monitor_data(
        monitor=resource_monitor,
        export=args.export,
        output=args.output,
    )
    if filename:
        cx_print(f"✓ Monitoring data exported to {filename}", "success")
    else:
        self._print_error("Failed to export monitoring data")
        return 1
def _export_monitor_data(
    self,
    monitor: ResourceMonitor,
    export: str,
    output: str | None,
    software: str | None = None,
) -> bool:
    """Export monitoring data safely."""
    from cortex.monitor import export_monitoring_data

    if output:
        filename = f"{output}.{export}"
    else:
        safe_name = "".join(c if c.isalnum() else "_" for c in (software or "monitor"))
        filename = f"{safe_name}_monitoring.{export}"

    return export_monitoring_data(monitor, export, filename)
To improve reusability and eliminate code duplication (as seen in the install method), this helper function can be refactored to return the generated filename on successful export and None on failure. This allows the caller to know the exact filename used for the export and provide a more informative success message.
Suggested change:

def _export_monitor_data(
    self,
    monitor: ResourceMonitor,
    export: str,
    output: str | None,
    software: str | None = None,
) -> str | None:
    """Export monitoring data safely."""
    from cortex.monitor import export_monitoring_data

    if output:
        filename = f"{output}.{export}"
    else:
        safe_name = "".join(c if c.isalnum() else "_" for c in (software or "monitor"))
        filename = f"{safe_name}_monitoring.{export}"

    if export_monitoring_data(monitor, export, filename):
        return filename
    return None
# Export if requested
if export:
    from cortex.monitor import export_monitoring_data

    if output:
        filename = f"{output}.{export}"
    else:
        # Sanitize software name for filename
        safe_name = "".join(c if c.isalnum() else "_" for c in software)
        filename = f"{safe_name}_monitoring.{export}"
    if export_monitoring_data(resource_monitor, export, filename):
        cx_print(f"✓ Monitoring data exported to {filename}", "success")
    else:
        self._print_error("Failed to export monitoring data")
This block of code for exporting monitoring data is very similar to the logic in the _export_monitor_data helper and the monitor command handler. This duplication can be avoided by using the refactored _export_monitor_data helper here.
# Export if requested
if export:
    filename = self._export_monitor_data(
        monitor=resource_monitor,
        export=export,
        output=output,
        software=software,
    )
    if filename:
        cx_print(f"✓ Monitoring data exported to {filename}", "success")
    else:
        self._print_error("Failed to export monitoring data")

except Exception as exc:
    logger.error("Export failed: %s", exc)
    return False
Catching a broad Exception can hide unexpected errors and make debugging more difficult. It's better to catch more specific exceptions that you expect to handle, such as OSError for file issues or ValueError for data problems. This allows other unexpected exceptions to propagate and be addressed. This advice also applies to the simplified export_json and export_csv functions below, which also use a broad except Exception.
Suggested change:

except (OSError, ValueError, AttributeError) as exc:
    logger.error("Export failed: %s", exc)
    return False
cortex/monitor/live_monitor_ui.py
Outdated
    Disk: 120/500 GB (24%)
    Network: 2.5 MB/s ↓ 0.8 MB/s ↑
    """
    cpu_cores = psutil.cpu_count(logical=True)
The format_system_health method, which is part of the UI layer, is making a system call to psutil.cpu_count(). To maintain a good separation of concerns, all data collection should happen in the ResourceMonitor, and the UI components should only be responsible for displaying the data they receive.
I recommend the following changes:
- In cortex/monitor/resource_monitor.py, inside the collect_metrics method, add 'cpu_cores': psutil.cpu_count(logical=True) to the returned dictionary.
- Here, change this line to get the core count from the metrics dictionary.
Suggested change:

cpu_cores = metrics.get("cpu_cores", "?")
cortex/monitor/resource_monitor.py
Outdated
def get_formatted_summary(self) -> dict[str, Any]:
    """
    Get a formatted summary for display purposes.
    This should be moved to UI layer eventually.
    """
    summary = self.get_summary()
    if not summary:
        return {}

    return {
        "current": {
            "cpu": summary["current"]["cpu"],
            "memory": summary["current"]["memory"],
            "disk": summary["current"]["disk"],
            "network": summary["current"]["network"],
        },
        "peak": summary["peak"],
        "samples": summary["samples"],
        "thresholds": summary["thresholds"],
    }
The method get_formatted_summary is noted as something that should be moved to the UI layer, and its functionality overlaps with the formatted strings already being generated inside get_summary. This mixing of data collection and formatting within the ResourceMonitor class reduces the separation of concerns.
I recommend the following for better maintainability:
- Remove the get_formatted_summary method.
- Refactor get_summary to return only raw data, and handle all string formatting in the UI layer (e.g., the MonitorUI class), as in the sketch below.
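A minimal sketch of what that UI-side formatting could look like, assuming get_summary() returns the raw keys shown earlier (the format_current helper name is hypothetical):

class MonitorUI:
    @staticmethod
    def format_current(current: dict) -> dict[str, str]:
        """Turn raw summary numbers into display strings (UI layer only)."""
        return {
            "cpu": f"{current['cpu_percent']:.0f}%",
            "memory": (
                f"{current['memory_used_gb']:.1f}/"
                f"{current['memory_total_gb']:.1f} GB "
                f"({current['memory_percent']:.0f}%)"
            ),
            "disk": (
                f"{current['disk_used_gb']:.0f}/"
                f"{current['disk_total_gb']:.0f} GB "
                f"({current['disk_percent']:.0f}%)"
            ),
            "network": (
                f"{current['network_down_mb']:.1f} MB/s ↓ "
                f"{current['network_up_mb']:.1f} MB/s ↑"
            ),
        }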
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pyproject.toml (1)
103-103: Add cortex.monitor to the packages list in tool.setuptools.packages.

The cortex/monitor/ directory with __init__.py exists but is missing from the packages configuration at line 103. Add it to the list to ensure it's included in distributions:

packages = ["cortex", "cortex.sandbox", "cortex.utils", "cortex.llm", "cortex.kernel_features", "cortex.kernel_features.ebpf", "cortex.monitor"]
🤖 Fix all issues with AI agents
In `@cortex/cli.py`:
- Around line 64-116: The assignment to the unused variable latest in the
monitor method should be removed; inside cortex.cli.monitor() you currently do
latest = resource_monitor.history[-1] but never use latest — delete that
assignment so the alerts logic continues to use metrics =
resource_monitor.sample() / summary["current"] and alerts =
metrics.get("alerts", []) as-is, leaving ResourceMonitor, monitor, get_summary,
history, and alerts handling unchanged.
- Around line 900-916: Update the function signature to use proper
union/optional type hints for parameters (e.g., change export: str = None and
output: str = None to export: str | None = None and output: str | None = None or
use Optional[str]) so the hints are valid; then replace the incorrect Rich call
console.print("📊 Monitoring system resources during installation...", "info")
with either cx_print("📊 Monitoring system resources during installation...",
"info") or console.print("📊 Monitoring system resources during
installation...", style="info") to pass style correctly (references: parameters
monitor/export/output in the function signature and the console.print / cx_print
and ResourceMonitor usage).
In `@cortex/monitor/live_monitor_ui.py`:
- Around line 132-136: The _render() method currently reads
self.monitor.history[-1] directly which can race with ResourceMonitor.sample()
mutating history; replace the direct access with a thread-safe API such as
calling self.monitor.get_history(limit=1) and using the returned copy (or
acquire the monitor's lock if available) to obtain the latest sample, and retain
the early-return when empty; update references in _render() from
self.monitor.history to the safe copy so no direct indexing into the shared list
occurs.
In `@tests/monitor/test_resource_monitor.py`:
- Around line 315-316: Remove the development comment line "# ADD THESE NEW
TESTS TO COVER MISSING METHODS:" from the test file; simply delete that
standalone comment (and any leftover blank line/whitespace) so the test file
contains only actual tests and no dev notes before merging.
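For the live_monitor_ui prompt above, a minimal sketch of the lock-guarded accessor it describes. The _lock attribute and the limit parameter are assumptions, and collect_metrics() is stubbed out:

import threading
from typing import Any


class ResourceMonitor:
    def __init__(self) -> None:
        self._lock = threading.Lock()  # guards every access to self.history
        self.history: list[dict[str, Any]] = []

    def sample(self) -> dict[str, Any]:
        metrics: dict[str, Any] = {}  # stand-in for the real collect_metrics()
        with self._lock:
            self.history.append(metrics)  # mutation happens under the lock
        return metrics

    def get_history(self, limit: int | None = None) -> list[dict[str, Any]]:
        """Return a copy so UI threads never index into the live list."""
        with self._lock:
            return list(self.history[-limit:] if limit else self.history)

_render() would then use monitor.get_history(limit=1) and keep its early return when the copy is empty.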
🧹 Nitpick comments (13)
cortex/monitor/exporter.py (1)
141-146: Add type hint for monitor parameter.

The monitor parameter lacks a type hint. Per coding guidelines, include type hints in Python code. Consider using ResourceMonitor directly or a Protocol for duck typing.

Suggested fix

+from cortex.monitor.resource_monitor import ResourceMonitor
+
+
 def export_monitoring_data(
-    monitor,
+    monitor: ResourceMonitor,
     format_type: str,
     output_file: str,
     include_recommendations: bool = True,
 ) -> bool:

Or if you prefer duck typing to avoid circular imports:

from typing import Protocol


class MonitorProtocol(Protocol):
    def get_history(self) -> list[dict[str, Any]]: ...
    def get_peak_usage(self) -> dict[str, float]: ...
    def get_recommendations(self) -> list[str]: ...

cortex/monitor/live_monitor_ui.py (2)
18-24: Consider removing the duplicate bar() function.

The standalone bar() function duplicates MonitorUI.create_progress_bar(). Consider using only the class method to avoid redundancy.
178-189: Guard against calling start() multiple times.

If start() is called while the UI is already running, it will spawn additional threads. Consider checking whether _thread is alive before starting a new one.

Suggested guard

 def start(self) -> None:
     """Start the monitoring UI."""
+    if self._thread and self._thread.is_alive():
+        return  # Already running
     self._stop_event.clear()

tests/test_cli_extended.py (1)
300-346: Consider adding tests for explicit monitoring flags.

The tests verify default parameter values. Consider adding tests that explicitly set --monitor, --export json, etc., to ensure the CLI properly parses and passes these flags.

cortex/monitor/__init__.py (1)

1-16: Consider adding a module-level docstring.

As per coding guidelines requiring docstrings for public APIs, this package init would benefit from a brief docstring describing the monitoring module's purpose and exported components.
📝 Suggested improvement
+""" +Real-time system resource monitoring module. + +Exports: + ResourceMonitor: Collects and tracks system resource usage. + MonitorUI: Static formatting methods for metrics display. + LiveMonitorUI: Live-rendered UI for installation monitoring. + export_to_json: Export monitoring data to JSON format. + export_to_csv: Export monitoring history to CSV format. + export_monitoring_data: Convenience function for data export. +""" from .exporter import ( export_monitoring_data, export_to_csv, export_to_json, )tests/test_cli.py (1)
215-255: Tests correctly verify new install parameters, but coverage for monitoring flags could be expanded.

The updated assertions properly verify the extended install() signature with monitor, export, and output parameters. However, the existing tests only verify default values (monitor=False, export=None). Consider adding test cases for:

- cortex install docker --monitor (monitor=True)
- cortex install docker --export json --output custom_file

This would ensure the CLI argument parsing works correctly for the new monitoring features; a sketch of such a test follows.
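A sketch of one such test. The argv-style main() invocation and the CortexCLI.install patch target are assumptions about this codebase:

from unittest.mock import patch

from cortex.cli import main


def test_install_with_monitor_and_export() -> None:
    """--monitor and --export json should reach install() as keyword values."""
    with patch("cortex.cli.CortexCLI.install", return_value=0) as mock_install:
        main(["install", "docker", "--monitor", "--export", "json", "--output", "custom_file"])
        _, kwargs = mock_install.call_args
        assert kwargs.get("monitor") is True
        assert kwargs.get("export") == "json"
        assert kwargs.get("output") == "custom_file"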
tests/monitor/test_exporter.py (1)
289-301: Minor: Move import os to module level.

The os module is imported inside the test methods (lines 290 and 311). For consistency and clarity, consider importing it at the module level alongside pathlib.

📝 Suggested change at module level

 import csv
 import json
+import os
 import tempfile
 from pathlib import Path

Then remove the inline imports at lines 290 and 311.
tests/monitor/test_live_monitor_ui.py (1)
322-347: Consider adding explicit assertions for edge case tests.

The tests test_start_already_running and test_stop_no_thread only verify that no exception is raised, with comments like "No assertion needed" and "Should not crash". Consider making these tests more explicit:

📝 Suggested improvement

 def test_start_already_running(self):
     """Test starting when already running."""
     mock_monitor = MagicMock()
     mock_monitor.history = []
     ui = LiveMonitorUI(mock_monitor)
     # Create a mock thread that appears alive
     mock_thread = MagicMock()
     mock_thread.is_alive.return_value = True
     ui._thread = mock_thread
     # Should not start new thread
     ui.start()
-    # No assertion needed - just checking it doesn't crash
+    # Verify the existing thread was not replaced
+    assert ui._thread is mock_thread
+    # Verify start was not called again on the mock thread
+    mock_thread.start.assert_not_called()

 def test_stop_no_thread(self):
     """Test stop when no thread exists."""
     mock_monitor = MagicMock()
     mock_monitor.history = []
     ui = LiveMonitorUI(mock_monitor)
     ui._thread = None
-    # Should not crash
     ui.stop()
+    # Verify thread remains None after stop
+    assert ui._thread is None

cortex/cli.py (2)
11-11: Remove unused psutil import.

The psutil module is imported here but not directly used in cli.py. The ResourceMonitor class handles all psutil interactions internally. This import can be removed to keep the import list clean.

Suggested fix

-import psutil
1137-1150: Duplicate export logic - use the helper method.

This export logic duplicates the _export_monitor_data helper method defined at lines 123-139. Reuse the helper for consistency and maintainability.

Suggested fix

 # Export if requested
 if export:
-    from cortex.monitor import export_monitoring_data
-
-    if output:
-        filename = f"{output}.{export}"
-    else:
-        # Sanitize software name for filename
-        safe_name = "".join(c if c.isalnum() else "_" for c in software)
-        filename = f"{safe_name}_monitoring.{export}"
-    if export_monitoring_data(resource_monitor, export, filename):
-        cx_print(f"✓ Monitoring data exported to {filename}", "success")
-    else:
+    success = self._export_monitor_data(
+        monitor=resource_monitor,
+        export=export,
+        output=output,
+        software=software,
+    )
+    if success:
+        cx_print("✓ Monitoring data exported", "success")
+    else:
         self._print_error("Failed to export monitoring data")

cortex/monitor/resource_monitor.py (3)
162-172: Consider using collections.deque for O(1) history management.

self.history.pop(0) on line 166 is O(n) for a Python list because all subsequent elements must be shifted. With max_history_size=1000, this is likely acceptable, but using collections.deque(maxlen=max_history_size) would provide O(1) operations and automatically enforce the size limit.

Suggested approach

from collections import deque

# In __init__:
self.history: deque[dict[str, Any]] = deque(maxlen=max_history_size)

# In update(), simply append - deque handles eviction:
def update(self, metrics: dict[str, Any]) -> None:
    """Store metrics and update peak usage."""
    self.history.append(metrics)
    for key in self.peak_usage:
        if key in metrics:
            self.peak_usage[key] = max(self.peak_usage[key], metrics[key])

Note: This would require updating get_history() to convert to list and adjusting type hints.
326-369: Network threshold is hardcoded.

The 50 MB/s network threshold on line 358 is hardcoded, unlike the CPU/memory/disk thresholds, which are configurable via constructor parameters. Consider adding a network_threshold parameter for consistency (sketched below), or documenting this as an intentional design choice.
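If the configurable route is chosen, a sketch of the constructor change. The network_threshold default of 50.0 mirrors the current literal; the other defaults shown are placeholders, not the project's actual values:

class ResourceMonitor:
    def __init__(
        self,
        cpu_threshold: float = 80.0,      # placeholder default
        memory_threshold: float = 85.0,   # placeholder default
        disk_threshold: float = 90.0,     # placeholder default
        network_threshold: float = 50.0,  # MB/s; mirrors the current literal
    ) -> None:
        self.cpu_threshold = cpu_threshold
        self.memory_threshold = memory_threshold
        self.disk_threshold = disk_threshold
        self.network_threshold = network_threshold

    def network_is_hot(self, down_mb_per_s: float) -> bool:
        """get_recommendations()/check_alerts() would compare against this
        configurable value instead of a literal 50."""
        return down_mb_per_s > self.network_threshold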
67-131: First CPU reading may be 0% on the initial collect_metrics() call.

When cpu_percent(interval=None) is called for the first time, psutil returns 0.0 because it has no prior measurement to compare against. This is documented behavior. Subsequent calls will return meaningful values.

If accurate initial CPU readings are required, call psutil.cpu_percent() once during __init__ to prime the counter, then use non-blocking calls in collect_metrics(), as in the sketch below.
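A sketch of that priming approach; psutil.cpu_percent(interval=None) is the documented non-blocking call, and the rest of the class is elided:

import psutil


class ResourceMonitor:
    def __init__(self) -> None:
        # Prime psutil's internal CPU counter; this first call returns 0.0
        # and merely establishes the baseline for later comparisons.
        psutil.cpu_percent(interval=None)

    def collect_metrics(self) -> dict:
        # Subsequent non-blocking calls report usage since the previous call.
        return {"cpu_percent": psutil.cpu_percent(interval=None)}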
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
cortex/cli.py
cortex/monitor/__init__.py
cortex/monitor/exporter.py
cortex/monitor/live_monitor_ui.py
cortex/monitor/resource_monitor.py
pyproject.toml
tests/monitor/test_exporter.py
tests/monitor/test_live_monitor_ui.py
tests/monitor/test_resource_monitor.py
tests/test_cli.py
tests/test_cli_extended.py
tests/test_licensing.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
tests/monitor/test_exporter.py, cortex/monitor/__init__.py, tests/test_cli.py, cortex/monitor/exporter.py, tests/test_licensing.py, cortex/monitor/live_monitor_ui.py, tests/monitor/test_live_monitor_ui.py, tests/test_cli_extended.py, cortex/monitor/resource_monitor.py, cortex/cli.py, tests/monitor/test_resource_monitor.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/monitor/test_exporter.py, tests/test_cli.py, tests/test_licensing.py, tests/monitor/test_live_monitor_ui.py, tests/test_cli_extended.py, tests/monitor/test_resource_monitor.py
🧬 Code graph analysis (8)
tests/monitor/test_exporter.py (2)
cortex/monitor/resource_monitor.py (5)
monitor (182-208), ResourceMonitor (22-403), get_history (279-291), get_peak_usage (275-277), get_recommendations (326-369)
cortex/monitor/exporter.py (5)
export_monitoring_data (141-181), export_to_csv (74-138), export_to_json (18-71), export_json (185-207), export_csv (210-228)
cortex/monitor/__init__.py (3)
cortex/monitor/exporter.py (3)
export_monitoring_data (141-181), export_to_csv (74-138), export_to_json (18-71)
cortex/monitor/live_monitor_ui.py (2)
LiveMonitorUI (120-195), MonitorUI (26-117)
cortex/monitor/resource_monitor.py (1)
ResourceMonitor(22-403)
tests/test_cli.py (1)
cortex/cli.py (2)
monitor (64-116), main (3038-3879)
cortex/monitor/exporter.py (2)
cortex/monitor/resource_monitor.py (6)
sample (174-180), update (162-172), monitor (182-208), get_history (279-291), get_peak_usage (275-277), get_recommendations (326-369)
tests/monitor/test_resource_monitor.py (1)
monitor(11-12)
cortex/monitor/live_monitor_ui.py (2)
cortex/cli.py (3)
monitor (64-116), history (1610-1678), update (1718-1960)
cortex/monitor/resource_monitor.py (3)
monitor (182-208), ResourceMonitor (22-403), update (162-172)
tests/test_cli_extended.py (2)
cortex/cli.py (1)
monitor (64-116)
cortex/monitor/resource_monitor.py (1)
monitor(182-208)
cortex/monitor/resource_monitor.py (2)
cortex/cli.py (2)
history (1610-1678), monitor (64-116)
tests/monitor/test_resource_monitor.py (1)
monitor(11-12)
cortex/cli.py (3)
cortex/monitor/resource_monitor.py (5)
monitor (182-208), ResourceMonitor (22-403), get_summary (210-252), sample (174-180), get_recommendations (326-369)
cortex/monitor/live_monitor_ui.py (4)
MonitorUI (26-117), format_system_health (56-77), format_installation_metrics (80-97), format_peak_usage (100-108)
cortex/monitor/exporter.py (1)
export_monitoring_data(141-181)
🔇 Additional comments (34)
pyproject.toml (1)
60-61: Dependency addition looks good.

The psutil>=5.9.0 dependency is appropriate for the system resource monitoring feature.

tests/test_licensing.py (1)
1-431: Formatting and import changes look fine.

The changes are purely cosmetic (import reordering, whitespace adjustments) with no functional impact on the test behavior.
cortex/monitor/exporter.py (4)
1-16: Module setup looks good. Imports and logger configuration are appropriate.
74-138: CSV export implementation is solid.

Good handling of the following (the fieldname and list-serialization pattern is sketched after this list):
- Empty history (creates header-only file)
- Dynamic fieldnames from all samples
- List serialization to semicolon-separated strings
- Proper error logging and re-raising
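A standalone sketch of that pattern, assuming a minimal helper (write_history_csv is a hypothetical name; the semicolon convention follows the description above):

import csv
from typing import Any


def write_history_csv(history: list[dict[str, Any]], output_file: str) -> None:
    """Union all keys across samples; flatten list values to 'a;b;c'."""
    fieldnames: set[str] = set()
    for sample in history:
        fieldnames.update(sample.keys())

    with open(output_file, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(fieldnames))
        writer.writeheader()
        for sample in history:
            row = {
                key: ";".join(str(v) for v in value) if isinstance(value, list) else value
                for key, value in sample.items()
            }
            writer.writerow(row)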
184-228: Simplified wrappers are appropriate.

The boolean-returning wrappers provide a clean API. Since the underlying functions (export_to_json, export_to_csv) already log errors before re-raising, the exception suppression here is acceptable.

39-41: No action needed—code handles directory creation correctly.

The code properly handles all filename cases. When output_file is a bare filename like "data.json", os.path.dirname(os.path.abspath("data.json")) returns the current working directory (not an empty string), and os.makedirs() with exist_ok=True safely succeeds as a no-op on existing directories. The same pattern in both export_to_json and export_to_csv works correctly without additional guards or documentation.

Likely an incorrect or invalid review comment.
cortex/monitor/live_monitor_ui.py (2)
191-195: Stop method implementation is appropriate.

The 1-second timeout on join() prevents hanging, and the if self._thread guard handles the case where stop() is called before start().
99-108: No action required. The memory_used_gb key exists in peak_usage (initialized at line 54 of resource_monitor.py) and is correctly accessed by format_peak_usage. The method will display the actual peak memory usage, not always 0.0 GB.

Likely an incorrect or invalid review comment.
tests/test_cli_extended.py (1)
306-346: Test updates correctly verify new monitoring parameters.

The tests properly validate that the new monitor, export, and output parameters are passed through to the install method with expected defaults.

tests/monitor/test_exporter.py (4)
1-11: LGTM! Clean import structure with appropriate test dependencies. The exporter functions are correctly imported for testing.

16-106: LGTM! Thorough test coverage for basic JSON and CSV export functionality, including the empty history edge case.
351-365: Good edge case coverage for invalid paths.

The test verifies behavior with None and empty-string paths, ensuring the function returns False gracefully. This is important for robustness.
385-439: LGTM! The simplified API tests (export_json, export_csv) properly verify both success and failure scenarios, using monkeypatching to simulate exceptions.

tests/monitor/test_live_monitor_ui.py (5)
1-14: LGTM! Appropriate imports for testing the live monitor UI components, including threading primitives and Rich library components.

16-45: LGTM! Excellent test coverage for the bar() function, including edge cases for negative values, values over 100%, and precise percentage rounding.

60-86: LGTM! The format_system_health test properly mocks psutil.cpu_count and validates the expected output format, including CPU cores, RAM, disk, and network metrics.

350-381: LGTM! The thread safety test properly verifies that multiple start/stop cycles work correctly by mocking the Thread class and verifying call counts.

407-472: LGTM! Good integration tests that exercise the UI with a real ResourceMonitor instance, verifying the components work together correctly.

tests/monitor/test_resource_monitor.py (7)
1-12: LGTM! Clean imports and a well-structured pytest fixture for the ResourceMonitor instance.

15-66: LGTM! Thorough testing of initial state and basic metrics collection with properly mocked psutil functions. The first sample correctly verifies zero rates.

68-113: LGTM! Excellent test for rate calculations. The test correctly sets up previous values and verifies that a 1 MB byte difference over a 1-second interval produces 1 MB/s rates.

115-157: LGTM! Comprehensive test for update behavior, verifying both peak tracking (highest values retained) and history storage (order preserved).

286-296: LGTM! The time mock sequence correctly simulates the monitoring loop with duration=2.0. The test verifies that sampling stops after the duration is exceeded.

347-378: LGTM! Good test coverage for get_recent_alerts and get_recommendations. The alerts test properly verifies filtering of samples with non-empty alert lists, and the recommendations test verifies that high resource usage triggers appropriate recommendations.

396-422: LGTM! The check_alerts test properly verifies threshold-based alert generation for CPU, memory, and disk metrics, covering both below-threshold and above-threshold scenarios.

cortex/cli.py (3)
904-910: Verify: Auto-enabling execute overrides the dry-run default.

When --monitor is passed without --execute, the code auto-enables execution (line 910). Per coding guidelines, "Use dry-run mode by default for all installation operations." This implicit override of the safety default should be clearly communicated to users.

The current warning message is good, but consider whether this behavior aligns with the project's safety-first philosophy. Users might expect --monitor --dry-run to preview monitoring without execution.
3114-3136: LGTM - Monitor command parser is well-structured.

The monitor subcommand parser is cleanly defined with appropriate options for --export, --output, and --duration. The help text is descriptive; a sketch of what such a subparser definition typically looks like follows.
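For reference, a hedged sketch of that kind of subparser definition. The flag names match the review; the help strings, choices, and defaults are assumptions rather than the project's exact code:

import argparse

parser = argparse.ArgumentParser(prog="cortex")
subparsers = parser.add_subparsers(dest="command")

monitor_parser = subparsers.add_parser("monitor", help="Show live system resource usage")
monitor_parser.add_argument("--export", choices=["json", "csv"], help="Export format")
monitor_parser.add_argument("--output", help="Output file name (without extension)")
monitor_parser.add_argument("--duration", type=float, help="Seconds to sample before exiting")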
3240-3254: LGTM - Install parser monitoring options.

The new --monitor, --export, and --output arguments are correctly added to the install subparser, with clear help text indicating the dependency between --export and --monitor.
3754-3755: LGTM - Command routing correctly passes monitoring parameters.

The monitor command is correctly routed to cli.monitor(args), and the install invocation properly passes the new monitor, export, and output arguments.

Also applies to: 3786-3788
cortex/monitor/resource_monitor.py (5)
1-19: LGTM - Module setup and constants.

Clean module structure with an appropriate docstring, imports, and sensible default threshold constants. The logger setup follows standard practices.

22-66: LGTM - Well-documented constructor.

The __init__ method has comprehensive docstrings, proper type hints, and sensible defaults. Using Any for the private psutil counter types is a reasonable approach to avoid coupling to psutil's internal types.

174-208: LGTM - Sampling and monitoring loop.

The sample() method cleanly composes metrics collection, alert checking, and history updates. The monitor() loop handles keyboard interrupts and exceptions appropriately, with proper logging.

210-252: LGTM - Well-structured summary with raw and formatted data.

The get_summary() method provides both raw numeric values for calculations and pre-formatted strings for display. This separation of concerns is good design, making it easy for consumers to use whichever representation they need.

371-403: LGTM - Comprehensive statistics method.

The get_stats() method provides useful aggregate statistics. The safe_average helper handles empty lists gracefully, preventing division-by-zero errors; a plausible shape for it is sketched below.
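A plausible shape for that helper (the actual implementation may differ):

def safe_average(values: list[float]) -> float:
    """Average a list of readings, returning 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0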
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Anshgrover23
left a comment
@ThankaBharathi The CLA check is failing; kindly complete the verification.
Also, please address the coderabbitai comments.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@cortex/cli.py`:
- Around line 157-159: _get_latest_metrics currently always returns
monitor.history[-1] once history exists, producing stale readings; change it to
return monitor.history[-1] only if the last sample is still fresh, otherwise
call monitor.sample() to get a new reading. Implement this by checking the
timestamp/age of the last entry (e.g., monitor.history[-1].timestamp or
monitor.last_sample_time) against the configured interval (e.g.,
monitor.sample_interval or monitor.interval) and call monitor.sample() when the
elapsed time >= interval; otherwise return monitor.history[-1]. Update the same
logic wherever _get_latest_metrics behavior is duplicated (lines referenced
1039-1046).
- Around line 945-951: The code currently auto-enables execution when monitor is
set (see variables monitor, execute, dry_run and the resource_monitor
initialization); remove the auto-enable and require explicit --execute instead:
replace the block that sets execute = True with logic that detects monitor and
not execute, prints a clear error/notice about requiring --execute to run
monitoring (preserving dry_run behavior), and then exit non‑zero (e.g.,
SystemExit or exit(1)); leave resource_monitor as None unless execution is
explicitly requested.
- Around line 64-83: Add an audit entry when the monitor command runs by
recording to the history DB (~/.cortex/history.db) inside the monitor method:
after parsing args and before or immediately after instantiating ResourceMonitor
(or right after console.print("System Health:")), call the existing history API
to create a record containing operation="monitor", the provided args/duration, a
timestamp, and the result code (use the export_result or final return value).
Update monitor to call the history logging function (use the project’s history
API/class you already have) and ensure failures to write history are handled
non-fatally (log a warning) so _collect_monitoring_metrics, _display_alerts,
_handle_monitor_export, and _display_recommendations keep running.
In `@cortex/monitor/exporter.py`:
- Around line 22-191: Add audit log entries for every export attempt in
export_to_json and export_to_csv using the project's audit logging mechanism:
call the audit API (via the project's audit logger helper) before returning or
re-raising on error to record format ("json" or "csv"), output path
(output_file), timestamp, and result ("success" or "failure") plus an error
message when exceptions occur; instrument the try/except blocks in
export_to_json and export_to_csv (use the existing logger and exception
handlers) to create and persist the audit record on both success (after
logger.info) and on error (inside each except block) without changing existing
exception flows.
- Around line 193-211: The docstring for export_monitoring_data contradicts the
implementation: the function currently logs errors and returns False instead of
raising AttributeError or ValueError; update the docstring for
export_monitoring_data to remove the "Raises" section and explicitly state that
on invalid inputs (missing monitor methods like get_history/get_peak_usage,
unsupported format_type, or invalid output_file) the function logs an error and
returns False, and that format_type is case-insensitive ('json' or 'csv') and
include_recommendations applies to JSON only; alternatively, if you prefer
raising, change the implementation in export_monitoring_data to re-raise
AttributeError/ValueError where validation fails (referencing
monitor.get_history()/get_peak_usage and format_type/output_file checks) so the
docstring remains accurate.
In `@cortex/monitor/resource_monitor.py`:
- Around line 270-299: The monitor method currently treats 0 or negative
duration as falsy and will run indefinitely; add an explicit guard at the start
of monitor to validate duration: if duration is not None and duration <= 0,
raise a ValueError (or return early) with a clear message; place this check
before computing start_time so the loop (and calls to
self.sample()/time.sleep(self.interval)) are never entered when duration is
non‑positive. Reference symbols: monitor, duration, start_time, self.sample,
self.interval.
- Around line 49-91: The __init__ currently only logs when cpu_threshold,
memory_threshold, or disk_threshold are outside 0–100 despite the docstring
saying they are clamped; modify __init__ to actually clamp each of
cpu_threshold, memory_threshold, and disk_threshold to the 0–100 range (e.g.,
value = max(0, min(100, value))) after validation and update the logger call to
note that the value was clamped instead of only warning, then assign the clamped
values to self.cpu_threshold, self.memory_threshold, and self.disk_threshold so
the rest of ResourceMonitor uses safe percentage values.
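Taken together, the two resource_monitor prompts above (the non-positive duration guard and the threshold clamping) could look like this minimal sketch; everything beyond the named symbols is illustrative:

import logging
import time

logger = logging.getLogger(__name__)


class ResourceMonitor:
    def __init__(self, cpu_threshold: float = 80.0, interval: float = 1.0) -> None:
        # Clamp out-of-range thresholds instead of only warning about them.
        if not 0 <= cpu_threshold <= 100:
            clamped = max(0.0, min(100.0, cpu_threshold))
            logger.warning(
                "cpu_threshold %s out of range; clamped to %s", cpu_threshold, clamped
            )
            cpu_threshold = clamped
        self.cpu_threshold = cpu_threshold
        self.interval = interval

    def sample(self) -> None:
        """Stand-in for the real metrics collection."""

    def monitor(self, duration: float | None = None) -> None:
        # Guard: zero or negative durations would otherwise loop forever.
        if duration is not None and duration <= 0:
            raise ValueError(f"duration must be positive, got {duration}")
        start_time = time.time()
        while duration is None or time.time() - start_time < duration:
            self.sample()
            time.sleep(self.interval)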
🧹 Nitpick comments (5)
tests/monitor/test_live_monitor_ui.py (1)
16-154: Add type hints for test functions and fixtures.

The new test functions are untyped. If the project guideline ("include type hints in Python code") applies to tests, please add return types (e.g., -> None) and annotate fixtures. As per coding guidelines, ...

tests/monitor/test_exporter.py (1)
20-845: Add type hints for test helpers.

This test suite is untyped. If the type-hint guideline applies to tests, please add return types (e.g., -> None) and annotate fixtures/helpers for consistency. As per coding guidelines, ...

tests/monitor/test_resource_monitor.py (1)
14-328: Add type hints to match project rules.

If type hints are required for all Python modules, please annotate this fixture and the test functions (e.g., def test_x(...) -> None), as in the sketch below. As per coding guidelines, ...
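A sketch of the annotated style being requested; the fixture mirrors the suite's monitor fixture, while the test name and assertion are illustrative:

import pytest

from cortex.monitor.resource_monitor import ResourceMonitor


@pytest.fixture
def monitor() -> ResourceMonitor:
    return ResourceMonitor(interval=0.1)


def test_initial_history_is_empty(monitor: ResourceMonitor) -> None:
    assert monitor.get_history() == []

cortex/monitor/live_monitor_ui.py (1)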
276-288: Use logger instead of print inside the UI loop.

print() can interfere with Rich Live rendering and makes logs harder to manage. Prefer structured logging.

🔧 Suggested change

+import logging
@@
+logger = logging.getLogger(__name__)
@@
-        except Exception as exc:
-            # Log but continue rendering
-            print(f"UI rendering error: {exc}")
+        except Exception as exc:
+            # Log but continue rendering
+            logger.warning("UI rendering error: %s", exc)

cortex/cli.py (1)
84-94: Remove the duplicate _display_recommendations definition.

The second definition overrides the first, leaving dead code and risking divergence. Keep one copy.

♻️ Suggested cleanup (remove the duplicate)

-    def _display_recommendations(self, resource_monitor: ResourceMonitor) -> None:
-        """Display performance recommendations."""
-        if not resource_monitor.history or len(resource_monitor.history) <= 1:
-            return
-
-        recommendations = resource_monitor.get_recommendations()
-        if recommendations:
-            console.print("\n[bold cyan]⚡ Performance Recommendations:[/bold cyan]")
-            for rec in recommendations:
-                console.print(f"  • {rec}")

Also applies to: 145-155
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
cortex/cli.py
cortex/monitor/exporter.py
cortex/monitor/live_monitor_ui.py
cortex/monitor/resource_monitor.py
tests/monitor/test_exporter.py
tests/monitor/test_live_monitor_ui.py
tests/monitor/test_resource_monitor.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide for Python code
Include type hints in Python code
Add docstrings for all public APIs in Python code
Use dry-run mode by default for all installation operations
Do not use silent sudo - require explicit user confirmation for privilege escalation
Implement Firejail sandboxing for execution of untrusted code
Log all operations to ~/.cortex/history.db for audit purposes
Files:
tests/monitor/test_exporter.py, cortex/monitor/exporter.py, cortex/monitor/live_monitor_ui.py, tests/monitor/test_live_monitor_ui.py, tests/monitor/test_resource_monitor.py, cortex/monitor/resource_monitor.py, cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain test coverage above 80% for pull requests
Files:
tests/monitor/test_exporter.py, tests/monitor/test_live_monitor_ui.py, tests/monitor/test_resource_monitor.py
🧬 Code graph analysis (6)
tests/monitor/test_exporter.py (3)
cortex/monitor/resource_monitor.py (5)
monitor (270-298), ResourceMonitor (25-446), get_history (335-347), get_peak_usage (396-402), get_recommendations (419-446)
cortex/monitor/exporter.py (3)
export_monitoring_data (193-259), export_to_csv (104-190), export_to_json (22-101)
cortex/logging_system.py (1)
warning(204-206)
cortex/monitor/exporter.py (1)
cortex/monitor/resource_monitor.py (6)
sample (250-266), update (232-248), monitor (270-298), get_history (335-347), get_peak_usage (396-402), get_recommendations (419-446)
cortex/monitor/live_monitor_ui.py (1)
cortex/monitor/resource_monitor.py (4)
monitor (270-298), ResourceMonitor (25-446), get_history (335-347), update (232-248)
tests/monitor/test_resource_monitor.py (1)
cortex/monitor/resource_monitor.py (12)
monitor (270-298), ResourceMonitor (25-446), collect_metrics (112-203), update (232-248), sample (250-266), get_summary (302-333), get_peak_usage (396-402), get_history (335-347), clear_history (404-415), get_recent_alerts (349-361), get_recommendations (419-446), check_alerts (207-230)
cortex/monitor/resource_monitor.py (2)
cortex/cli.py (1)
monitor(64-82)tests/monitor/test_resource_monitor.py (1)
monitor(15-16)
cortex/cli.py (3)
cortex/monitor/resource_monitor.py (5)
monitor (270-298), ResourceMonitor (25-446), get_recommendations (419-446), get_summary (302-333), sample (250-266)
cortex/monitor/live_monitor_ui.py (4)
MonitorUI (38-157), format_system_health (77-107), format_installation_metrics (110-131), format_peak_usage (134-146)
cortex/monitor/exporter.py (1)
MonitorUI(38-157)format_system_health(77-107)format_installation_metrics(110-131)format_peak_usage(134-146)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Test (Python 3.12)
- GitHub Check: Test (Python 3.10)
- GitHub Check: Test (Python 3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (5)
cortex/monitor/resource_monitor.py (4)
112-203: Metrics collection + error handling look solid.
207-266: Alert generation and history updates look good.
302-416: Summary/accessor APIs look consistent.
419-446: Recommendation logic is clear.

cortex/cli.py (1)

95-144: Monitoring/export wiring and CLI flags look consistent.

Also applies to: 161-179, 952-956, 1158-1190, 3146-3168, 3272-3287, 3784-3821
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
def monitor(self, args: argparse.Namespace) -> int:
    """Show current system resource usage."""
    resource_monitor = ResourceMonitor(interval=1.0)
    duration = getattr(args, "duration", None)

    console.print("System Health:")

    metrics = self._collect_monitoring_metrics(resource_monitor, duration)
    if metrics:
        console.print(MonitorUI.format_system_health(metrics))

    self._display_alerts(metrics)

    export_result = self._handle_monitor_export(resource_monitor, args)
    if export_result != 0:
        return export_result

    self._display_recommendations(resource_monitor)
    return 0
Add audit logging for cortex monitor.
The new monitor command doesn’t record an audit entry. Please log this operation to the history DB so it’s traceable.
🛠️ Example (using existing history API)
resource_monitor = ResourceMonitor(interval=1.0)
duration = getattr(args, "duration", None)
+
+ history = InstallationHistory()
+ history.record_installation(
+ InstallationType.CONFIG,
+ ["monitor"],
+ [f"cortex monitor --duration {duration}" if duration else "cortex monitor"],
+ datetime.now(timezone.utc),
+    )

🤖 Prompt for AI Agents
In `@cortex/cli.py` around lines 64 - 83, Add an audit entry when the monitor
command runs by recording to the history DB (~/.cortex/history.db) inside the
monitor method: after parsing args and before or immediately after instantiating
ResourceMonitor (or right after console.print("System Health:")), call the
existing history API to create a record containing operation="monitor", the
provided args/duration, a timestamp, and the result code (use the export_result
or final return value). Update monitor to call the history logging function (use
the project’s history API/class you already have) and ensure failures to write
history are handled non-fatally (log a warning) so _collect_monitoring_metrics,
_display_alerts, _handle_monitor_export, and _display_recommendations keep
running.
def _get_latest_metrics(self, monitor: ResourceMonitor) -> dict:
    """Return latest collected metrics or take a fresh sample."""
    return monitor.history[-1] if monitor.history else monitor.sample()
Refresh metrics instead of reusing stale samples.
Once history is non-empty, _get_latest_metrics returns the same sample forever, so installation monitoring isn’t real-time. Sample again when the interval has elapsed.
🛠️ Proposed fix
- def _get_latest_metrics(self, monitor: ResourceMonitor) -> dict:
+ def _get_latest_metrics(self, monitor: ResourceMonitor) -> dict[str, Any]:
"""Return latest collected metrics or take a fresh sample."""
- return monitor.history[-1] if monitor.history else monitor.sample()
+ if not monitor.history:
+ return monitor.sample()
+ last = monitor.history[-1]
+ if time.time() - last.get("timestamp", 0) >= monitor.interval:
+ return monitor.sample()
+        return last

Also applies to: 1039-1046
🤖 Prompt for AI Agents
In `@cortex/cli.py` around lines 157 - 159, _get_latest_metrics currently always
returns monitor.history[-1] once history exists, producing stale readings;
change it to return monitor.history[-1] only if the last sample is still fresh,
otherwise call monitor.sample() to get a new reading. Implement this by checking
the timestamp/age of the last entry (e.g., monitor.history[-1].timestamp or
monitor.last_sample_time) against the configured interval (e.g.,
monitor.sample_interval or monitor.interval) and call monitor.sample() when the
elapsed time >= interval; otherwise return monitor.history[-1]. Update the same
logic wherever _get_latest_metrics behavior is duplicated (lines referenced
1039-1046).
# If --monitor is used, automatically enable execution and initialize the resource monitor.
resource_monitor = None
if monitor and not execute and not dry_run:
    print(f"📊 Monitoring enabled for: {software}")
    print("Note: Monitoring requires execution. Auto-enabling --execute flag.")
    execute = True
Don’t auto‑enable execution when --monitor is set.
Auto‑enabling execution bypasses the dry‑run default and may run installs unintentionally. Require explicit --execute (or exit with a clear error).
🛠️ Proposed fix
resource_monitor = None
if monitor and not execute and not dry_run:
- print(f"📊 Monitoring enabled for: {software}")
- print("Note: Monitoring requires execution. Auto-enabling --execute flag.")
- execute = True
+ self._print_error("Monitoring requires --execute. Re-run with --execute or --dry-run.")
+    return 1

🤖 Prompt for AI Agents
In `@cortex/cli.py` around lines 945 - 951, The code currently auto-enables
execution when monitor is set (see variables monitor, execute, dry_run and the
resource_monitor initialization); remove the auto-enable and require explicit
--execute instead: replace the block that sets execute = True with logic that
detects monitor and not execute, prints a clear error/notice about requiring
--execute to run monitoring (preserving dry_run behavior), and then exit
non‑zero (e.g., SystemExit or exit(1)); leave resource_monitor as None unless
execution is explicitly requested.
def export_to_json(
    history: list[dict[str, Any]],
    peak_usage: dict[str, float],
    output_file: str,
    include_recommendations: bool = False,
    get_recommendations_func: Callable[[], list[str]] | None = None,
) -> None:
    """
    Export monitoring data to a JSON file.

    Args:
        history: List of monitoring samples
        peak_usage: Peak resource usage dictionary
        output_file: Path to output JSON file
        include_recommendations: Whether to include performance recommendations
        get_recommendations_func: Function to generate recommendations (optional)

    Raises:
        OSError: If file cannot be written or directory cannot be created
        ValueError: If output_file is empty or None
        TypeError: If history or peak_usage have wrong types
        AttributeError: If get_recommendations_func is invalid when called
    """
    # Input validation
    if not output_file or not isinstance(output_file, str):
        raise ValueError(f"Invalid output_file: {output_file!r}")

    if not isinstance(history, list):
        raise TypeError(f"history must be a list, got {type(history).__name__}")

    if not isinstance(peak_usage, dict):
        raise TypeError(f"peak_usage must be a dict, got {type(peak_usage).__name__}")

    try:
        # Ensure output directory exists
        output_dir = os.path.dirname(os.path.abspath(output_file))
        if output_dir:  # Only create if there's a directory component
            os.makedirs(output_dir, exist_ok=True)

        payload = {
            "metadata": {
                "export_timestamp": time.time(),
                "export_date": time.ctime(),
                "samples_count": len(history),
                "format_version": "1.0",
            },
            "peak_usage": peak_usage,
            "samples": history,
        }

        # Add recommendations if requested
        if include_recommendations and get_recommendations_func:
            try:
                recommendations = get_recommendations_func()
                if isinstance(recommendations, list):
                    payload["recommendations"] = recommendations
                    logger.debug("Added recommendations to JSON export")
                else:
                    logger.warning(
                        "get_recommendations_func returned non-list: %s",
                        type(recommendations).__name__,
                    )
            except AttributeError as exc:
                logger.warning("Failed to call recommendations function: %s", exc)
            except (TypeError, ValueError) as exc:
                logger.warning("Error generating recommendations: %s", exc)
            except Exception as exc:
                logger.warning("Unexpected error generating recommendations: %s", exc)
            # Continue without recommendations - don't fail the export

        # Write JSON with proper encoding
        with open(output_file, "w", encoding="utf-8") as f:
            json.dump(payload, f, indent=2, default=str)

        logger.info("JSON export successful: %s", output_file)

    except OSError as exc:
        logger.error("File system error during JSON export to %s: %s", output_file, exc)
        raise
    except (json.JSONDecodeError, TypeError) as exc:
        logger.error("Data serialization error during JSON export: %s", exc)
        raise ValueError(f"Data cannot be serialized to JSON: {exc}") from exc

def export_to_csv(
    history: list[dict[str, Any]],
    output_file: str,
) -> None:
    """
    Export monitoring history to a CSV file.

    Args:
        history: List of monitoring samples
        output_file: Path to output CSV file

    Raises:
        OSError: If file cannot be written or directory cannot be created
        ValueError: If output_file is empty or None, or history has inconsistent structure
        TypeError: If history has wrong type
    """
    # Input validation
    if not output_file or not isinstance(output_file, str):
        raise ValueError(f"Invalid output_file: {output_file!r}")

    if not isinstance(history, list):
        raise TypeError(f"history must be a list, got {type(history).__name__}")

    try:
        # Ensure output directory exists
        output_dir = os.path.dirname(os.path.abspath(output_file))
        if output_dir:  # Only create if there's a directory component
            os.makedirs(output_dir, exist_ok=True)

        if not history:
            # Create file with standard headers for empty data
            with open(output_file, "w", newline="", encoding="utf-8") as f:
                # Use standard field names for empty data
                writer = csv.DictWriter(
                    f,
                    fieldnames=[
                        "timestamp",
                        "cpu_percent",
                        "memory_percent",
                        "disk_percent",
                        "alerts",
                    ],
                )
                writer.writeheader()
            logger.info("Empty CSV export created: %s", output_file)
            return

        # Get all possible fieldnames from all samples
        fieldnames_set = set()
        for sample in history:
            if not isinstance(sample, dict):
                raise ValueError(f"Sample must be a dict, got {type(sample).__name__}")
            fieldnames_set.update(sample.keys())

        if not fieldnames_set:
            raise ValueError("No fieldnames found in history data")

        fieldnames = sorted(fieldnames_set)

        with open(output_file, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()

            for i, sample in enumerate(history):
                try:
                    # Convert any non-serializable values to strings
                    row = {}
                    for key in fieldnames:
                        value = sample.get(key)
                        if isinstance(value, list):
                            # Convert lists (like alerts) to semicolon-separated strings
                            row[key] = "; ".join(str(item) for item in value)
                        elif value is not None:
                            row[key] = str(value)
                        else:
                            row[key] = ""
                    writer.writerow(row)
                except (KeyError, AttributeError) as exc:
                    logger.warning("Error processing sample %d: %s", i, exc)
                    # Skip problematic sample but continue export

        logger.info("CSV export successful: %s (%d rows)", output_file, len(history))

    except OSError as exc:
        logger.error("File system error during CSV export to %s: %s", output_file, exc)
        raise
    except csv.Error as exc:
        logger.error("CSV formatting error: %s", exc)
        raise ValueError(f"CSV formatting error: {exc}") from exc
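For reference, a hedged usage sketch of the two exporters; the sample shapes below are assumptions that mirror the fallback CSV headers (timestamp, cpu_percent, memory_percent, disk_percent, alerts):

history = [
    {
        "timestamp": 1700000000.0,
        "cpu_percent": 42.5,
        "memory_percent": 61.2,
        "disk_percent": 70.1,
        "alerts": ["High memory usage"],
    },
]
peak_usage = {"cpu_percent": 42.5, "memory_percent": 61.2, "disk_percent": 70.1}

export_to_json(history, peak_usage, "out/metrics.json")  # writes metadata + peak_usage + samples
export_to_csv(history, "out/metrics.csv")  # the alerts list is flattened to a "; "-joined string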
Add audit logging for export operations.
JSON/CSV export writes files but does not record an audit entry in ~/.cortex/history.db. Please log export attempts and outcomes (format, path, success/failure) via the project’s audit logging mechanism. As per coding guidelines, ...
🤖 Prompt for AI Agents
In `@cortex/monitor/exporter.py` around lines 22 - 191, Add audit log entries for
every export attempt in export_to_json and export_to_csv using the project's
audit logging mechanism: call the audit API (via the project's audit logger
helper) before returning or re-raising on error to record format ("json" or
"csv"), output path (output_file), timestamp, and result ("success" or
"failure") plus an error message when exceptions occur; instrument the
try/except blocks in export_to_json and export_to_csv (use the existing logger
and exception handlers) to create and persist the audit record on both success
(after logger.info) and on error (inside each except block) without changing
existing exception flows.
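A sketch of what that instrumentation could look like. The record_audit_event helper and its sqlite schema are hypothetical; the project's real audit API around ~/.cortex/history.db may differ:

import os
import sqlite3
import time

AUDIT_DB = os.path.expanduser("~/.cortex/history.db")  # assumed location

def record_audit_event(fmt: str, path: str, result: str, error: str = "") -> None:
    # Hypothetical helper: append one export-audit row (format, path, outcome).
    os.makedirs(os.path.dirname(AUDIT_DB), exist_ok=True)
    with sqlite3.connect(AUDIT_DB) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS export_audit "
            "(ts REAL, format TEXT, path TEXT, result TEXT, error TEXT)"
        )
        conn.execute(
            "INSERT INTO export_audit VALUES (?, ?, ?, ?, ?)",
            (time.time(), fmt, path, result, error),
        )

export_to_json would then call record_audit_event("json", output_file, "success") after its final logger.info, and record_audit_event("json", output_file, "failure", str(exc)) inside each except block before re-raising; export_to_csv mirrors this with "csv".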
def export_monitoring_data(
    monitor,
    format_type: str,
    output_file: str,
    include_recommendations: bool = True,
) -> bool:
    """
    Convenience function to export monitoring data from a ResourceMonitor instance.

    Args:
        monitor: ResourceMonitor instance with get_history() and get_peak_usage() methods
        format_type: 'json' or 'csv' (case-insensitive)
        output_file: Path to output file (must be non-empty string)
        include_recommendations: Whether to include recommendations (JSON only)

    Returns:
        bool: True if successful, False otherwise

    Raises:
        AttributeError: If monitor doesn't have required methods
        ValueError: If format_type is unsupported or output_file is invalid
    """
Docstring contradicts behavior.
export_monitoring_data states it raises AttributeError/ValueError, but it logs and returns False instead. Update the docstring (or re-raise) to match actual behavior.
📝 Suggested docstring fix
-    Raises:
-        AttributeError: If monitor doesn't have required methods
-        ValueError: If format_type is unsupported or output_file is invalid
+    Notes:
+        This function logs validation errors and returns False instead of raising.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def export_monitoring_data(
    monitor,
    format_type: str,
    output_file: str,
    include_recommendations: bool = True,
) -> bool:
    """
    Convenience function to export monitoring data from a ResourceMonitor instance.

    Args:
        monitor: ResourceMonitor instance with get_history() and get_peak_usage() methods
        format_type: 'json' or 'csv' (case-insensitive)
        output_file: Path to output file (must be non-empty string)
        include_recommendations: Whether to include recommendations (JSON only)

    Returns:
        bool: True if successful, False otherwise

    Notes:
        This function logs validation errors and returns False instead of raising.
    """
🤖 Prompt for AI Agents
In `@cortex/monitor/exporter.py` around lines 193 - 211, The docstring for
export_monitoring_data contradicts the implementation: the function currently
logs errors and returns False instead of raising AttributeError or ValueError;
update the docstring for export_monitoring_data to remove the "Raises" section
and explicitly state that on invalid inputs (missing monitor methods like
get_history/get_peak_usage, unsupported format_type, or invalid output_file) the
function logs an error and returns False, and that format_type is
case-insensitive ('json' or 'csv') and include_recommendations applies to JSON
only; alternatively, if you prefer raising, change the implementation in
export_monitoring_data to re-raise AttributeError/ValueError where validation
fails (referencing monitor.get_history()/get_peak_usage and
format_type/output_file checks) so the docstring remains accurate.
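Only the signature and docstring appear in the diff; a body consistent with the log-and-return-False contract might look like this sketch (the validation and dispatch details are assumptions):

def export_monitoring_data(monitor, format_type, output_file, include_recommendations=True):
    # Sketch: validate inputs, dispatch to the format-specific exporter,
    # and convert failures into a False return instead of raising.
    if not (hasattr(monitor, "get_history") and hasattr(monitor, "get_peak_usage")):
        logger.error("monitor lacks get_history()/get_peak_usage()")
        return False
    fmt = (format_type or "").lower()
    try:
        if fmt == "json":
            export_to_json(
                monitor.get_history(),
                monitor.get_peak_usage(),
                output_file,
                include_recommendations=include_recommendations,
                get_recommendations_func=getattr(monitor, "get_recommendations", None),
            )
        elif fmt == "csv":
            export_to_csv(monitor.get_history(), output_file)
        else:
            logger.error("Unsupported export format: %r", format_type)
            return False
        return True
    except (OSError, ValueError, TypeError) as exc:
        logger.error("Export failed: %s", exc)
        return False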
def __init__(
    self,
    interval: float = 1.0,
    cpu_threshold: float = DEFAULT_CPU_ALERT_THRESHOLD,
    memory_threshold: float = DEFAULT_MEMORY_ALERT_THRESHOLD,
    disk_threshold: float = DEFAULT_DISK_ALERT_THRESHOLD,
    max_history_size: int | None = DEFAULT_MAX_HISTORY_SIZE,
) -> None:
    """
    Initialize a ResourceMonitor instance.

    Args:
        interval: Sampling interval in seconds (must be > 0)
        cpu_threshold: CPU usage percentage that triggers alerts
        memory_threshold: Memory usage percentage that triggers alerts
        disk_threshold: Disk usage percentage that triggers alerts
        max_history_size: Maximum number of samples to store (None = unlimited)

    Raises:
        ValueError: If interval <= 0 or thresholds are not in valid range (0-100)

    Note:
        Thresholds are expressed as percentages (0-100). Values outside
        this range will be clamped to valid percentage bounds.
    """
    if interval <= 0:
        raise ValueError(f"Interval must be positive, got {interval}")

    # Validate thresholds are within reasonable bounds
    for name, value in [
        ("cpu_threshold", cpu_threshold),
        ("memory_threshold", memory_threshold),
        ("disk_threshold", disk_threshold),
    ]:
        if not 0 <= value <= 100:
            logger.warning(
                "%s %.1f%% is outside recommended range 0-100%%, consider adjusting",
                name,
                value,
            )

    self.interval = interval
    self.cpu_threshold = cpu_threshold
    self.memory_threshold = memory_threshold
    self.disk_threshold = disk_threshold
    self.max_history_size = max_history_size
Clamp thresholds or fix the docstring.
The docstring promises clamping to 0–100, but the code only logs warnings. Out-of-range values can cause alerts that fire constantly or never. Either clamp the values or update the note.
🛠️ Proposed fix (clamp to 0–100)
-    self.cpu_threshold = cpu_threshold
-    self.memory_threshold = memory_threshold
-    self.disk_threshold = disk_threshold
+    def _clamp(value: float) -> float:
+        return max(0.0, min(100.0, value))
+
+    self.cpu_threshold = _clamp(cpu_threshold)
+    self.memory_threshold = _clamp(memory_threshold)
+    self.disk_threshold = _clamp(disk_threshold)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def __init__(
    self,
    interval: float = 1.0,
    cpu_threshold: float = DEFAULT_CPU_ALERT_THRESHOLD,
    memory_threshold: float = DEFAULT_MEMORY_ALERT_THRESHOLD,
    disk_threshold: float = DEFAULT_DISK_ALERT_THRESHOLD,
    max_history_size: int | None = DEFAULT_MAX_HISTORY_SIZE,
) -> None:
    """
    Initialize a ResourceMonitor instance.

    Args:
        interval: Sampling interval in seconds (must be > 0)
        cpu_threshold: CPU usage percentage that triggers alerts
        memory_threshold: Memory usage percentage that triggers alerts
        disk_threshold: Disk usage percentage that triggers alerts
        max_history_size: Maximum number of samples to store (None = unlimited)

    Raises:
        ValueError: If interval <= 0 or thresholds are not in valid range (0-100)

    Note:
        Thresholds are expressed as percentages (0-100). Values outside
        this range will be clamped to valid percentage bounds.
    """
    if interval <= 0:
        raise ValueError(f"Interval must be positive, got {interval}")

    # Validate thresholds are within reasonable bounds
    for name, value in [
        ("cpu_threshold", cpu_threshold),
        ("memory_threshold", memory_threshold),
        ("disk_threshold", disk_threshold),
    ]:
        if not 0 <= value <= 100:
            logger.warning(
                "%s %.1f%% is outside recommended range 0-100%%, consider adjusting",
                name,
                value,
            )

    self.interval = interval

    def _clamp(value: float) -> float:
        return max(0.0, min(100.0, value))

    self.cpu_threshold = _clamp(cpu_threshold)
    self.memory_threshold = _clamp(memory_threshold)
    self.disk_threshold = _clamp(disk_threshold)
    self.max_history_size = max_history_size
🤖 Prompt for AI Agents
In `@cortex/monitor/resource_monitor.py` around lines 49 - 91, The __init__
currently only logs when cpu_threshold, memory_threshold, or disk_threshold are
outside 0–100 despite the docstring saying they are clamped; modify __init__ to
actually clamp each of cpu_threshold, memory_threshold, and disk_threshold to
the 0–100 range (e.g., value = max(0, min(100, value))) after validation and
update the logger call to note that the value was clamped instead of only
warning, then assign the clamped values to self.cpu_threshold,
self.memory_threshold, and self.disk_threshold so the rest of ResourceMonitor
uses safe percentage values.
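Assuming the clamping fix is applied, out-of-range thresholds would be normalized at construction time:

m = ResourceMonitor(cpu_threshold=150.0, memory_threshold=-5.0)
assert m.cpu_threshold == 100.0   # clamped down from 150.0
assert m.memory_threshold == 0.0  # clamped up from -5.0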
def monitor(self, duration: float | None = None) -> None:
    """
    Run continuous monitoring for specified duration.

    Args:
        duration: Monitoring duration in seconds. If None, runs until
            interrupted (typically by KeyboardInterrupt).

    Raises:
        KeyboardInterrupt: If monitoring interrupted by user
        OSError: If system metrics cannot be accessed
        RuntimeError: If monitoring loop encounters fatal error

    Note:
        Use Ctrl+C to interrupt monitoring when duration is None.
        First sample may have 0.0 for disk/network rates as they
        require a previous sample for calculation.
    """
    start_time = time.time()

    try:
        while True:
            if duration and (time.time() - start_time) >= duration:
                break
            self.sample()
            time.sleep(self.interval)

    except KeyboardInterrupt:
        logger.info("Monitoring interrupted by user")
    except (OSError, RuntimeError, ValueError) as exc:
        logger.error("Monitoring error: %s", exc)
        raise
Guard against non‑positive duration.
duration=0 (or negative) is treated as falsy, so the loop won’t stop. Add an explicit check to avoid an unintended infinite run.
🛠️ Proposed fix
 def monitor(self, duration: float | None = None) -> None:
     """
     Run continuous monitoring for specified duration.
     @@
     """
+    if duration is not None and duration <= 0:
+        raise ValueError(f"Duration must be positive, got {duration}")
     start_time = time.time()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def monitor(self, duration: float | None = None) -> None:
    """
    Run continuous monitoring for specified duration.

    Args:
        duration: Monitoring duration in seconds. If None, runs until
            interrupted (typically by KeyboardInterrupt).

    Raises:
        KeyboardInterrupt: If monitoring interrupted by user
        OSError: If system metrics cannot be accessed
        RuntimeError: If monitoring loop encounters fatal error

    Note:
        Use Ctrl+C to interrupt monitoring when duration is None.
        First sample may have 0.0 for disk/network rates as they
        require a previous sample for calculation.
    """
    if duration is not None and duration <= 0:
        raise ValueError(f"Duration must be positive, got {duration}")
    start_time = time.time()

    try:
        while True:
            if duration and (time.time() - start_time) >= duration:
                break
            self.sample()
            time.sleep(self.interval)

    except KeyboardInterrupt:
        logger.info("Monitoring interrupted by user")
    except (OSError, RuntimeError, ValueError) as exc:
        logger.error("Monitoring error: %s", exc)
        raise
🤖 Prompt for AI Agents
In `@cortex/monitor/resource_monitor.py` around lines 270 - 299, The monitor
method currently treats 0 or negative duration as falsy and will run
indefinitely; add an explicit guard at the start of monitor to validate
duration: if duration is not None and duration <= 0, raise a ValueError (or
return early) with a clear message; place this check before computing start_time
so the loop (and calls to self.sample()/time.sleep(self.interval)) are never
entered when duration is non‑positive. Reference symbols: monitor, duration,
start_time, self.sample, self.interval.
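With the guard in place, a regression test along these lines (hypothetical, pytest-style) would pin the behavior:

import pytest

def test_monitor_rejects_non_positive_duration():
    m = ResourceMonitor(interval=0.1)
    with pytest.raises(ValueError):
        m.monitor(duration=0)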
Anshgrover23
left a comment
@ThankaBharathi Address coderabbitai comments.




Related Issue
Closes #617
Summary
This PR introduces real-time system resource monitoring to Cortex.
Key highlights:
New cortex monitor command and cortex install --monitor flag.
This feature improves visibility into system impact during installations and helps users diagnose performance bottlenecks.
AI Disclosure
Used ChatGPT to assist with debugging test failures, refining CLI logic, and validating edge cases related to monitoring, export handling, and Git workflows.
Checklist
- Commit messages follow the type(scope): description format
- Tests pass (pytest tests/)

Demo
issue_51.mp4
Summary by CodeRabbit
Release Notes
✏️ Tip: You can customize this high-level summary in your review settings.