feat(tarball): Add tarball/source build helper (Issue #452) #558
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a Tarball/Source Build Helper and CLI integration: a new cortex/tarball_helper.py module plus tarball subcommands wired into cortex/cli.py.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as cortex/cli.py
    participant Helper as TarballHelper
    participant FS as Source Files
    participant Sys as pkg-config/apt/dpkg
    User->>CLI: tarball analyze <source_dir>
    CLI->>Helper: run_analyze_command(source_dir)
    Helper->>FS: read manifests (CMakeLists, configure.ac, meson.build, setup.py, Makefile)
    FS-->>Helper: file contents
    Helper->>Helper: detect build system & parse deps
    Helper->>Sys: check_installed(dependencies)
    Sys-->>Helper: installed status
    Helper-->>CLI: analysis results
    CLI-->>User: display analysis
```
```mermaid
sequenceDiagram
    participant User
    participant CLI as cortex/cli.py
    participant Helper as TarballHelper
    participant PkgMgr as apt / apt-cache
    participant History as ~/.cortex/manual_builds.json
    User->>CLI: tarball install-deps <source_dir> [--dry-run]
    CLI->>Helper: run_install_deps_command(source_dir, dry_run)
    Helper->>Helper: determine missing packages
    Helper->>PkgMgr: apt-get install (or simulate)
    PkgMgr-->>Helper: result
    Helper-->>CLI: report install outcome
    User->>CLI: tarball track <name> <source_dir> [--packages ...]
    CLI->>Helper: run_track_command(name, source_dir, packages)
    Helper->>History: _load_history()
    Helper->>History: _save_history(updated)
    Helper-->>CLI: confirm tracked
    User->>CLI: tarball cleanup <name> [--dry-run]
    CLI->>Helper: run_cleanup_command(name, dry_run)
    Helper->>History: _load_history()
    Helper->>PkgMgr: remove packages (or simulate)
    Helper->>History: _save_history(updated)
    Helper-->>CLI: cleanup result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
CLA Verification Passed: All contributors have signed the CLA.
Summary of Changes

Hello @jeremylongshore, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new cortex tarball command-line utility designed to simplify the often complex process of building software from source code. It provides analysis of various build systems to identify dependencies, automates the installation of missing development packages, and offers tracking and cleanup mechanisms for manual installations. This feature aims to reduce the friction and manual effort involved in managing custom-built software.
Code Review
This pull request introduces a powerful tarball helper to analyze and manage dependencies for source builds. The implementation is well-structured with a dedicated tarball_helper module and comprehensive test coverage. The code is clear and the feature is a great addition to the CLI.
My review focuses on improving the robustness of the implementation. I've identified some areas for improvement:
- The dependency parsing logic in tarball_helper.py uses regular expressions that are too simple for complex, real-world build files, which could lead to missed dependencies.
- The test suite for this parsing logic could be expanded to cover these more complex cases.
- In cli.py, the exception handling is overly broad, and a minor logic improvement can make package name parsing more robust.
Overall, this is a very strong contribution. Addressing these points will make the new feature even more reliable.
```python
content = cmake_file.read_text(encoding="utf-8", errors="ignore")

# Find find_package() calls
for match in re.finditer(r"find_package\s*\(\s*(\w+)", content, re.IGNORECASE):
```
The regular expression used for find_package is a bit too simple and may fail on common CMake syntax, leading to missed dependencies. For example, it won't match package names containing hyphens or numbers if not at the end (e.g., Qt5-Core), and it doesn't correctly handle arguments that follow the package name.
A more robust regex that captures the first argument would handle more cases. A similar issue exists with the pkg_check_modules regex on line 261, which can fail to find all packages if multiple are listed.
```diff
-for match in re.finditer(r"find_package\s*\(\s*(\w+)", content, re.IGNORECASE):
+for match in re.finditer(r"find_package\s*\(\s*([^\s\)]+)", content, re.IGNORECASE):
```
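A quick way to see the difference between the two patterns (the input strings are hypothetical):

```python
import re

content = "find_package(Qt5-Core REQUIRED)\nfind_package(Boost 1.71.0 COMPONENTS system)"

old = re.findall(r"find_package\s*\(\s*(\w+)", content, re.IGNORECASE)
new = re.findall(r"find_package\s*\(\s*([^\s\)]+)", content, re.IGNORECASE)

print(old)  # ['Qt5', 'Boost']      - 'Qt5-Core' is truncated at the hyphen
print(new)  # ['Qt5-Core', 'Boost'] - the full first argument is captured
```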
```python
if action == "analyze":
    try:
        run_analyze_command(args.source_dir)
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "install-deps":
    try:
        run_install_deps_command(
            args.source_dir,
            dry_run=getattr(args, "dry_run", False),
        )
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "track":
    try:
        packages = []
        if hasattr(args, "packages") and args.packages:
            packages = args.packages.split(",")
        run_track_command(args.name, args.source_dir, packages)
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "list":
    try:
        run_list_command()
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "cleanup":
    try:
        run_cleanup_command(
            args.name,
            dry_run=getattr(args, "dry_run", False),
        )
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1
```
The repeated use of except Exception as e is too broad. It can catch and hide unexpected errors, making debugging difficult. It's better to catch the specific exceptions you expect from the tarball_helper functions (like ValueError or FileNotFoundError) and let other exceptions propagate to a higher-level handler. If you must catch a broad exception, consider logging the full traceback when in verbose mode to aid debugging.
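A hedged sketch of what the narrower handling could look like for one subcommand (the exception tuple and the verbose flag are illustrative, not part of the current PR):

```python
import traceback

if action == "analyze":
    try:
        run_analyze_command(args.source_dir)
        return 0
    except (ValueError, OSError) as e:
        # FileNotFoundError is a subclass of OSError, so bad paths and
        # unreadable build files both land here; anything else propagates.
        console.print(f"[red]Error: {e}[/red]")
        if getattr(args, "verbose", False):
            # Show the full traceback only in verbose mode to aid debugging.
            console.print(traceback.format_exc())
        return 1
```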
cortex/cli.py (outdated)
```python
try:
    packages = []
    if hasattr(args, "packages") and args.packages:
        packages = args.packages.split(",")
```
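A minimal sketch of the more robust parsing hinted at here (it matches the whitespace-stripping fix that lands later in this PR):

```python
packages = []
if hasattr(args, "packages") and args.packages:
    # Strip whitespace and drop empty entries so "a, b,,c " parses cleanly.
    packages = [p.strip() for p in args.packages.split(",") if p.strip()]
```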
```python
class TestAnalyzeCMake:
    """Test CMake analysis."""

    def test_analyze_cmake_find_package(self, tmp_path):
        cmake_content = """
        cmake_minimum_required(VERSION 3.10)
        project(MyProject)
        find_package(OpenSSL REQUIRED)
        find_package(ZLIB)
        """
        (tmp_path / "CMakeLists.txt").write_text(cmake_content)

        helper = TarballHelper()
        with patch.object(helper, "_check_installed"):
            analysis = helper.analyze(tmp_path)

        assert analysis.build_system == BuildSystem.CMAKE
        pkg_names = [d.name for d in analysis.dependencies]
        assert "openssl" in pkg_names
        assert "zlib" in pkg_names

    def test_analyze_cmake_pkg_check_modules(self, tmp_path):
        cmake_content = """
        pkg_check_modules(GLIB REQUIRED glib-2.0)
        """
        (tmp_path / "CMakeLists.txt").write_text(cmake_content)

        helper = TarballHelper()
        with patch.object(helper, "_check_installed"):
            analysis = helper.analyze(tmp_path)

        pkg_names = [d.name for d in analysis.dependencies]
        assert "glib-2.0" in pkg_names
```
The tests for _analyze_cmake are good, but they only cover simple cases that the current regular expressions can handle. To make the test suite more robust and catch the fragility of the regex-based parsing, consider adding test cases with more complex, real-world CMakeLists.txt syntax.
Examples of valuable test cases to add:
- find_package with package names containing numbers and hyphens (e.g., find_package(Qt5-Core)).
- pkg_check_modules with multiple packages listed (e.g., pkg_check_modules(GSTREAMER gstreamer-1.0 gstreamer-video-1.0)).
- find_package with version numbers and components (e.g., find_package(Boost 1.71.0 REQUIRED COMPONENTS program_options)).
Adding these would help ensure the dependency analysis is reliable across a wider range of projects.
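As a sketch, one such test might look like this (the test name and expected package names are illustrative and assume the more robust parsing is in place):

```python
def test_analyze_cmake_complex_syntax(self, tmp_path):
    # Hyphenated names, a versioned find_package, and multiple
    # pkg_check_modules entries in one CMakeLists.txt.
    cmake_content = """
    find_package(Boost 1.71.0 REQUIRED COMPONENTS program_options)
    pkg_check_modules(GSTREAMER gstreamer-1.0 gstreamer-video-1.0)
    """
    (tmp_path / "CMakeLists.txt").write_text(cmake_content)

    helper = TarballHelper()
    with patch.object(helper, "_check_installed"):
        analysis = helper.analyze(tmp_path)

    pkg_names = [d.name for d in analysis.dependencies]
    assert "boost" in pkg_names
    assert "gstreamer-1.0" in pkg_names
    assert "gstreamer-video-1.0" in pkg_names
```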
Actionable comments posted: 0
🧹 Nitpick comments (7)
tests/test_tarball_helper.py (2)
264-277: Consider adding an assertion for the subprocess command.

The success and failure tests mock subprocess.run but don't verify the command arguments. Consider adding an assertion to confirm the correct apt-get command is constructed.

💡 Optional enhancement

```diff
 def test_install_deps_success(self):
     helper = TarballHelper()
     mock_result = MagicMock(returncode=0)
-    with patch("subprocess.run", return_value=mock_result):
+    with patch("subprocess.run", return_value=mock_result) as mock_run:
         result = helper.install_dependencies(["zlib1g-dev"])
+        mock_run.assert_called_once()
+        assert "apt-get" in mock_run.call_args[0][0]
     assert result is True
```
375-410: Run command tests verify no exceptions are raised but lack output assertions.

These tests confirm the commands execute without error, which is valuable. For higher confidence, consider capturing console output and verifying expected content is printed.
cortex/tarball_helper.py (5)
317-324: AC_CHECK_LIB mapping may not always find apt packages.

The code tries to find the apt package as lib{lib_name} in PKGCONFIG_PACKAGES, but many libraries (e.g., z → zlib, crypto → libssl-dev) don't follow this pattern, so the mapping may miss some packages. This is a minor limitation since the tool still identifies the dependency; the apt package suggestion just won't always work.

Consider expanding the library-to-package mapping for common cases like z → zlib1g-dev and crypto → libssl-dev.
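One possible shape for that expanded mapping, as a sketch (the constant name is illustrative; the values are the standard Debian/Ubuntu development packages):

```python
# Libraries whose apt -dev package does not follow the lib{name}-dev pattern.
LIB_TO_APT_PACKAGE: dict[str, str] = {
    "z": "zlib1g-dev",          # AC_CHECK_LIB(z, ...) -> zlib
    "crypto": "libssl-dev",     # OpenSSL's libcrypto
    "ssl": "libssl-dev",
    "m": "libc6-dev",           # libm ships with the C library
    "pthread": "libc6-dev",
    "curl": "libcurl4-openssl-dev",
}
```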
374-386: Consider adding a timeout to subprocess calls.

The subprocess.run calls for pkg-config and dpkg-query don't specify a timeout. While these are typically fast, in edge cases (broken dpkg, stale locks) they could hang indefinitely.

💡 Add timeout to prevent potential hangs

```diff
 elif dep.dep_type == "pkg-config" and dep.apt_package:
     # Check via pkg-config
     result = subprocess.run(
         ["pkg-config", "--exists", dep.name],
         capture_output=True,
+        timeout=5,
     )
     dep.found = result.returncode == 0
 elif dep.apt_package:
     # Check if apt package is installed
     result = subprocess.run(
         ["dpkg-query", "-W", "-f=${Status}", dep.apt_package],
         capture_output=True,
         text=True,
+        timeout=5,
     )
```
448-451: Consider providing error feedback on installation failure.

When apt-get install fails, the method returns False but doesn't provide any feedback to the user about what went wrong. The apt output would have been shown, but a follow-up message might help.

💡 Optional: Add failure message

```diff
 result = subprocess.run(cmd)
-return result.returncode == 0
+if result.returncode != 0:
+    console.print("[red]Package installation failed. Check the output above for details.[/red]")
+    return False
+return True
```
552-565: History removal happens before package removal confirmation.

The installation is removed from history (lines 553-554) before the package removal prompt. If the user declines package removal or if it fails, the history is still lost.

Consider: should the tracking entry be preserved if the user declines to remove packages?

💡 Optional: Delay history removal

```diff
-# Remove from history
-del history[name]
-self._save_history(history)
-
-console.print(f"[green]Removed '{name}' from tracking.[/green]")
-
 if packages:
     remove_pkgs = Confirm.ask(
         f"Remove {len(packages)} packages that were installed for this build?"
     )
     if remove_pkgs:
         cmd = ["sudo", "apt-get", "remove", "-y"] + packages
         subprocess.run(cmd)
+
+# Remove from history after user decisions
+del history[name]
+self._save_history(history)
+console.print(f"[green]Removed '{name}' from tracking.[/green]")
+
 return True
```
577-580: Consider specifying encoding for write_text.

_load_history doesn't specify an encoding for read_text(), and _save_history doesn't specify one for write_text(). While Python 3 defaults to UTF-8 on most systems, explicit encoding ensures consistency.

💡 Optional: Explicit encoding

```diff
 def _load_history(self) -> dict:
     """Load installation history from file."""
     if not self.history_file.exists():
         return {}
     try:
-        return json.loads(self.history_file.read_text())
+        return json.loads(self.history_file.read_text(encoding="utf-8"))
     except (json.JSONDecodeError, OSError):
         return {}

 def _save_history(self, history: dict) -> None:
     """Save installation history to file."""
-    self.history_file.write_text(json.dumps(history, indent=2))
+    self.history_file.write_text(json.dumps(history, indent=2), encoding="utf-8")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cortex/cli.py
- cortex/tarball_helper.py
- tests/test_tarball_helper.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- tests/test_tarball_helper.py
- cortex/tarball_helper.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_tarball_helper.py
🧬 Code graph analysis (2)
cortex/cli.py (1)
cortex/tarball_helper.py (5)
- run_analyze_command (582-647)
- run_cleanup_command (699-707)
- run_install_deps_command (650-670)
- run_list_command (673-696)
- run_track_command (710-726)
cortex/tarball_helper.py (1)
cortex/branding.py (1)
- cx_header (82-88)
🔇 Additional comments (18)
cortex/cli.py (3)
604-685: LGTM! Clean implementation following existing patterns.

The tarball() method correctly:
- Uses lazy imports to avoid loading tarball_helper unless needed
- Handles each subcommand with try/except for error resilience
- Returns appropriate exit codes (0 for success, 1 for error)
- Provides helpful usage info when no action is specified

The structure is consistent with existing command handlers like sandbox() and notify().

2357-2399: Argument parser definitions are well-structured.

The tarball subcommands are defined consistently with other CLI subcommands. The --packages argument for track uses comma-separated values, which is handled in the tarball() method via args.packages.split(",").

2662-2663: Command routing is correct.

tests/test_tarball_helper.py (6)
1-25: Well-organized test file with comprehensive imports.

The test file follows good practices:
- Clear module docstring explaining purpose
- All necessary imports from the module under test
- Proper use of pytest and unittest.mock

27-74: Good dataclass and enum tests.

These tests verify the basic data structures work correctly with default values and full initialization.

85-131: Comprehensive build system detection tests.

All six build system types are tested, plus the error case for invalid paths. The tests correctly use tmp_path for isolation.

133-218: Analysis tests cover the main parsing patterns well.

The tests mock _check_installed appropriately to avoid external tool calls during testing, while still verifying the parsing logic extracts dependencies correctly.

451-827: Excellent edge case coverage.

The advanced test classes thoroughly cover:
- Empty build files
- Missing configuration files
- Corrupt JSON history
- Command-line scenarios with missing directories
- Various fallback behaviors

This provides good confidence in error handling paths.

829-830: Good test entry point.

cortex/tarball_helper.py (9)
1-24: Well-structured module with clear imports and organization.

The module docstring explains the purpose, and imports are organized logically. Using rich for console output aligns with the rest of the codebase.

26-94: Well-designed data models with proper type hints and docstrings.

The dataclasses correctly:
- Use field(default_factory=list) for mutable default arguments
- Include comprehensive docstrings as required by the coding guidelines
- Use Optional[str] for nullable fields

96-171: Good class design with comprehensive package mappings.

The class follows good patterns:
- Clear docstring explaining purpose
- Class-level constants for mappings (26 common packages)
- Safe directory creation in __init__

172-199: Clean build system detection with sensible priority order.

The detection order prioritizes more specific build systems (CMake, Meson) before generic ones (Makefile), which is correct since a project might have both.

200-243: Well-structured analysis orchestration.

The analyze method cleanly:
- Resolves paths to absolute
- Delegates to specific analyzers
- Checks installation status
- Collects missing packages (with deduplication)
- Generates appropriate build commands

244-282: Good CMake parsing for common patterns.

The regex patterns correctly handle:
- find_package(OpenSSL REQUIRED)
- pkg_check_modules(GLIB REQUIRED glib-2.0)
- CHECK_INCLUDE_FILE("zlib.h" HAVE_ZLIB)

Using errors="ignore" is appropriate for potentially mixed-encoding build files.

453-488: Good alternative package detection with sensible fallbacks.

The method:
- First tries an exact name match
- Falls back to the lib{name} prefix
- Prefers -dev packages for build dependencies

This covers common scenarios well.

582-648: Well-formatted analysis output with rich components.

The run_analyze_command function provides an excellent user experience:
- Shows the detected build system
- Suggests packaged alternatives when available
- Displays dependencies in a formatted table
- Shows missing packages with an install command
- Lists build commands

650-726: Clean command functions delegating to TarballHelper.

All run_* functions follow a consistent pattern:
- Validate inputs
- Create a TarballHelper instance
- Delegate to the appropriate method
- Handle output/feedback
Anshgrover23
left a comment
@jeremylongshore Documentation is missing. Also, kindly address the bot comments.
Thanks for the review! I've addressed all the feedback:

Bot comment fixes:

Documentation:

All 71 tests passing with 99% code coverage. Ready for re-review.
Actionable comments posted: 0
🧹 Nitpick comments (1)
docs/TARBALL_HELPER.md (1)
120-120: Consider using lowercase for CMake function names.

While CMake is case-insensitive, the convention is to use lowercase for function names. check_include_file() is more commonly seen in CMake documentation than CHECK_INCLUDE_FILE().

📝 Minor documentation improvement

```diff
-| CMake | `CMakeLists.txt` | `find_package()`, `pkg_check_modules()`, `CHECK_INCLUDE_FILE()` |
+| CMake | `CMakeLists.txt` | `find_package()`, `pkg_check_modules()`, `check_include_file()` |
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/tarball_helper.py
- docs/TARBALL_HELPER.md
🚧 Files skipped from review as they are similar to previous changes (1)
- cortex/tarball_helper.py
🔇 Additional comments (6)
docs/TARBALL_HELPER.md (6)
1-52: Excellent introduction and quick start section.

The overview clearly communicates the tool's purpose and key features. The quick start example is comprehensive, showing the complete workflow from analysis through cleanup with realistic output. The progression is logical and immediately demonstrates value to users.

54-115: Clear and comprehensive command documentation.

All five commands are well documented with usage examples and options. The structure is consistent, making it easy for users to find the information they need.

126-161: Build commands are accurate and helpful.

The build commands for each system follow standard practices and include helpful notes (like the autoreconf step for Autotools). This will guide users effectively through the build process.

162-200: Dependency mappings are accurate and comprehensive.

The header-to-package, pkg-config-to-package, and build-tool mappings are correct for Debian/Ubuntu systems. This will be very helpful for users trying to resolve build dependencies.

201-246: Storage format and workflow example are well documented.

The JSON storage format is clearly documented with all fields explained. The nginx workflow example is particularly valuable: it walks through a complete real-world scenario from download to cleanup, including the helpful reminder about considering packaged alternatives.

248-283: Limitations and troubleshooting sections add valuable context.

The documentation honestly acknowledges the tool's limitations, which helps set appropriate user expectations. The troubleshooting section addresses likely pain points with practical solutions, including correct apt-cache commands for manual investigation.
Adding myself to enable CLA checks on PRs cortexlinux#557 and cortexlinux#558. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@Anshgrover23 I've addressed all feedback:

Once #568 is merged, the CLA check will pass. Ready for re-review!
Anshgrover23
left a comment
@jeremylongshore CI is failing.
Anshgrover23
left a comment
@jeremylongshore Address coderabbit comments.
CLA PR #568 has been merged. Could you please re-run the CLA check? Thanks!

recheck
Same issue as #557 - the CLA failure is due to the Co-Authored-By trailers. Could a maintainer re-run the CLA check?
Adding myself to enable CLA checks on PRs cortexlinux#557 and cortexlinux#558. Co-authored-by: jeremylongshore <jeremylongshore@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Implements issue cortexlinux#452 - Tarball Helper with 4 key features:

1. analyze: Detect build system (CMake, Meson, Autotools, Python, Make) and parse dependencies from build files
2. install-deps: Install missing -dev packages via apt
3. track: Record manual installations with packages used
4. cleanup: Remove tracked installations and optionally packages

Features:
- Auto-detects 6 build systems (CMake, Meson, Autotools, Make, Python, Unknown)
- Parses find_package(), pkg_check_modules(), dependency(), AC_CHECK_*
- Maps headers and pkg-config names to apt packages
- Suggests packaged alternatives before building from source
- Tracks installation history in ~/.cortex/manual_builds.json

Demo: https://asciinema.org/a/rFzQXtU4HqmrRscL
- Add timeouts (5s) to pkg-config and dpkg-query subprocess calls
- Add error feedback message when apt-get install fails
- Fix history removal timing: move deletion after user interactions
- Add explicit UTF-8 encoding to history file read/write operations
- Add comprehensive documentation in docs/TARBALL_HELPER.md
Force-pushed a7cb1a8 to 2b20c2a (Compare)
Rebased and removed the Co-Authored-By trailers. CLA check is now passing. Ready for review!
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @cortex/tarball_helper.py:
- Around line 51-53: Change the type annotation for the variable apt_package
from Optional[str] to the PEP 604 union style str | None, and then remove the
now-unused Optional import from the top-level imports as well as the other
unused Optional import elsewhere in the module (the remaining import that was
only used for Optional). Ensure no other references to Optional remain before
committing.
- Around line 466-500: The return type annotation for find_alternative should
use the PEP 604 union syntax instead of Optional: change the signature from def
find_alternative(self, name: str) -> Optional[str]: to def
find_alternative(self, name: str) -> str | None: and update imports/usages
accordingly (remove or replace typing.Optional import and any other references
to Optional in this file) so the linter stops flagging the annotation.
🧹 Nitpick comments (4)
cortex/tarball_helper.py (4)
366-394: Consider handling missing pkg-config or dpkg-query commands.

If pkg-config or dpkg-query are not installed, subprocess.run will raise FileNotFoundError. This could cause the entire analysis to fail on minimal systems.

♻️ Proposed fix to handle missing commands gracefully

```diff
 elif dep.dep_type == "pkg-config" and dep.apt_package:
     # Check via pkg-config
     try:
         result = subprocess.run(
             ["pkg-config", "--exists", dep.name],
             capture_output=True,
             timeout=5,
         )
         dep.found = result.returncode == 0
-    except subprocess.TimeoutExpired:
+    except (subprocess.TimeoutExpired, FileNotFoundError):
         dep.found = False
 elif dep.apt_package:
     # Check if apt package is installed
     try:
         result = subprocess.run(
             ["dpkg-query", "-W", "-f=${Status}", dep.apt_package],
             capture_output=True,
             text=True,
             timeout=5,
         )
         dep.found = "install ok installed" in result.stdout
-    except subprocess.TimeoutExpired:
+    except (subprocess.TimeoutExpired, FileNotFoundError):
         dep.found = False
```
565-580: Consider checking the result of package removal.

The apt-get remove command result is not checked. If removal fails, the user won't get feedback, yet the installation is still removed from history.

♻️ Proposed improvement

```diff
 if packages:
     remove_pkgs = Confirm.ask(
         f"Remove {len(packages)} packages that were installed for this build?"
     )
     if remove_pkgs:
         cmd = ["sudo", "apt-get", "remove", "-y"] + packages
-        subprocess.run(cmd)
+        result = subprocess.run(cmd)
+        if result.returncode != 0:
+            console.print("[yellow]Warning: Package removal may have failed.[/yellow]")

 # Remove from history after all user interactions
 del history[name]
 self._save_history(history)
```
591-593: Consider handling write errors in _save_history.

If the write fails (disk full, permission denied), the exception will propagate up and could leave the system in an inconsistent state where packages were removed but history wasn't updated.

♻️ Suggested improvement

```diff
 def _save_history(self, history: dict) -> None:
     """Save installation history to file."""
-    self.history_file.write_text(json.dumps(history, indent=2), encoding="utf-8")
+    try:
+        self.history_file.write_text(json.dumps(history, indent=2), encoding="utf-8")
+    except OSError as e:
+        console.print(f"[red]Warning: Failed to save history: {e}[/red]")
```
617-619: Name extraction may be inaccurate for hyphenated software names.

The logic path.name.split("-")[0] assumes the pattern name-version, which works for nginx-1.25.3 but extracts just my instead of my-cool-app for my-cool-app-1.0.

This is a heuristic and may not always work perfectly. Consider documenting the limitation or using a more sophisticated version-detection regex if accuracy is important:

```python
# Alternative approach - split on version-like patterns
import re

dir_name = re.split(r"-\d", path.name)[0]  # Split on hyphen followed by digit
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/cli.py
- cortex/tarball_helper.py
- docs/TARBALL_HELPER.md
- tests/test_tarball_helper.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_tarball_helper.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/tarball_helper.py
- cortex/cli.py
🧬 Code graph analysis (2)
cortex/tarball_helper.py (1)
cortex/branding.py (1)
- cx_header (82-88)
cortex/cli.py (1)
cortex/tarball_helper.py (5)
- run_analyze_command (596-661)
- run_cleanup_command (713-721)
- run_install_deps_command (664-684)
- run_list_command (687-710)
- run_track_command (724-740)
🪛 GitHub Actions: CI
cortex/tarball_helper.py
[error] 51-51: ruff check failed: UP045 Use X | None for type annotations.
🪛 GitHub Check: lint
cortex/tarball_helper.py
[failure] 466-466: Ruff (UP045)
cortex/tarball_helper.py:466:46: UP045 Use X | None for type annotations
[failure] 51-51: Ruff (UP045)
cortex/tarball_helper.py:51:18: UP045 Use X | None for type annotations
🪛 GitHub Check: Lint
cortex/tarball_helper.py
[failure] 466-466: Ruff (UP045)
cortex/tarball_helper.py:466:46: UP045 Use X | None for type annotations
[failure] 51-51: Ruff (UP045)
cortex/tarball_helper.py:51:18: UP045 Use X | None for type annotations
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (8)
docs/TARBALL_HELPER.md (1)
1-283: LGTM! Comprehensive and well-structured documentation.

The documentation thoroughly covers the Tarball Helper feature, including usage examples, supported build systems, dependency mappings, and troubleshooting guidance. The content aligns with the implementation in cortex/tarball_helper.py.

cortex/cli.py (3)

604-685: LGTM! Well-structured command handler following existing patterns.

The tarball() method:
- Uses lazy imports for cortex.tarball_helper to avoid impacting CLI startup time
- Follows the same dispatch pattern as sandbox() and other handlers
- Includes a proper docstring with type hints
- Provides helpful fallback help text when no action is specified

2357-2399: LGTM! Argument parser configuration is complete and consistent.

The tarball subcommand parser follows the established patterns used for sandbox and other command groups. All five subcommands (analyze, install-deps, track, list, cleanup) are properly configured with appropriate arguments.

2662-2663: LGTM! Command routing correctly wired.

The tarball command is properly routed to cli.tarball(args) in the main dispatch block.

cortex/tarball_helper.py (4)

56-93: LGTM! Dataclasses are well-defined.

Proper use of field(default_factory=list) for mutable default values, and comprehensive docstrings for all attributes.

167-198: LGTM! Initialization and build system detection are sound.

The __init__ safely creates the config directory, and detect_build_system properly validates the input and checks build files in a sensible priority order.

200-242: LGTM! Analysis flow is well-structured.

The analyze method properly dispatches to build-system-specific analyzers. Note that BuildSystem.MAKE intentionally has no specialized analyzer since Makefiles don't have standardized dependency declarations.

724-740: LGTM! Track command is straightforward and correct.

Properly creates a ManualInstall record with the current timestamp and delegates to track_installation.
Anshgrover23
left a comment
@jeremylongshore Your SonarQube Cloud Quality Gate is failing; kindly address those issues. Also address the CodeRabbit comments that are still open.
Adding myself to enable CLA checks on PRs cortexlinux#557 and cortexlinux#558. Co-authored-by: jeremylongshore <jeremylongshore@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Adding myself to enable CLA checks on PRs cortexlinux#557 and cortexlinux#558.
Security fixes (CRITICAL):
- Add re.escape() for apt-cache regex injection (CVE-worthy)
- Add timeouts to all subprocess calls (APT_CACHE_TIMEOUT=10s)
- Change errors='ignore' to errors='replace' for file reads
- Fix race condition in cleanup_installation (reload before save)
Security fixes (HIGH):
- Add prefix path validation (VALID_PREFIX_PATTERNS)
- Add MAX_FILE_SIZE (5MB) limit for file reads
- Limit regex repetitions to prevent catastrophic backtracking
- Add PermissionError handling with helpful messages
- Check shutil.which("apt-get") before calling
Quality improvements:
- Add HISTORY_SCHEMA_VERSION for data migration
- Deduplicate dependencies using seen_packages set
- Add safer build command defaults (--prefix, --user, venv)
- Detect tarball files and suggest extraction
- Add progress indicators for slow operations
Tests:
- Update test assertions for new build commands
- Add test_llm_router_unit.py (17 mock-based tests)
- All 1163 tests pass, 61.93% coverage
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Apply black formatting to tarball_helper.py and test_tarball_helper.py to comply with CI lint requirements. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@AAR-2026-01-12.md`:
- Around line 188-190: The report header contains an un-executed shell
substitution "2026-01-12T$(date +%H:%M:%S)"; locate that exact string in
AAR-2026-01-12.md (the report timestamp line) and replace the "$(date
+%H:%M:%S)" portion with the actual generation time (e.g., 2026-01-12T14:23:05)
or remove the time fragment entirely so the line reads a complete static
timestamp.
- Around line 29-63: The AAR file AAR-2026-01-12.md contains detailed
vulnerability disclosures (lines describing systemd_helper.py:464,
tarball_helper.py:475/485/249/294/339/564 and exact fixes) that violate the repo
SECURITY.md; remove the detailed vulnerability descriptions and either replace
them with a brief summary statement like "5 CRITICAL issues in systemd
integration and dependency handling—all fixed and tested" or, alternatively,
update SECURITY.md to explicitly permit post-fix public disclosures and state
the allowed level of detail; ensure you edit AAR-2026-01-12.md to strip specific
file/line references and exploit details and, if choosing the policy change
route, add a clear clause in SECURITY.md authorizing publishing fixed
vulnerabilities and the rationale so future contributors know the policy.
In `@cortex/cli.py`:
- Around line 631-668: The install-deps and cleanup branches currently pass
dry_run=getattr(args, "dry_run", False) which makes destructive behavior the
default; change both to use an inverted execute flag so dry-run is the default
by passing dry_run=not getattr(args, "execute", False). Update the
"install-deps" branch where run_install_deps_command(...) is called and the
"cleanup" branch where run_cleanup_command(...) is called to use dry_run=not
getattr(args, "execute", False) and ensure any CLI option names are switched
from --dry-run to --execute elsewhere so the code reads dry_run=not
getattr(args, "execute", False).
In `@cortex/tarball_helper.py`:
- Around line 622-752: track_installation and cleanup_installation currently
only update manual_builds.json and do not record actions in the centralized
history managed by _load_history/_save_history
(history_file/HISTORY_SCHEMA_VERSION); update both functions to load the history
via _load_history(), append an audit/event entry (e.g., add to a
history["audit"] or history["events"] list) containing at least operation
("install"/"cleanup"), name, timestamp (use datetime.utcnow().isoformat()), and
relevant details (packages_installed, files_installed, dry_run flag, package
removal decision), then persist the updated history by calling
_save_history(history); for cleanup_installation also log the decision to remove
packages and the final removal result so every tarball install/cleanup/track is
recorded in the history.db.
- Around line 450-458: In _analyze_python update the Dependency for
"python3-dev" so its dep_type is "package" instead of "tool" (i.e., change
Dependency(name="python3-dev", dep_type="tool", apt_package="python3-dev") to
use dep_type="package"); this ensures _check_installed() treats it as an apt
package rather than using shutil.which() and avoids false missing reports.
- Around line 532-569: Update install_dependencies to default to dry_run=True,
and move the shutil.which("apt-get") check to after the dry-run branch so dry
runs don’t require apt-get; before actually invoking apt-get prompt the user
with Confirm.ask() (same UX as cleanup_installation) and remove the "-y" flag
from the apt-get install command so installs require explicit confirmation; also
change cleanup_installation’s signature default to dry_run=True if not already,
and update CLI callsites for the "install-deps" and "cleanup" commands to use
dry_run=getattr(args, "dry_run", True) so both commands default to dry-run mode.
In `@tests/test_tarball_helper.py`:
- Around line 264-286: The tests that pass a non-empty package list should mock
shutil.which("apt-get") so the TarballHelper.install_dependencies flow doesn't
early-return on non-Debian CI; update test_install_deps_dry_run,
test_install_deps_success, and test_install_deps_failure to patch shutil.which
(e.g., patch("shutil.which", return_value="/usr/bin/apt-get") or similar) around
the call to helper.install_dependencies so the apt-get existence check in
TarballHelper.install_dependencies succeeds, leaving the subprocess.run mocks
and assertions unchanged.
♻️ Duplicate comments (1)
cortex/tarball_helper.py (1)
340-355: CMake dependency regex misses common package names and multi-module cases.
find_package() only captures \w+, so packages like gtk+-3.0 or Qt5-Core are skipped; pkg_check_modules() also only captures a single module token. This can lead to incomplete dependency detection.

🔧 Proposed fix

```diff
-for match in re.finditer(r"find_package\s*\(\s*(\w+)", content, re.IGNORECASE):
+for match in re.finditer(
+    r"find_package\s*\(\s*([^\s\)]+)", content, re.IGNORECASE
+):
     pkg_name = match.group(1).lower()
     apt_pkg = self.PKGCONFIG_PACKAGES.get(pkg_name)
     analysis.dependencies.append(
         Dependency(name=pkg_name, dep_type="package", apt_package=apt_pkg)
     )
```

Consider also extracting all module tokens from pkg_check_modules(...) so multiple packages are not dropped.
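A possible shape for that multi-module extraction, as a hedged sketch (the keyword set and version-stripping are assumptions about real-world pkg_check_modules usage; Dependency, analysis, and PKGCONFIG_PACKAGES come from the surrounding analyzer):

```python
for match in re.finditer(r"pkg_check_modules\s*\(([^)]*)\)", content, re.IGNORECASE):
    tokens = match.group(1).split()
    # tokens[0] is the CMake prefix variable; the rest are pkg-config
    # modules, possibly interleaved with CMake keywords and version
    # constraints such as glib-2.0>=2.56.
    for token in tokens[1:]:
        if token.upper() in {"REQUIRED", "QUIET", "IMPORTED_TARGET", "GLOBAL"}:
            continue
        module = re.split(r"[<>=]", token)[0].lower()
        analysis.dependencies.append(
            Dependency(
                name=module,
                dep_type="pkg-config",
                apt_package=self.PKGCONFIG_PACKAGES.get(module),
            )
        )
```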
🧹 Nitpick comments (1)
AAR-2026-01-12.md (1)
115-132: Test results validate the fixes.

The test results demonstrate that all 1081 tests pass after the security fixes, with coverage meeting requirements. The documentation of test assertion updates (lines 126-131) is helpful for understanding the impact of safer build defaults.
📝 Optional: Add language tag to code block
The code block at lines 120-123 should specify a language for better rendering:
````diff
-```
+```text
 ======================== 1081 passed, 8 skipped in 173.29s ========================
 Coverage: 59.57% (Required: 55%)
 ```
````

📜 Review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📥 Commits
Reviewing files that changed from the base of the PR and between 2b20c2a1ba2546c2054df7e81f912c6db37d29a5 and d6b4f41810c8dcfcbaf684a24e0b46d12a4c61b0.

📒 Files selected for processing (6)
- AAR-2026-01-12.md
- REVIEW-ISSUES.md
- cortex/cli.py
- cortex/tarball_helper.py
- tests/test_llm_router_unit.py
- tests/test_tarball_helper.py

🧰 Additional context used

📓 Path-based instructions (2)
**/*.py (CodeRabbit inference engine, AGENTS.md): Follow PEP 8 style guide for Python code; include type hints in Python code; add docstrings for all public APIs; use dry-run mode by default for all installation operations; do not use silent sudo - require explicit user confirmation for privilege escalation; implement Firejail sandboxing for execution of untrusted code; log all operations to ~/.cortex/history.db for audit purposes.
Files: tests/test_tarball_helper.py, cortex/cli.py, tests/test_llm_router_unit.py, cortex/tarball_helper.py

tests/**/*.py (CodeRabbit inference engine, AGENTS.md): Maintain test coverage above 80% for pull requests.
Files: tests/test_tarball_helper.py, tests/test_llm_router_unit.py

🧬 Code graph analysis (3)
- tests/test_tarball_helper.py → cortex/tarball_helper.py: BuildAnalysis (76-91), BuildSystem (45-53), Dependency (57-72), ManualInstall (119-144), TarballHelper (147-752), run_analyze_command (755-849), run_cleanup_command (909-917), run_install_deps_command (852-880), run_list_command (883-906), run_track_command (920-936), detect_build_system (226-256), analyze (258-306), install_dependencies (532-569), find_alternative (571-620), track_installation (622-641), list_installations (643-670), cleanup_installation (672-730), _analyze_autotools (371-424), _load_history (732-746), _analyze_cmake (330-369), _analyze_meson (426-448)
- cortex/cli.py → cortex/tarball_helper.py: run_analyze_command (755-849), run_cleanup_command (909-917), run_install_deps_command (852-880), run_list_command (883-906), run_track_command (920-936)
- tests/test_llm_router_unit.py → cortex/llm_router.py: LLMProvider (45-50), LLMResponse (54-63), LLMRouter (76-846), RoutingDecision (67-73), TaskType (32-42)

🪛 LanguageTool (AAR-2026-01-12.md)
- ~12: Consider using a different verb for a more formal wording ("fixed and validated") (FIX_RESOLVE)
- ~67, ~85, ~108: If this is a compound adjective that modifies the following noun, use a hyphen (HIGH/MEDIUM/LOW Priority Issues) (EN_COMPOUND_ADJECTIVE_INTERNAL)

🪛 markdownlint-cli2 (0.18.1) (AAR-2026-01-12.md)
- 120-120: Fenced code blocks should have a language specified (MD040, fenced-code-language)

🔇 Additional comments (8)
- REVIEW-ISSUES.md 1-231: Clear, well-structured review notes. The issue list is organized and actionable; no changes needed.
- tests/test_llm_router_unit.py 1-318: Solid unit coverage for router behavior. Mocks and edge-case coverage look good.
- cortex/tarball_helper.py 25-329: Good safety scaffolding and analysis flow. Size limits, subprocess timeouts, and missing-package dedupe are solid and improve resilience.
- tests/test_tarball_helper.py 27-260: Nice breadth of coverage for build-system detection and analysis. The tests hit the key analyzers and command generators well.
- AAR-2026-01-12.md 1-22: Good transparency in documenting the review process. The executive summary clearly shows the scope of security improvements with concrete before/after metrics, and the status declaration "COMPLETE - All issues fixed" is important context for the detailed disclosures that follow.
- AAR-2026-01-12.md 67-113: Well-structured issue documentation. The HIGH/MEDIUM/LOW categorization is clear and appropriate, and the table format makes fixes easy to scan.
- AAR-2026-01-12.md 134-161: Clear summary of security improvements. The files-modified table and hardening summary communicate scope and impact without excessive technical detail.
- AAR-2026-01-12.md 163-186: Practical recommendations for continued improvement, with a constants appendix (lines 174-186) covering the timeout and validation values introduced. The suggestion to consider checkinstall as a default (line 168) is particularly valuable for tracking manual installations.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
#### 1. Shell Injection in systemd Environment Variables (systemd_helper.py:464)
**Risk:** Remote code execution via malicious environment variable values
**Root Cause:** Missing `$`, backtick, and newline escaping in Environment= values
**Fix Applied:**
```python
escaped_value = value.replace("\n", " ")
escaped_value = escaped_value.replace("\\", "\\\\")
escaped_value = escaped_value.replace("$", "\\$")
escaped_value = escaped_value.replace("`", "\\`")
escaped_value = escaped_value.replace('"', '\\"')
```

#### 2. Command Injection via apt-cache search (tarball_helper.py:475)
**Risk:** ReDoS, information disclosure via malicious directory names
**Root Cause:** Directory name used as regex without escaping
**Fix Applied:**
```python
escaped_name = re.escape(name)
result = subprocess.run(["apt-cache", "search", f"^{escaped_name}$"], ...)
```

#### 3. No Timeout on apt-cache Calls (tarball_helper.py:475, 485)
**Risk:** Indefinite hang, resource exhaustion
**Root Cause:** subprocess.run() calls without timeout parameter
**Fix Applied:** Added `timeout=APT_CACHE_TIMEOUT` (10s) to all apt-cache calls

#### 4. Silent Data Corruption with errors='ignore' (tarball_helper.py:249, 294, 339)
**Risk:** Parsing failures, incorrect dependency detection
**Root Cause:** Non-UTF8 bytes silently dropped instead of replaced
**Fix Applied:** Changed `errors="ignore"` to `errors="replace"` throughout

#### 5. Race Condition in cleanup_installation (tarball_helper.py:564)
**Risk:** Data loss, concurrent modification issues
**Root Cause:** History modified between load and save
**Fix Applied:** Reload history immediately before saving
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for security-related documentation and policies
echo "=== Searching for security documentation files ==="
fd -i -e md security vulnerability || true
echo ""
echo "=== Checking for SECURITY.md or similar policies ==="
ls -la SECURITY.md SECURITY.txt .security* 2>/dev/null || echo "No SECURITY files found"
echo ""
echo "=== Examining the reviewed file context ==="
head -30 AAR-2026-01-12.md 2>/dev/null || echo "File not found"
echo ""
echo "=== Checking repository top-level documentation ==="
git ls-files | grep -i -E '(security|vulnerability|disclosure|readme)' | head -20 || true
```

Repository: cortexlinux/cortex
Length of output: 1749
🏁 Script executed:
```bash
#!/bin/bash
# Read the SECURITY.md file to understand security policies
echo "=== SECURITY.md Contents ==="
cat SECURITY.md
echo ""
echo "=== Checking git history for AAR file ==="
git log --oneline AAR-2026-01-12.md | head -5 || echo "File may be new"
echo ""
echo "=== Checking file permissions and publication status ==="
git ls-files | grep -E 'AAR.*md' || echo "File not tracked in git"
echo ""
echo "=== Examining README for security disclosure guidance ==="
grep -i -A 10 -B 2 "security\|vulnerability\|disclosure" README.md || echo "No security guidance in README"
```

Repository: cortexlinux/cortex
Length of output: 3292
This detailed vulnerability documentation violates the established security policy.
The repository's SECURITY.md explicitly states "DO NOT open a public GitHub issue for security vulnerabilities" and directs all security disclosures to security@cortexlinux.com. However, this AAR file—committed to the public repository—documents 5 CRITICAL vulnerabilities with specific attack vectors, exploitation patterns, and vulnerable code locations.
Either:
- Remove detailed vulnerability information and replace with a summary (e.g., "5 CRITICAL issues in systemd integration and dependency handling—all fixed and tested"), or
- Clarify in SECURITY.md that post-fix security documentation is authorized and explain the disclosure rationale
If transparency about security fixes is the intent, the policy should explicitly authorize this practice to avoid confusion for future contributors.
🤖 Prompt for AI Agents
In `@AAR-2026-01-12.md` around lines 29 - 63, The AAR file AAR-2026-01-12.md
contains detailed vulnerability disclosures (lines describing
systemd_helper.py:464, tarball_helper.py:475/485/249/294/339/564 and exact
fixes) that violate the repo SECURITY.md; remove the detailed vulnerability
descriptions and either replace them with a brief summary statement like "5
CRITICAL issues in systemd integration and dependency handling—all fixed and
tested" or, alternatively, update SECURITY.md to explicitly permit post-fix
public disclosures and state the allowed level of detail; ensure you edit
AAR-2026-01-12.md to strip specific file/line references and exploit details
and, if choosing the policy change route, add a clear clause in SECURITY.md
authorizing publishing fixed vulnerabilities and the rationale so future
contributors know the policy.
---

**Report Generated:** 2026-01-12T$(date +%H:%M:%S)
Complete the report timestamp.
Line 190 contains an un-executed bash command substitution $(date +%H:%M:%S) that should have been filled in with the actual time the report was generated.
📝 Suggested fix
Replace the bash command with an actual timestamp, or remove the partial timestamp altogether:
```diff
-**Report Generated:** 2026-01-12T$(date +%H:%M:%S)
+**Report Generated:** 2026-01-12T14:30:00
```

Or simply:

```diff
-**Report Generated:** 2026-01-12T$(date +%H:%M:%S)
+**Report Generated:** 2026-01-12
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
 ---

-**Report Generated:** 2026-01-12T$(date +%H:%M:%S)
+**Report Generated:** 2026-01-12
```
🤖 Prompt for AI Agents
In `@AAR-2026-01-12.md` around lines 188 - 190, The report header contains an
un-executed shell substitution "2026-01-12T$(date +%H:%M:%S)"; locate that exact
string in AAR-2026-01-12.md (the report timestamp line) and replace the "$(date
+%H:%M:%S)" portion with the actual generation time (e.g., 2026-01-12T14:23:05)
or remove the time fragment entirely so the line reads a complete static
timestamp.
```python
elif action == "install-deps":
    try:
        run_install_deps_command(
            args.source_dir,
            dry_run=getattr(args, "dry_run", False),
        )
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "track":
    try:
        packages = []
        if hasattr(args, "packages") and args.packages:
            # Strip whitespace from each package name
            packages = [p.strip() for p in args.packages.split(",") if p.strip()]
        run_track_command(args.name, args.source_dir, packages)
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "list":
    try:
        run_list_command()
        return 0
    except Exception as e:
        console.print(f"[red]Error: {e}[/red]")
        return 1

elif action == "cleanup":
    try:
        run_cleanup_command(
            args.name,
            dry_run=getattr(args, "dry_run", False),
        )
        return 0
```
🧩 Analysis chain
🏁 Script executed:
```bash
wc -l cortex/cli.py
```

Repository: cortexlinux/cortex
Length of output: 80
🏁 Script executed:
```bash
sed -n '620,680p' cortex/cli.py
```

Repository: cortexlinux/cortex
Length of output: 2235
🏁 Script executed:
```bash
sed -n '2350,2410p' cortex/cli.py
```

Repository: cortexlinux/cortex
Length of output: 2898
🏁 Script executed:
```bash
# Find argument parser definitions for tarball subcommands
rg -n "tarball.*parser|--dry-run|--execute" cortex/cli.py | head -50
```

Repository: cortexlinux/cortex
Length of output: 2972
Tarball install-deps and cleanup must default to dry-run mode per guidelines.
The --dry-run flag currently defaults to false, so real changes run unless the user explicitly requests a dry run. Per coding guidelines, dry-run mode must be the default safe behavior. Switch to --execute flags and invert the logic to dry_run=not getattr(args, "execute", False) to align with project standards and safer defaults.
🔧 Proposed fix

```diff
-            run_install_deps_command(
-                args.source_dir,
-                dry_run=getattr(args, "dry_run", False),
-            )
+            run_install_deps_command(
+                args.source_dir,
+                dry_run=not getattr(args, "execute", False),
+            )
@@
-            run_cleanup_command(
-                args.name,
-                dry_run=getattr(args, "dry_run", False),
-            )
+            run_cleanup_command(
+                args.name,
+                dry_run=not getattr(args, "execute", False),
+            )
@@
-    tarball_deps_parser.add_argument(
-        "--dry-run", action="store_true", help="Show what would be installed"
-    )
+    tarball_deps_parser.add_argument(
+        "--execute", action="store_true", help="Install packages (default: dry-run)"
+    )
@@
-    tarball_cleanup_parser.add_argument(
-        "--dry-run", action="store_true", help="Show what would be removed"
-    )
+    tarball_cleanup_parser.add_argument(
+        "--execute",
+        action="store_true",
+        help="Remove packages/history (default: dry-run)",
+    )
```

🤖 Prompt for AI Agents
In `@cortex/cli.py` around lines 631 - 668, The install-deps and cleanup branches
currently pass dry_run=getattr(args, "dry_run", False) which makes destructive
behavior the default; change both to use an inverted execute flag so dry-run is
the default by passing dry_run=not getattr(args, "execute", False). Update the
"install-deps" branch where run_install_deps_command(...) is called and the
"cleanup" branch where run_cleanup_command(...) is called to use dry_run=not
getattr(args, "execute", False) and ensure any CLI option names are switched
from --dry-run to --execute elsewhere so the code reads dry_run=not
getattr(args, "execute", False).
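To make the suggested behavior concrete, here is a minimal, self-contained sketch of the --execute wiring, assuming argparse subparsers like the ones in the diff above (parser names simplified; not the PR's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog="cortex")
subparsers = parser.add_subparsers(dest="action")

deps_parser = subparsers.add_parser("install-deps")
deps_parser.add_argument("source_dir")
# Safe by default: nothing is installed unless --execute is passed.
deps_parser.add_argument(
    "--execute", action="store_true", help="Install packages (default: dry-run)"
)

args = parser.parse_args(["install-deps", "/tmp/src"])
dry_run = not getattr(args, "execute", False)
print(dry_run)  # True: without --execute this stays a dry run
```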
```python
def _analyze_python(self, source_dir: Path, analysis: BuildAnalysis) -> None:
    """Analyze Python build files for dependencies."""
    # Add python dev package
    analysis.dependencies.append(
        Dependency(name="python3-dev", dep_type="tool", apt_package="python3-dev")
    )
    analysis.dependencies.append(
        Dependency(name="pip", dep_type="tool", apt_package="python3-pip")
    )
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, locate and read the file around the specified lines
cat -n cortex/tarball_helper.py | sed -n '440,470p'
```

Repository: cortexlinux/cortex
Length of output: 1682

🏁 Script executed:

```shell
# Find the Dependency class definition
rg -A 10 "class Dependency" --type py
```

Repository: cortexlinux/cortex
Length of output: 3904

🏁 Script executed:

```shell
# Find the _check_installed method to understand dep_type handling
rg -B 2 -A 15 "def _check_installed" --type py
```

Repository: cortexlinux/cortex
Length of output: 1315

🏁 Script executed:

```shell
# Get the full _check_installed method
rg -A 30 "def _check_installed" cortex/tarball_helper.py
```

Repository: cortexlinux/cortex
Length of output: 1561

🏁 Script executed:

```shell
# Check what dep_type values are used in the file
rg 'dep_type="[^"]*"' cortex/tarball_helper.py | sort -u
```

Repository: cortexlinux/cortex
Length of output: 1174

🏁 Script executed:

```shell
# Check the Dependency dataclass definition in tarball_helper.py
rg -B 5 -A 20 "@dataclass" cortex/tarball_helper.py | grep -A 20 "class Dependency"
```

Repository: cortexlinux/cortex
Length of output: 692
Change python3-dev from dep_type="tool" to dep_type="package"
python3-dev is an apt package, not a PATH tool. With dep_type="tool", _check_installed() uses shutil.which() which will fail to find it, causing the dependency check to always report it as missing even when installed.
Proposed fix

```diff
     analysis.dependencies.append(
-        Dependency(name="python3-dev", dep_type="tool", apt_package="python3-dev")
+        Dependency(name="python3-dev", dep_type="package", apt_package="python3-dev")
     )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _analyze_python(self, source_dir: Path, analysis: BuildAnalysis) -> None:
    """Analyze Python build files for dependencies."""
    # Add python dev package
    analysis.dependencies.append(
        Dependency(name="python3-dev", dep_type="package", apt_package="python3-dev")
    )
    analysis.dependencies.append(
        Dependency(name="pip", dep_type="tool", apt_package="python3-pip")
    )
```
🤖 Prompt for AI Agents
In `@cortex/tarball_helper.py` around lines 450 - 458, In _analyze_python update
the Dependency for "python3-dev" so its dep_type is "package" instead of "tool"
(i.e., change Dependency(name="python3-dev", dep_type="tool",
apt_package="python3-dev") to use dep_type="package"); this ensures
_check_installed() treats it as an apt package rather than using shutil.which()
and avoids false missing reports.
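For context, the tool/package split only works if the installed-check branches on dep_type. A hedged sketch of that idea, not the actual _check_installed() implementation (the dpkg -s probe is an assumption):

```python
import shutil
import subprocess

def check_installed(name: str, dep_type: str) -> bool:
    """Tools resolve via a PATH lookup; apt packages need a package-db query."""
    if dep_type == "tool":
        return shutil.which(name) is not None
    # Assumption: packages are probed with dpkg -s (exit code 0 when installed).
    result = subprocess.run(["dpkg", "-s", name], capture_output=True)
    return result.returncode == 0

# python3-dev installs headers, not a PATH binary, so a "tool" lookup
# would report it missing even when the package is present.
print(check_installed("gcc", "tool"))
```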
```python
def install_dependencies(self, packages: list[str], dry_run: bool = False) -> bool:
    """Install missing apt packages.

    Args:
        packages: List of apt packages to install.
        dry_run: If True, just show what would be installed.

    Returns:
        True if installation succeeded (or dry_run), False otherwise.
    """
    if not packages:
        console.print("[green]No packages to install.[/green]")
        return True

    # Check if apt-get is available
    if not shutil.which("apt-get"):
        console.print(
            "[red]Error: apt-get not found. "
            "This tool requires Debian/Ubuntu or a compatible distribution.[/red]"
        )
        return False

    if dry_run:
        console.print("[bold]Would install:[/bold]")
        for pkg in packages:
            console.print(f" - {pkg}")
        return True

    console.print(f"[bold]Installing {len(packages)} packages...[/bold]")
    cmd = ["sudo", "apt-get", "install", "-y"] + packages

    result = subprocess.run(cmd)
    if result.returncode != 0:
        console.print(
            "[red]Package installation failed. Check the output above for details.[/red]"
        )
        return False
    return True
```
🧩 Analysis chain

🏁 Script executed:

```shell
cd /repo && find . -name "tarball_helper.py" -type f | head -20
```

Repository: cortexlinux/cortex
Length of output: 117

🏁 Script executed:

```shell
cd /repo && rg "def install_dependencies" -A 50 -B 2
```

Repository: cortexlinux/cortex
Length of output: 117

🏁 Script executed:

```shell
cd /repo && rg "cleanup_installation" -B 2 -A 20
```

Repository: cortexlinux/cortex
Length of output: 117

🏁 Script executed:

```shell
cd /repo && rg "Confirm\." --type py | head -20
```

Repository: cortexlinux/cortex
Length of output: 117

🏁 Script executed:

```shell
cd /repo && rg "def install_dependencies|dry_run.*=|install.*--" cortex/ --type py -B 2 -A 2
```

Repository: cortexlinux/cortex
Length of output: 117

🏁 Script executed:

```shell
pwd && git ls-files | grep -E "tarball_helper|AGENTS" | head -20
```

Repository: cortexlinux/cortex
Length of output: 144

🏁 Script executed:

```shell
git ls-files "**/*.py" | grep -i tarball | head -10
```

Repository: cortexlinux/cortex
Length of output: 115

🏁 Script executed:

```shell
rg "def install_dependencies" -A 50 -B 2 -t py
```

Repository: cortexlinux/cortex
Length of output: 3180

🏁 Script executed:

```shell
rg "def cleanup_installation" -A 40 -B 2 -t py
```

Repository: cortexlinux/cortex
Length of output: 2693

🏁 Script executed:

```shell
rg "Confirm\." -t py -B 2 -A 2
```

Repository: cortexlinux/cortex
Length of output: 1207

🏁 Script executed:

```shell
rg "install_dependencies\(" -B 3 -A 3 -t py
```

Repository: cortexlinux/cortex
Length of output: 2515

🏁 Script executed:

```shell
rg "from rich" -t py | grep -i confirm
```

Repository: cortexlinux/cortex
Length of output: 126

🏁 Script executed:

```shell
rg "dry_run" cortex/tarball_helper.py -B 2 -A 2 | head -50
```

Repository: cortexlinux/cortex
Length of output: 1457

🏁 Script executed:

```shell
rg "def run_" cortex/tarball_helper.py -A 20 | grep -A 20 "run_list"
```

Repository: cortexlinux/cortex
Length of output: 735

🏁 Script executed:

```shell
git ls-files "**/*.py" | xargs rg "install_dependencies|cleanup_installation" --type py | grep -E "(argparse|click|parser|--dry)" | head -20
```

Repository: cortexlinux/cortex
Length of output: 44

🏁 Script executed:

```shell
rg "argparse|add_argument.*dry" -t py -B 2 -A 2
```

Repository: cortexlinux/cortex
Length of output: 17986

🏁 Script executed:

```shell
rg "run_install_deps_command|run_cleanup_command" -B 5 -A 15 -t py
```

Repository: cortexlinux/cortex
Length of output: 12906

🏁 Script executed:

```shell
cat cortex/tarball_helper.py | tail -100
```

Repository: cortexlinux/cortex
Length of output: 3064

🏁 Script executed:

```shell
rg "install-deps" cortex/cli.py -B 5 -A 20
```

Repository: cortexlinux/cortex
Length of output: 3312

🏁 Script executed:

```shell
rg "cleanup" cortex/cli.py -B 5 -A 20 | grep -A 20 "tarball_action.*cleanup"
```

Repository: cortexlinux/cortex
Length of output: 44

🏁 Script executed:

```shell
rg "tarball_cleanup_parser|cleanup.*add_parser" cortex/cli.py -A 5
```

Repository: cortexlinux/cortex
Length of output: 847
Change dry-run defaults to true and add user confirmation for sudo installs.
install_dependencies defaults to dry_run=False and runs sudo apt-get install -y without confirmation, violating both the dry-run-first principle and the requirement for explicit approval of privilege escalation. It also checks for apt-get even during a dry run, which is unnecessary.
cleanup_installation also defaults to dry_run=False but already includes Confirm.ask() for package removal. Both functions and their CLI entry points should default to dry-run mode.
Update:
- Change `install_dependencies(packages: list[str], dry_run: bool = False)` → `dry_run: bool = True`
- Move the apt-get check after the dry-run return to avoid unnecessary checks
- Add `Confirm.ask()` before running the sudo install (matching the pattern already used in `cleanup_installation`)
- Remove the `-y` flag from apt-get install to prevent silent installs
- Change CLI defaults: `dry_run=getattr(args, "dry_run", False)` → `dry_run=getattr(args, "dry_run", True)` for both `install-deps` and `cleanup` commands
🤖 Prompt for AI Agents
In `@cortex/tarball_helper.py` around lines 532 - 569, Update install_dependencies
to default to dry_run=True, and move the shutil.which("apt-get") check to after
the dry-run branch so dry runs don’t require apt-get; before actually invoking
apt-get prompt the user with Confirm.ask() (same UX as cleanup_installation) and
remove the "-y" flag from the apt-get install command so installs require
explicit confirmation; also change cleanup_installation’s signature default to
dry_run=True if not already, and update CLI callsites for the "install-deps" and
"cleanup" commands to use dry_run=getattr(args, "dry_run", True) so both
commands default to dry-run mode.
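Combining those points, a sketch of what the hardened method might look like, assuming the module's existing console, Confirm, shutil, and subprocess imports (illustrative, not the PR's code):

```python
def install_dependencies(self, packages: list[str], dry_run: bool = True) -> bool:
    """Install apt packages; dry-run unless explicitly executed."""
    if not packages:
        console.print("[green]No packages to install.[/green]")
        return True

    if dry_run:
        console.print("[bold]Would install:[/bold]")
        for pkg in packages:
            console.print(f" - {pkg}")
        return True

    # Only require apt-get when an install will actually happen.
    if not shutil.which("apt-get"):
        console.print("[red]Error: apt-get not found.[/red]")
        return False

    # Explicit approval before privilege escalation; dropping -y also
    # lets apt-get show its own summary and prompt.
    if not Confirm.ask(f"Install {len(packages)} packages via sudo apt-get?"):
        return False
    result = subprocess.run(["sudo", "apt-get", "install"] + packages)
    return result.returncode == 0
```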
```python
def track_installation(self, install: ManualInstall) -> None:
    """Track a manual installation.

    Args:
        install: ManualInstall record to save.
    """
    history = self._load_history()

    # Ensure installations dict exists
    if "installations" not in history:
        history["installations"] = {}

    history["installations"][install.name] = {
        "source_dir": install.source_dir,
        "installed_at": install.installed_at,
        "packages_installed": install.packages_installed,
        "files_installed": install.files_installed,
        "prefix": install.prefix,
    }
    self._save_history(history)

def list_installations(self) -> list[ManualInstall]:
    """List all tracked manual installations.

    Returns:
        List of ManualInstall records.
    """
    history = self._load_history()
    installations_data = history.get("installations", history)

    # Handle old format (no wrapper) for backwards compatibility
    if "_version" in installations_data:
        installations_data = history.get("installations", {})

    installations = []
    for name, data in installations_data.items():
        if name.startswith("_"):  # Skip metadata keys
            continue
        installations.append(
            ManualInstall(
                name=name,
                source_dir=data.get("source_dir", ""),
                installed_at=data.get("installed_at", ""),
                packages_installed=data.get("packages_installed", []),
                files_installed=data.get("files_installed", []),
                prefix=data.get("prefix", "/usr/local"),
            )
        )
    return installations

def cleanup_installation(self, name: str, dry_run: bool = False) -> bool:
    """Remove a tracked manual installation.

    Args:
        name: Name of the installation to remove.
        dry_run: If True, just show what would be removed.

    Returns:
        True if cleanup succeeded, False otherwise.
    """
    history = self._load_history()
    installations = history.get("installations", history)

    # Handle old format
    if "_version" in installations:
        installations = history.get("installations", {})

    if name not in installations:
        console.print(f"[red]Installation '{name}' not found in history.[/red]")
        return False

    data = installations[name]
    packages = data.get("packages_installed", [])

    if dry_run:
        console.print(f"[bold]Would remove installation: {name}[/bold]")
        if packages:
            console.print("Packages that were installed:")
            for pkg in packages:
                console.print(f" - {pkg}")
        return True

    # Handle package removal first (before removing from history)
    if packages:
        remove_pkgs = Confirm.ask(
            f"Remove {len(packages)} packages that were installed for this build?"
        )
        if remove_pkgs:
            if shutil.which("apt-get"):
                cmd = ["sudo", "apt-get", "remove", "-y"] + packages
                subprocess.run(cmd)
            else:
                console.print("[yellow]apt-get not found, skipping package removal[/yellow]")

    # Reload history to avoid race condition, then remove
    history = self._load_history()
    installations = history.get("installations", history)
    if "_version" in installations:
        installations = history.get("installations", {})

    if name in installations:
        del installations[name]
        if "installations" in history:
            history["installations"] = installations
        self._save_history(history)

    console.print(f"[green]Removed '{name}' from tracking.[/green]")

    return True

def _load_history(self) -> dict:
    """Load installation history from file."""
    if not self.history_file.exists():
        return {"_version": HISTORY_SCHEMA_VERSION, "installations": {}}
    try:
        data = json.loads(self.history_file.read_text(encoding="utf-8"))
        # Migrate old format if needed
        if "_version" not in data:
            return {
                "_version": HISTORY_SCHEMA_VERSION,
                "installations": data,
            }
        return data
    except (json.JSONDecodeError, OSError):
        return {"_version": HISTORY_SCHEMA_VERSION, "installations": {}}

def _save_history(self, history: dict) -> None:
    """Save installation history to file."""
    # Ensure version is set
    history["_version"] = HISTORY_SCHEMA_VERSION
    self.history_file.write_text(json.dumps(history, indent=2), encoding="utf-8")
```
Tarball operations aren’t audited in ~/.cortex/history.db.
Tracking and cleanup only update ~/.cortex/manual_builds.json, but installs/removals aren’t recorded in the audit log. Please log tarball install/cleanup/track operations to ~/.cortex/history.db as well. As per coding guidelines, ...
🤖 Prompt for AI Agents
In `@cortex/tarball_helper.py` around lines 622 - 752, track_installation and
cleanup_installation currently only update manual_builds.json and do not record
actions in the centralized history managed by _load_history/_save_history
(history_file/HISTORY_SCHEMA_VERSION); update both functions to load the history
via _load_history(), append an audit/event entry (e.g., add to a
history["audit"] or history["events"] list) containing at least operation
("install"/"cleanup"), name, timestamp (use datetime.utcnow().isoformat()), and
relevant details (packages_installed, files_installed, dry_run flag, package
removal decision), then persist the updated history by calling
_save_history(history); for cleanup_installation also log the decision to remove
packages and the final removal result so every tarball install/cleanup/track is
recorded in the history.db.
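A minimal sketch of the audit write the prompt describes, reusing the helper's own load/save pair; the "events" key and field names are assumptions, and whether entries ultimately belong in the SQLite-backed history.db instead is a project decision:

```python
from datetime import datetime, timezone

def _log_event(self, operation: str, name: str, **details) -> None:
    """Append an audit entry (assumed 'events' list) to the history file."""
    history = self._load_history()
    history.setdefault("events", []).append(
        {
            "operation": operation,  # "install" / "track" / "cleanup"
            "name": name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            **details,  # e.g. packages_installed=[...], dry_run=False
        }
    )
    self._save_history(history)
```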
```python
def test_install_deps_dry_run(self):
    helper = TarballHelper()
    result = helper.install_dependencies(["zlib1g-dev", "libssl-dev"], dry_run=True)
    assert result is True

def test_install_deps_empty_list(self):
    helper = TarballHelper()
    result = helper.install_dependencies([])
    assert result is True

def test_install_deps_success(self):
    helper = TarballHelper()
    mock_result = MagicMock(returncode=0)
    with patch("subprocess.run", return_value=mock_result):
        result = helper.install_dependencies(["zlib1g-dev"])
    assert result is True

def test_install_deps_failure(self):
    helper = TarballHelper()
    mock_result = MagicMock(returncode=1)
    with patch("subprocess.run", return_value=mock_result):
        result = helper.install_dependencies(["nonexistent-pkg"])
    assert result is False
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Find the TarballHelper class definition
find . -type f -name "*.py" | xargs grep -l "class TarballHelper" | head -5
```

Repository: cortexlinux/cortex
Length of output: 88

🏁 Script executed:

```shell
# Also search for install_dependencies method definition
rg "def install_dependencies" --type py -A 15
```

Repository: cortexlinux/cortex
Length of output: 1027

🏁 Script executed:

```shell
cat -n cortex/tarball_helper.py | grep -A 30 "def install_dependencies"
```

Repository: cortexlinux/cortex
Length of output: 1368

🏁 Script executed:

```shell
# Also read the test file to see current state
cat -n tests/test_tarball_helper.py | sed -n '264,286p'
```

Repository: cortexlinux/cortex
Length of output: 1138

🏁 Script executed:

```shell
head -30 cortex/tarball_helper.py | grep -E "^import|^from"
```

Repository: cortexlinux/cortex
Length of output: 498

🏁 Script executed:

```shell
head -30 tests/test_tarball_helper.py | grep -E "^import|^from"
```

Repository: cortexlinux/cortex
Length of output: 209

🏁 Script executed:

```shell
# Check if shutil is used anywhere else in the tests or tarball_helper
rg "shutil\." cortex/tarball_helper.py tests/test_tarball_helper.py
```

Repository: cortexlinux/cortex
Length of output: 1046

🏁 Script executed:

```shell
cat -n cortex/tarball_helper.py | sed -n '532,565p'
```

Repository: cortexlinux/cortex
Length of output: 1488

🏁 Script executed:

```shell
# Check if test_install_deps_dry_run would pass without shutil.which mock
# by looking at the flow: packages check -> shutil.which check -> dry_run check
rg "def test_install_deps" tests/test_tarball_helper.py -A 5
```

Repository: cortexlinux/cortex
Length of output: 979
Mock shutil.which("apt-get") in all non-empty dependency tests to prevent failures on non-Debian CI systems.
install_dependencies() checks shutil.which("apt-get") at line 547 before any subprocess.run call. Without mocking this, tests fail on non-Debian systems because the check returns early with False. This affects test_install_deps_dry_run, test_install_deps_success, and test_install_deps_failure (which all pass non-empty package lists). Only test_install_deps_empty_list is unaffected since it returns before reaching the shutil.which check.
🔧 Proposed fix

```diff
 def test_install_deps_dry_run(self):
     helper = TarballHelper()
-    result = helper.install_dependencies(["zlib1g-dev", "libssl-dev"], dry_run=True)
+    with patch("shutil.which", return_value="/usr/bin/apt-get"):
+        result = helper.install_dependencies(["zlib1g-dev", "libssl-dev"], dry_run=True)
     assert result is True

 def test_install_deps_success(self):
     helper = TarballHelper()
     mock_result = MagicMock(returncode=0)
-    with patch("subprocess.run", return_value=mock_result):
-        result = helper.install_dependencies(["zlib1g-dev"])
+    with patch("shutil.which", return_value="/usr/bin/apt-get"):
+        with patch("subprocess.run", return_value=mock_result):
+            result = helper.install_dependencies(["zlib1g-dev"])
     assert result is True

 def test_install_deps_failure(self):
     helper = TarballHelper()
     mock_result = MagicMock(returncode=1)
-    with patch("subprocess.run", return_value=mock_result):
-        result = helper.install_dependencies(["nonexistent-pkg"])
+    with patch("shutil.which", return_value="/usr/bin/apt-get"):
+        with patch("subprocess.run", return_value=mock_result):
+            result = helper.install_dependencies(["nonexistent-pkg"])
     assert result is False
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_install_deps_dry_run(self):
    helper = TarballHelper()
    with patch("shutil.which", return_value="/usr/bin/apt-get"):
        result = helper.install_dependencies(["zlib1g-dev", "libssl-dev"], dry_run=True)
    assert result is True

def test_install_deps_empty_list(self):
    helper = TarballHelper()
    result = helper.install_dependencies([])
    assert result is True

def test_install_deps_success(self):
    helper = TarballHelper()
    mock_result = MagicMock(returncode=0)
    with patch("shutil.which", return_value="/usr/bin/apt-get"):
        with patch("subprocess.run", return_value=mock_result):
            result = helper.install_dependencies(["zlib1g-dev"])
    assert result is True

def test_install_deps_failure(self):
    helper = TarballHelper()
    mock_result = MagicMock(returncode=1)
    with patch("shutil.which", return_value="/usr/bin/apt-get"):
        with patch("subprocess.run", return_value=mock_result):
            result = helper.install_dependencies(["nonexistent-pkg"])
    assert result is False
```
🤖 Prompt for AI Agents
In `@tests/test_tarball_helper.py` around lines 264 - 286, The tests that pass a
non-empty package list should mock shutil.which("apt-get") so the
TarballHelper.install_dependencies flow doesn't early-return on non-Debian CI;
update test_install_deps_dry_run, test_install_deps_success, and
test_install_deps_failure to patch shutil.which (e.g., patch("shutil.which",
return_value="/usr/bin/apt-get") or similar) around the call to
helper.install_dependencies so the apt-get existence check in
TarballHelper.install_dependencies succeeds, leaving the subprocess.run mocks
and assertions unchanged.
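If the same patch ends up in several tests, a small fixture keeps it in one place; a sketch assuming pytest's monkeypatch and the test module's existing TarballHelper import:

```python
import pytest

@pytest.fixture
def fake_apt_get(monkeypatch):
    """Pretend apt-get is on PATH so the availability check passes."""
    monkeypatch.setattr(
        "shutil.which",
        lambda cmd: "/usr/bin/apt-get" if cmd == "apt-get" else None,
    )

def test_install_deps_dry_run(fake_apt_get):
    helper = TarballHelper()
    result = helper.install_dependencies(["zlib1g-dev"], dry_run=True)
    assert result is True
```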




Summary
Implements the Tarball Helper requested in #452, providing 4 key features:
Features
- Dependency detection from `find_package()`, `pkg_check_modules()`, `dependency()`, and `AC_CHECK_*`
- Manual-install tracking in `~/.cortex/manual_builds.json`

Demo
Test Coverage
Usage
AI Disclosure
This implementation was developed with assistance from Claude (Anthropic). The AI helped with:
All code was reviewed and the tests verify correct functionality.
Fixes #452
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Tests