1.8.4 #224

base: main
Conversation
📝 Walkthrough

Version bump from v1.8.3 to v1.8.4 across build artifacts and release configurations. HTTP client timeouts increased from 15 to 60 seconds. CI workflow refactored with CodeQL upgrade, Gitleaks Docker conversion, new link validation job, and docs build job. dbt configuration updated with table materialization setting.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Codecov Report: ❌
Actionable comments posted: 3
🧹 Nitpick comments (2)
docs/Infrastructure_Guide/grafana.md (2)
14-17: Minor grammar improvement needed. Consider hyphenating "nice to have" to "nice-to-have" on line 16 for proper compound adjective formation.
🔎 Proposed fix

```diff
-but a nice to have to view more details about the database's performance. The free tier on Grafana Cloud is
+but a nice-to-have to view more details about the database's performance. The free tier on Grafana Cloud is
```
24-24: Hyphenate compound adjective. Consider hyphenating "left side menu" to "left-side menu" for consistency. This pattern appears on lines 24, 39, and 56.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
docs/assets/grafana-dashboard-example.png is excluded by !**/*.png
📒 Files selected for processing (13)
- .github/workflows/ci.yml
- .goreleaser.yml
- Dockerfile
- README.md
- card_data/pipelines/poke_cli_dbt/dbt_project.yml
- card_data/pipelines/tests/extract_sets_test.py
- cmd/card/cardinfo.go
- cmd/card/cardlist.go
- cmd/card/setslist.go
- docs/Infrastructure_Guide/grafana.md
- lychee.toml
- nfpm.yaml
- testdata/main_latest_flag.golden
🧰 Additional context used
🧬 Code graph analysis (1)
card_data/pipelines/tests/extract_sets_test.py (1)
card_data/pipelines/defs/extract/tcgdex/extract_sets.py (1)
extract_sets_data (22-56)
🪛 LanguageTool
docs/Infrastructure_Guide/grafana.md
[grammar] ~16-~16: Use a hyphen to join words.
Context: ...not required for the project, but a nice to have to view more details about the d...
(QB_NEW_EN_HYPHEN)
[grammar] ~16-~16: Use a hyphen to join words.
Context: ... required for the project, but a nice to have to view more details about the data...
(QB_NEW_EN_HYPHEN)
[grammar] ~24-~24: Use a hyphen to join words.
Context: ...point._ 1. On the homepage, on the left side menu, click on the drop-down for co...
(QB_NEW_EN_HYPHEN)
[grammar] ~39-~39: Use a hyphen to join words.
Context: ...ted via the Grafana UI._ 1. On the left side menu, click on Dashboards. 2. I...
(QB_NEW_EN_HYPHEN)
[style] ~48-~48: Using many exclamation marks might seem excessive (in this case: 6 exclamation marks for a text that’s 2659 characters long)
Context: .../assets/grafana-dashboard-example.png) !!! warning The uid for will need to...
(EN_EXCESSIVE_EXCLAMATION)
[grammar] ~56-~56: Use a hyphen to join words.
Context: ...rometheus data source. * On the left side menu, click on the Connections ...
(QB_NEW_EN_HYPHEN)
[grammar] ~57-~57: Ensure spelling is correct
Context: ...or "prom" and then click on the default Promethesus data source. * In the URL of that p...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: tests
- GitHub Check: benchmarks
🔇 Additional comments (21)
.goreleaser.yml (1)
17-17: LGTM! Version bump to v1.8.4 aligns with the PR objective and matches updates across other build artifacts.
README.md (1)
5-5: LGTM! Docker image references updated to v1.8.4, aligning with the version bump across the repository.
Also applies to: 99-99, 103-103
nfpm.yaml (1)
4-4: LGTM! Package version updated to v1.8.4, consistent with the release version bump.
Dockerfile (1)
11-11: LGTM! Build-time version flag updated to v1.8.4, matching the release configuration in .goreleaser.yml and other artifacts.

cmd/card/cardlist.go (1)
175-175: Verify the rationale for the 4x timeout increase. The HTTP client timeout increased from 15 seconds to 60 seconds. While this change is applied consistently across card data fetch operations (also in setslist.go and cardinfo.go), a 60-second timeout is quite long for an interactive CLI tool and may degrade user experience. Please confirm:
- Is there a specific performance issue with the Supabase endpoint that necessitates this increase?
- Have you considered alternative approaches such as response streaming, pagination, or query optimization?
- Should there be user feedback (progress indicator) for operations that may take this long?
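For reference, the change under discussion amounts to a one-line client setting. A minimal sketch of that shape (the package and variable names are illustrative, not taken from the repo):

```go
package card // illustrative package name

import (
	"net/http"
	"time"
)

// One client-wide deadline that covers connecting, following redirects,
// and reading the response body. The reviewed change raises this from
// 15 * time.Second to 60 * time.Second.
var cardDataClient = &http.Client{
	Timeout: 60 * time.Second,
}
```

Because http.Client.Timeout spans the entire request, including the body read, a large card-list download counts against the same 60 seconds as the initial connection, which is why slow responses motivated the increase.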
cmd/card/setslist.go (1)
125-125: Timeout increased consistently with other card data fetchers. HTTP client timeout increased from 15 to 60 seconds, matching the change in cardlist.go and cardinfo.go. See the comment on cardlist.go regarding verification of this timeout increase.

cmd/card/cardinfo.go (2)
25-25: Timeout increased consistently with other card data fetchers. HTTP client timeout increased from 15 to 60 seconds, matching changes in cardlist.go and setslist.go. See the comment on cardlist.go for verification of this timeout increase.
41-48: Decode logic refactored to avoid timeout during image processing. The image response is now read into memory (up to a 10MB limit) before decoding, preventing slow decode operations from triggering HTTP timeouts. This is a reasonable approach for a CLI tool fetching card images, as it separates network I/O concerns from image processing time. The added error handling for io.ReadAll is appropriate.
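A minimal sketch of the described pattern using only the standard library; names such as fetchCardImage and maxImageBytes are illustrative, not the repo's actual identifiers:

```go
package card // illustrative package name

import (
	"bytes"
	"fmt"
	"image"
	_ "image/jpeg" // register decoders; the formats the CLI needs may differ
	_ "image/png"
	"io"
	"net/http"
	"time"
)

const maxImageBytes = 10 << 20 // 10 MB cap described in the review

func fetchCardImage(url string) (image.Image, error) {
	client := &http.Client{Timeout: 60 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Network I/O ends here: the body is buffered in memory, so a slow
	// decode can no longer trip the HTTP client's deadline mid-stream.
	data, err := io.ReadAll(io.LimitReader(resp.Body, maxImageBytes))
	if err != nil {
		return nil, fmt.Errorf("reading image body: %w", err)
	}

	img, _, err := image.Decode(bytes.NewReader(data))
	if err != nil {
		return nil, fmt.Errorf("decoding image: %w", err)
	}
	return img, nil
}
```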
testdata/main_latest_flag.golden (1)
1-7: Golden file is correct and does not need updating. The golden file correctly shows v1.8.3, which is the actual latest release on GitHub. The --latest flag queries the GitHub API dynamically at runtime, so the golden file captures the expected output based on GitHub's current state. Although the PR bumps version references to v1.8.4 in internal build files (nfpm.yaml, Dockerfile, etc.), that version has not yet been released on GitHub. Once v1.8.4 is released, the test will automatically reflect the new version since it queries GitHub dynamically.

Likely an incorrect or invalid review comment.
card_data/pipelines/poke_cli_dbt/dbt_project.yml (3)
2-2: LGTM! Version bump is consistent. The version bump to 1.8.4 aligns with the PR objectives and matches the version updates across other files in this release.
18-21: Table materialization configured. The addition of +materialized: table sets the default materialization strategy for all models in the poke_cli_dbt project to create physical tables. This is appropriate for production data that needs to be queried frequently.
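For context, this kind of default lives under the models: block of dbt_project.yml, with the + prefix marking a config applied to the whole model subtree. A sketch of the setting being reviewed:

```yaml
# dbt_project.yml (sketch)
models:
  poke_cli_dbt:
    +materialized: table   # every model builds a physical table by default
```

Individual models can still override the default with {{ config(materialized='view') }} in their SQL if a cheaper refresh is preferable.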
23-24: Inconsistency with AI summary. The AI summary claims the on-run-end action was removed, but it's still present in the code. The code shows create_relationships() is still configured to run after dbt executions.

card_data/pipelines/tests/extract_sets_test.py (2)
12-69: Fixture structure looks good. The mock API response fixture provides comprehensive test data covering multiple series, null values, and edge cases.
72-168: Good test coverage. The tests cover important scenarios including successful extraction, empty sets, and null card counts. The assertions validate both DataFrame structure and content appropriately.
lychee.toml (1)
1-9: LGTM! Reasonable link validation configuration. The Lychee configuration is appropriate for the new link validation job. The 30-second timeout and acceptance of 429 (rate limit) responses are sensible defaults for checking documentation links.
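A sketch of what such settings typically look like in lychee.toml; the exact keys and values in this repo may differ:

```toml
# lychee.toml (sketch)
timeout = 30                    # seconds before a link check gives up
accept = ["200..=299", "429"]   # treat rate-limited (429) responses as valid
```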
.github/workflows/ci.yml (6)
34-34: LGTM! Version bump is consistent. The version bump to v1.8.4 aligns with the PR objectives and matches updates across other files.
71-82: Gitleaks refactored to Docker-based execution. The switch to running Gitleaks via a Docker container is a good improvement for consistency and reproducibility. The if: always() condition on the SARIF upload ensures results are reported even when secrets are detected.
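A hedged sketch of the pattern being described; the image tag, mount paths, and step names are assumptions, not the workflow's actual values:

```yaml
secret-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0   # full history so Gitleaks can scan all commits
    - name: Run Gitleaks via Docker
      run: |
        docker run --rm -v "$PWD:/repo" ghcr.io/gitleaks/gitleaks:latest \
          detect --source /repo --report-format sarif --report-path /repo/gitleaks.sarif
    - name: Upload SARIF
      if: always()   # upload results even when detected leaks cause a nonzero exit
      uses: github/codeql-action/upload-sarif@v4
      with:
        sarif_file: gitleaks.sarif
```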
252-273: Good addition: link validation job. The new validate-links job adds automated checking for broken links in documentation. The fail: false setting is appropriate for a newly introduced check, preventing it from blocking builds while the team addresses any existing broken links.
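The shape of such a job, sketched with lycheeverse/lychee-action; the action version and target paths are assumptions:

```yaml
validate-links:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Check links in docs
      uses: lycheeverse/lychee-action@v2
      with:
        args: --config lychee.toml docs/ README.md
        fail: false   # report broken links without failing the build
```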
275-308: Well-structured docs build job. The new build-docs-docker-image job is properly configured with (see the sketch after this list):
- Dependency on link validation ensuring docs quality
- Sparse checkout for efficiency
- Standard Docker build and export pattern
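A sketch under those assumptions; file names such as Dockerfile.docs and mkdocs.yml are hypothetical:

```yaml
build-docs-docker-image:
  needs: validate-links   # dependency on link validation
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        sparse-checkout: |   # fetch only the paths the docs image needs
          docs
          mkdocs.yml
    - name: Build and export docs image
      run: |
        docker build -f Dockerfile.docs -t poke-cli-docs:ci .
        docker save poke-cli-docs:ci -o docs-image.tar
```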
247-250: Inconsistency with AI summary. The AI summary claims emoji were removed from these messages, but both lines still contain emoji (⚠️ and ✅). The current implementation is fine and provides visual clarity.
58-60: CodeQL action v4 upgrade requires verification of SARIF compliance. The upgrade from v3 to v4 introduces a breaking change: v4 rejects uploads containing multiple SARIF runs with the same tool+category (where v3 only warned). This enforcement is effective July 22, 2025.
Verify that the results.sarif output complies with v4's requirement (each run must have a distinct tool or category). If the workflow uploads SARIF from multiple tools under the same category, the upload will fail. CodeQL-only runs are typically compatible, but confirm there are no other tools contributing to the same SARIF file, or ensure each has a distinct category.
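When several tools upload into the same workflow, giving each upload a distinct category satisfies the v4 rule. A sketch; the category value is illustrative:

```yaml
- name: Upload CodeQL results
  uses: github/codeql-action/upload-sarif@v4
  with:
    sarif_file: results.sarif
    category: codeql-analysis   # must differ from every other tool's category
```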
```python
@pytest.fixture
def mock_api_response():
```
Critical: Fixture name mismatch.
The fixture is named mock_api_response but the tests on lines 74 and 105 reference mock_series_responses. This will cause pytest to fail with a "fixture not found" error.
🔎 Proposed fix

```diff
 @pytest.fixture
-def mock_api_response():
+def mock_series_responses():
     """Sample API responses matching tcgdex series format with sets"""
```
🤖 Prompt for AI Agents
In card_data/pipelines/tests/extract_sets_test.py around lines 12 to 13, the
pytest fixture is defined as mock_api_response but tests at lines ~74 and ~105
reference mock_series_responses causing a "fixture not found" error; fix by
renaming the fixture to mock_series_responses (or update the test references to
mock_api_response) so names match, and ensure any import/usage elsewhere in the
file uses the chosen name consistently.
!!! warning

    The `uid` for will need to be updated manually if the prompt for choosing a data souce does not appear. If so, the dashboard
    will have errors on each visual. To fix this, perform the steps below.
Fix typo: "souce" → "source".
Line 51 contains a spelling error: "data souce" should be "data source".
🔎 Proposed fix

```diff
-    The `uid` for will need to be updated manually if the prompt for choosing a data souce does not appear. If so, the dashboard
+    The `uid` for will need to be updated manually if the prompt for choosing a data source does not appear. If so, the dashboard
```
🤖 Prompt for AI Agents
In docs/Infrastructure_Guide/grafana.md around lines 49 to 52, fix the spelling
typo "data souce" to "data source" in the warning paragraph; update the sentence
to read "...if the prompt for choosing a data source does not appear." and
ensure surrounding punctuation/grammar remains correct.
1. Grab the `uid` for the Prometheus data source.
    * On the left side menu, click on the **Connections** dropdown. Then, choose **Data Sources**.
    * Search for "prom" and then click on the default Promethesus data source.
    * In the URL of that page, the last section is the `uid`: `/datasources/edit/<uid>`.
Fix typo: "Promethesus" → "Prometheus".
Line 57 contains a spelling error in the product name.
🔎 Proposed fix

```diff
-    * Search for "prom" and then click on the default Promethesus data source.
+    * Search for "prom" and then click on the default Prometheus data source.
```
🤖 Prompt for AI Agents
docs/Infrastructure_Guide/grafana.md around lines 55 to 58: there's a
misspelling of "Prometheus" written as "Promethesus" on line 57; fix the typo by
replacing "Promethesus" with the correct product name "Prometheus" so the
sentence reads "click on the default Prometheus data source" and update any
other occurrences in this block if present.
Summary by CodeRabbit

- Documentation
- Improvements
- Chores