
Conversation

@digitalghost-dev (Owner) commented Dec 31, 2025

Summary by CodeRabbit

  • Documentation

    • New Grafana Cloud monitoring guide with setup instructions for tracking PostgreSQL metrics.
  • Improvements

    • Extended HTTP client timeouts for improved stability with slower connections.
    • Enhanced CI/CD pipeline with Docker-based builds and automated link validation.
    • Upgraded security scanning tools.
  • Chores

    • Version bumped to v1.8.4 across all components.


coderabbitai bot (Contributor) commented Dec 31, 2025

📝 Walkthrough

Version bump from v1.8.3 to v1.8.4 across build artifacts and release configurations. HTTP client timeouts increased from 15 to 60 seconds. CI workflow refactored with CodeQL upgrade, Gitleaks Docker conversion, new link validation job, and docs build job. dbt configuration updated with table materialization setting.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Version updates (v1.8.3 → v1.8.4)**<br>`.goreleaser.yml`, `Dockerfile`, `README.md`, `nfpm.yaml`, `testdata/main_latest_flag.golden` | Version string bumped consistently across release and build configurations; README examples updated to reference v1.8.4. |
| **CI/CD workflow enhancements**<br>`.github/workflows/ci.yml` | CodeQL action upgraded v3 → v4; Gitleaks migrated from step-based to Docker-based execution; new `validate-links` job with Lychee integration added as a dependency for the `build-docs-docker-image` job; updated artifact names and upload conditions. |
| **HTTP client timeout increases (15s → 60s)**<br>`cmd/card/cardinfo.go`, `cmd/card/cardlist.go`, `cmd/card/setslist.go` | Timeout extended across three card-related HTTP clients; `cardinfo.go` also refactored image decoding to read the full body before decoding, with intermediate error handling. |
| **dbt project configuration**<br>`card_data/pipelines/poke_cli_dbt/dbt_project.yml` | Version bumped to 1.8.4; added `models.poke_cli_dbt.materialized: table` setting; removed on-run-end hook for relationship creation. |
| **Test infrastructure**<br>`card_data/pipelines/tests/extract_sets_test.py` | New pytest module validating the `extract_sets_data` function; includes a mock API responses fixture and benchmark-marked tests covering DataFrame structure, null handling, empty series, and logo/symbol resilience. |
| **Documentation & link validation tooling**<br>`docs/Infrastructure_Guide/grafana.md`, `lychee.toml` | New Grafana Cloud monitoring guide for Supabase PostgreSQL; new link-checker configuration with a 30s timeout, 2 threads, and info-level logging. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • 1.7.4 #200: Overlapping version/release artifact updates (VERSION_NUMBER in ci workflow and main.version in build configurations)
  • 1.7.3 #196: Identical changes to ci workflow version, goreleaser, Dockerfile, nfpm.yaml, and dbt_project.yml (version bump and materialization config)
  • 1.8.1 #211: Shared dbt_project.yml modifications (materialized table setting and on-run-end hook removal)

Poem

🐰 A version hop from three-point-eight,
With workflows new and timeouts great—
Link validation keeps things neat,
Grafana dashboards now complete! 📊

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 57.14%, below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |
| Title check | ❓ Inconclusive | The title "1.8.4" is vague; it only references a version number without describing the actual modifications. | Use a descriptive title that summarizes the main change, such as "Bump version to 1.8.4 and update CI/CD pipeline". |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped because CodeRabbit's high-level summary is enabled. |
✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

codecov bot commented Dec 31, 2025

Codecov Report

❌ Patch coverage is 57.14286% with 3 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| cmd/card/cardinfo.go | 60.00% | 1 Missing and 1 partial ⚠️ |
| cmd/card/setslist.go | 0.00% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| cmd/card/cardlist.go | 90.47% <100.00%> (ø) |
| cmd/card/setslist.go | 32.43% <0.00%> (ø) |
| cmd/card/cardinfo.go | 73.33% <60.00%> (-4.45%) ⬇️ |

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
docs/Infrastructure_Guide/grafana.md (2)

14-17: Minor grammar improvement needed.

Consider hyphenating "nice to have" to "nice-to-have" on line 16 for proper compound adjective formation.

🔎 Proposed fix

```diff
-but a nice to have to view more details about the database's performance. The free tier on Grafana Cloud is
+but a nice-to-have to view more details about the database's performance. The free tier on Grafana Cloud is
```

24-24: Hyphenate compound adjective.

Consider hyphenating "left side menu" to "left-side menu" for consistency. This pattern appears on lines 24, 39, and 56.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 975eb97 and afdb4bc.

⛔ Files ignored due to path filters (1)
  • docs/assets/grafana-dashboard-example.png is excluded by !**/*.png
📒 Files selected for processing (13)
  • .github/workflows/ci.yml
  • .goreleaser.yml
  • Dockerfile
  • README.md
  • card_data/pipelines/poke_cli_dbt/dbt_project.yml
  • card_data/pipelines/tests/extract_sets_test.py
  • cmd/card/cardinfo.go
  • cmd/card/cardlist.go
  • cmd/card/setslist.go
  • docs/Infrastructure_Guide/grafana.md
  • lychee.toml
  • nfpm.yaml
  • testdata/main_latest_flag.golden
🧰 Additional context used
🧬 Code graph analysis (1)
card_data/pipelines/tests/extract_sets_test.py (1)
card_data/pipelines/defs/extract/tcgdex/extract_sets.py (1)
  • extract_sets_data (22-56)
🪛 LanguageTool
docs/Infrastructure_Guide/grafana.md

[grammar] ~16-~16: Use a hyphen to join words.
Context: ...not required for the project, but a nice to have to view more details about the d...

(QB_NEW_EN_HYPHEN)


[grammar] ~16-~16: Use a hyphen to join words.
Context: ... required for the project, but a nice to have to view more details about the data...

(QB_NEW_EN_HYPHEN)


[grammar] ~24-~24: Use a hyphen to join words.
Context: ...point._ 1. On the homepage, on the left side menu, click on the drop-down for co...

(QB_NEW_EN_HYPHEN)


[grammar] ~39-~39: Use a hyphen to join words.
Context: ...ted via the Grafana UI._ 1. On the left side menu, click on Dashboards. 2. I...

(QB_NEW_EN_HYPHEN)


[style] ~48-~48: Using many exclamation marks might seem excessive (in this case: 6 exclamation marks for a text that’s 2659 characters long)
Context: .../assets/grafana-dashboard-example.png) !!! warning The uid for will need to...

(EN_EXCESSIVE_EXCLAMATION)


[grammar] ~56-~56: Use a hyphen to join words.
Context: ...rometheus data source. * On the left side menu, click on the Connections ...

(QB_NEW_EN_HYPHEN)


[grammar] ~57-~57: Ensure spelling is correct
Context: ...or "prom" and then click on the default Promethesus data source. * In the URL of that p...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: tests
  • GitHub Check: benchmarks
🔇 Additional comments (21)
.goreleaser.yml (1)

17-17: LGTM!

Version bump to v1.8.4 aligns with the PR objective and matches updates across other build artifacts.

README.md (1)

5-5: LGTM!

Docker image references updated to v1.8.4, aligning with the version bump across the repository.

Also applies to: 99-99, 103-103

nfpm.yaml (1)

4-4: LGTM!

Package version updated to v1.8.4, consistent with the release version bump.

Dockerfile (1)

11-11: LGTM!

Build-time version flag updated to v1.8.4, matching the release configuration in .goreleaser.yml and other artifacts.

cmd/card/cardlist.go (1)

175-175: Verify the rationale for the 4x timeout increase.

The HTTP client timeout increased from 15 seconds to 60 seconds. While this change is applied consistently across card data fetch operations (also in setslist.go and cardinfo.go), a 60-second timeout is quite long for an interactive CLI tool and may degrade user experience.

Please confirm:

  • Is there a specific performance issue with the Supabase endpoint that necessitates this increase?
  • Have you considered alternative approaches such as response streaming, pagination, or query optimization?
  • Should there be user feedback (progress indicator) for operations that may take this long?
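For illustration, a minimal sketch (not the project's code) of a per-request deadline alternative: keep the client default conservative and grant only the slow calls extra headroom via context. The URL and function name here are hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchWithDeadline applies a per-request deadline instead of relying on a
// 60-second client-wide Timeout, so only calls that need the headroom get it.
func fetchWithDeadline(url string, d time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body) // read inside the deadline's scope
}

func main() {
	// Hypothetical endpoint; the real URLs live in cmd/card/*.go.
	body, err := fetchWithDeadline("https://example.com/cards", 60*time.Second)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Printf("fetched %d bytes\n", len(body))
}
```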
cmd/card/setslist.go (1)

125-125: Timeout increased consistently with other card data fetchers.

HTTP client timeout increased from 15 to 60 seconds, matching the change in cardlist.go and cardinfo.go. See the comment on cardlist.go regarding verification of this timeout increase.

cmd/card/cardinfo.go (2)

25-25: Timeout increased consistently with other card data fetchers.

HTTP client timeout increased from 15 to 60 seconds, matching changes in cardlist.go and setslist.go. See the comment on cardlist.go for verification of this timeout increase.


41-48: Decode logic refactored to avoid timeout during image processing.

The image response is now read into memory (up to 10MB limit) before decoding, preventing slow decode operations from triggering HTTP timeouts. This is a reasonable approach for a CLI tool fetching card images, as it separates network I/O concerns from image processing time. The added error handling for io.ReadAll is appropriate.
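For context, a minimal sketch of the read-then-decode pattern described above, assuming the standard library `image` package and a 10MB cap; the names and exact limit handling are illustrative, not the actual `cmd/card/cardinfo.go` code.

```go
package card

import (
	"bytes"
	"fmt"
	"image"
	_ "image/jpeg" // register decoders for formats the API may return
	_ "image/png"
	"io"
	"net/http"
	"time"
)

const maxImageBytes = 10 << 20 // assumed 10MB cap

func fetchAndDecode(url string) (image.Image, error) {
	client := &http.Client{Timeout: 60 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Read the full body into memory first so a slow decode cannot hold
	// the connection open past the HTTP timeout.
	data, err := io.ReadAll(io.LimitReader(resp.Body, maxImageBytes))
	if err != nil {
		return nil, fmt.Errorf("reading image body: %w", err)
	}

	img, _, err := image.Decode(bytes.NewReader(data))
	if err != nil {
		return nil, fmt.Errorf("decoding image: %w", err)
	}
	return img, nil
}
```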

testdata/main_latest_flag.golden (1)

1-7: Golden file is correct and does not need updating.

The golden file correctly shows v1.8.3, which is the actual latest release on GitHub. The --latest flag queries the GitHub API dynamically at runtime, so the golden file captures the expected output based on GitHub's current state. Although the PR bumps version references to v1.8.4 in internal build files (nfpm.yaml, Dockerfile, etc.), that version has not yet been released on GitHub. Once v1.8.4 is released, the test will automatically reflect the new version since it queries GitHub dynamically.

Likely an incorrect or invalid review comment.
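For context, a minimal sketch of the kind of dynamic lookup described: querying GitHub's public latest-release endpoint at runtime. The struct and output handling are assumptions; only the endpoint and the `tag_name` field are standard GitHub API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("https://api.github.com/repos/digitalghost-dev/poke-cli/releases/latest")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Only the tag name is needed to compare against the built-in version.
	var release struct {
		TagName string `json:"tag_name"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&release); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("latest release:", release.TagName) // e.g. v1.8.3 until v1.8.4 ships
}
```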

card_data/pipelines/poke_cli_dbt/dbt_project.yml (3)

2-2: LGTM! Version bump is consistent.

The version bump to 1.8.4 aligns with the PR objectives and matches the version updates across other files in this release.


18-21: Table materialization configured.

The addition of +materialized: table sets the default materialization strategy for all models in the poke_cli_dbt project to create physical tables. This is appropriate for production data that needs to be queried frequently.
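For reference, a minimal sketch of what this setting looks like in a `dbt_project.yml`; everything except the `+materialized` line is an assumption about the surrounding file.

```yaml
name: poke_cli_dbt
version: "1.8.4"

models:
  poke_cli_dbt:
    +materialized: table  # default every model in this project to a physical table
```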


23-24: Inconsistency with AI summary.

The AI summary claims the on-run-end action was removed, but it's still present in the code. The code shows create_relationships() is still configured to run after dbt executions.

card_data/pipelines/tests/extract_sets_test.py (2)

12-69: Fixture structure looks good.

The mock API response fixture provides comprehensive test data covering multiple series, null values, and edge cases.


72-168: Good test coverage.

The tests cover important scenarios including successful extraction, empty sets, and null card counts. The assertions validate both DataFrame structure and content appropriately.

lychee.toml (1)

1-9: LGTM! Reasonable link validation configuration.

The Lychee configuration is appropriate for the new link validation job. The 30-second timeout and acceptance of 429 (rate limit) responses are sensible defaults for checking documentation links.
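For reference, a plausible shape for this configuration based on lychee's documented config keys; the actual `lychee.toml` in the PR may differ.

```toml
verbose = "info"  # info-level logging
timeout = 30      # seconds per request
threads = 2
accept = ["429"]  # treat rate-limited responses as valid links
```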

.github/workflows/ci.yml (6)

34-34: LGTM! Version bump is consistent.

The version bump to v1.8.4 aligns with the PR objectives and matches updates across other files.


71-82: Gitleaks refactored to Docker-based execution.

The switch to running Gitleaks via Docker container is a good improvement for consistency and reproducibility. The if: always() condition on the SARIF upload ensures results are reported even when secrets are detected.
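As a reference point, a hypothetical Docker-based Gitleaks step paired with an always-run SARIF upload; the image tag, paths, and options are assumptions, not the actual workflow contents.

```yaml
- name: Run Gitleaks
  run: |
    docker run --rm -v "$PWD:/repo" ghcr.io/gitleaks/gitleaks:latest \
      detect --source /repo --report-format sarif --report-path /repo/results.sarif

- name: Upload Gitleaks SARIF
  if: always()  # report findings even when the scan step fails on detected secrets
  uses: github/codeql-action/upload-sarif@v4
  with:
    sarif_file: results.sarif
```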


252-273: Good addition: link validation job.

The new validate-links job adds automated checking for broken links in documentation. The fail: false setting is appropriate for a newly introduced check, preventing it from blocking builds while the team addresses any existing broken links.
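For illustration, a minimal sketch of what such a job could look like using lycheeverse/lychee-action; the inputs and paths are assumptions rather than the actual workflow contents.

```yaml
validate-links:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Check documentation links
      uses: lycheeverse/lychee-action@v2
      with:
        args: --config lychee.toml docs/
        fail: false  # report broken links without blocking the build
```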


275-308: Well-structured docs build job.

The new build-docs-docker-image job is properly configured with:

  • Dependency on link validation ensuring docs quality
  • Sparse checkout for efficiency
  • Standard Docker build and export pattern

247-250: Inconsistency with AI summary.

The AI summary claims emoji were removed from these messages, but both lines still contain emoji (⚠️ and ✅). The current implementation is fine and provides visual clarity.


58-60: CodeQL action v4 upgrade requires verification of SARIF compliance.

The upgrade from v3 to v4 introduces a breaking change: v4 rejects uploads containing multiple SARIF runs with the same tool+category (where v3 only warned). This enforcement is effective July 22, 2025.

Verify that the results.sarif output complies with v4's requirement (each run must have a distinct tool or category). If the workflow uploads SARIF from multiple tools under the same category, the upload will fail. CodeQL-only runs are typically compatible, but confirm there are no other tools contributing to the same SARIF file, or ensure each has a distinct category.
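To illustrate the v4 requirement, a sketch of two SARIF uploads kept distinct via the `category` input; the file names and category values are illustrative.

```yaml
- uses: github/codeql-action/upload-sarif@v4
  with:
    sarif_file: results.sarif
    category: codeql-go  # each tool+category pair must be unique in v4

- uses: github/codeql-action/upload-sarif@v4
  with:
    sarif_file: gitleaks.sarif
    category: gitleaks   # a second tool gets its own category
```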

Comment on lines +12 to +13
```python
@pytest.fixture
def mock_api_response():
```

⚠️ Potential issue | 🔴 Critical

Critical: Fixture name mismatch.

The fixture is named mock_api_response but the tests on lines 74 and 105 reference mock_series_responses. This will cause pytest to fail with a "fixture not found" error.

🔎 Proposed fix

```diff
 @pytest.fixture
-def mock_api_response():
+def mock_series_responses():
     """Sample API responses matching tcgdex series format with sets"""
```
🤖 Prompt for AI Agents
In card_data/pipelines/tests/extract_sets_test.py around lines 12 to 13, the
pytest fixture is defined as mock_api_response but tests at lines ~74 and ~105
reference mock_series_responses causing a "fixture not found" error; fix by
renaming the fixture to mock_series_responses (or update the test references to
mock_api_response) so names match, and ensure any import/usage elsewhere in the
file uses the chosen name consistently.

Comment on lines +49 to +52
```markdown
!!! warning

    The `uid` for will need to be updated manually if the prompt for choosing a data souce does not appear. If so, the dashboard
    will have errors on each visual. To fix this, perform the steps below.
```

⚠️ Potential issue | 🟡 Minor

Fix typo: "souce" → "source".

Line 51 contains a spelling error: "data souce" should be "data source".

🔎 Proposed fix

```diff
-    The `uid` for will need to be updated manually if the prompt for choosing a data souce does not appear. If so, the dashboard
+    The `uid` for will need to be updated manually if the prompt for choosing a data source does not appear. If so, the dashboard
```
🤖 Prompt for AI Agents
In docs/Infrastructure_Guide/grafana.md around lines 49 to 52, fix the spelling
typo "data souce" to "data source" in the warning paragraph; update the sentence
to read "...if the prompt for choosing a data source does not appear." and
ensure surrounding punctuation/grammar remains correct.

Comment on lines +55 to +58
```markdown
1. Grab the `uid` for the Prometheus data source.
    * On the left side menu, click on the **Connections** dropdown. Then, choose **Data Sources**.
    * Search for "prom" and then click on the default Promethesus data source.
    * In the URL of that page, the last section is the `uid`: `/datasources/edit/<uid>`.
```

⚠️ Potential issue | 🟡 Minor

Fix typo: "Promethesus" → "Prometheus".

Line 57 contains a spelling error in the product name.

🔎 Proposed fix

```diff
-    * Search for "prom" and then click on the default Promethesus data source.
+    * Search for "prom" and then click on the default Prometheus data source.
```

🤖 Prompt for AI Agents
docs/Infrastructure_Guide/grafana.md around lines 55 to 58: there's a
misspelling of "Prometheus" written as "Promethesus" on line 57; fix the typo by
replacing "Promethesus" with the correct product name "Prometheus" so the
sentence reads "click on the default Prometheus data source" and update any
other occurrences in this block if present.
