
Conversation

@chatton chatton commented Nov 10, 2025

Overview

To produce a smoother graph, we guarantee that a submission duration metric is always reported within the 60-second window.

@chatton chatton requested a review from auricom November 10, 2025 12:40
@gemini-code-assist

Summary of Changes

Hello @chatton, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the metrics system by implementing a periodic refresh mechanism for submission duration metrics. The change ensures that monitoring dashboards consistently display up-to-date submission duration data, preventing gaps or staleness in graphs when actual submissions are sparse, thereby improving the reliability and readability of performance insights.

Highlights

  • Continuous Metric Reporting: Introduced a mechanism to ensure the 'submission duration' metric is continuously reported within a 60-second window, even if new submissions are infrequent, to provide smoother graph visualizations.
  • New Refresh Mechanism: A RefreshSubmissionDuration method was added to the Metrics struct, which re-observes the last recorded submission durations for all tracked chain ID and submission type combinations.
  • Periodic Refresh Trigger: A time.NewTicker was integrated into the verifier's ExportMetrics loop to trigger the RefreshSubmissionDuration method every 10 seconds, ensuring metrics stay fresh.
  • Internal Duration Tracking: The Metrics struct now includes a lastSubmissionDurations map to store the most recent submission duration for each unique chainID:namespace key, enabling the refresh functionality.
  • Enhanced Testing: New unit tests were added to cover the functionality of RecordSubmissionDuration and RefreshSubmissionDuration, including a test for refreshing when no durations have been recorded.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mechanism to periodically refresh the submission_duration metric, ensuring its presence in scrapes for smoother graphing. This is achieved by adding a ticker in the verifier to call a new RefreshSubmissionDuration function. While the overall approach is sound, I've identified a high-severity bug in the key parsing logic for refreshing metrics, which could lead to silent failures if a chainID contains a colon. I've provided a robust fix for this. Additionally, I've suggested improving test coverage for the new refresh functionality to include a positive test case.

Comment on lines +478 to 489
func (m *Metrics) RefreshSubmissionDuration() {
	m.mu.Lock()
	defer m.mu.Unlock()

	for key, duration := range m.lastSubmissionDurations {
		// assuming format "chainID:namespace"
		parts := strings.Split(key, ":")
		if len(parts) == 2 {
			m.SubmissionDuration.WithLabelValues(parts[0], parts[1]).Observe(duration.Seconds())
		}
	}
}

Severity: high

The current method of parsing the metric key using strings.Split(key, ":") is not robust. If a chainID contains a colon, the key will be split into more than two parts, causing the len(parts) == 2 check to fail. This would result in the metric for that series not being refreshed, without any error or warning. A more robust approach is to split the key at the last colon, which correctly separates the chainID from the submissionType, assuming the submissionType itself does not contain a colon.

Suggested change

Current:

func (m *Metrics) RefreshSubmissionDuration() {
	m.mu.Lock()
	defer m.mu.Unlock()

	for key, duration := range m.lastSubmissionDurations {
		// assuming format "chainID:namespace"
		parts := strings.Split(key, ":")
		if len(parts) == 2 {
			m.SubmissionDuration.WithLabelValues(parts[0], parts[1]).Observe(duration.Seconds())
		}
	}
}

Proposed:

func (m *Metrics) RefreshSubmissionDuration() {
	m.mu.Lock()
	defer m.mu.Unlock()

	for key, duration := range m.lastSubmissionDurations {
		lastColonIdx := strings.LastIndex(key, ":")
		// Ensure the key is in the expected "chainID:submissionType" format and both parts are non-empty.
		if lastColonIdx > 0 && lastColonIdx < len(key)-1 {
			chainID := key[:lastColonIdx]
			submissionType := key[lastColonIdx+1:]
			m.SubmissionDuration.WithLabelValues(chainID, submissionType).Observe(duration.Seconds())
		}
	}
}
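
The difference between the two splitting strategies can be checked in isolation. A minimal sketch follows; splitKey is a hypothetical helper, and the sample key assumes labels are joined as chainID + ":" + submissionType as the review comment describes.

```go
package main

import (
	"fmt"
	"strings"
)

// splitKey separates a "chainID:submissionType" key at the LAST colon,
// so a chainID that itself contains colons is preserved intact.
func splitKey(key string) (chainID, submissionType string, ok bool) {
	i := strings.LastIndex(key, ":")
	if i <= 0 || i == len(key)-1 {
		return "", "", false // malformed: no colon, or an empty part
	}
	return key[:i], key[i+1:], true
}

func main() {
	// A naive strings.Split breaks when the chainID contains a colon:
	parts := strings.Split("rollkit:test:header", ":")
	fmt.Println(len(parts)) // 3 — so a len(parts) == 2 check silently skips this series

	// Splitting at the last colon recovers both labels:
	chainID, submissionType, ok := splitKey("rollkit:test:header")
	fmt.Println(chainID, submissionType, ok) // rollkit:test header true
}
```

An alternative with the same effect is to store the two label values separately (e.g. a struct key in the map) and avoid string parsing entirely.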

Comment on lines +556 to +564
func TestMetrics_RefreshSubmissionDuration_Empty(t *testing.T) {
	reg := prometheus.NewRegistry()
	m := NewWithRegistry("test", reg)

	// call refresh without any recorded values - should not panic
	require.NotPanics(t, func() {
		m.RefreshSubmissionDuration()
	})
}

Severity: medium

This test correctly ensures that RefreshSubmissionDuration doesn't panic when there are no durations to refresh. However, there is no test case for the "happy path" where it successfully refreshes a metric. It would be beneficial to add a test that:

  1. Records a submission duration.
  2. Calls RefreshSubmissionDuration.
  3. Verifies that the Prometheus summary metric has been updated accordingly.

This might require a helper function to inspect summary metrics, as the existing getMetricValue is designed for gauges. You could, for example, gather metrics and check the _count or _sum for the summary.

@chatton chatton merged commit a29e844 into main Nov 12, 2025
3 checks passed
@chatton chatton deleted the cian/fix-submission-duration-sections branch November 12, 2025 13:16