
Conversation

@hagrawal4 (Collaborator) commented Nov 26, 2025

Type of Change

  • Refactoring

What changed?

  • Added a user interaction (a click on the page) so the browser metric can resolve.
  • Updated the performance test logic to record the RFU (Ready For User) time instead of the time taken to resolve the browser metric via the bulk endpoint (see the sketch after this list).
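
For illustration, a minimal sketch of what the user-interaction step could look like, assuming a Selenium WebDriver instance. The helper name and locator here are hypothetical, not the PR's actual implementation:

```python
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By


def trigger_user_interaction(webdriver):
    # Hypothetical helper: click the page body so dc-browser-metric can
    # finalize LCP, which only resolves after a user interaction.
    body = webdriver.find_element(By.TAG_NAME, "body")
    ActionChains(webdriver).move_to_element(body).click().perform()
```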

Context (Why?)

dc-browser-metric v10.4.0 introduced a new LCP metric that requires a user interaction before the browser metric can resolve. The performance tests were unable to resolve the browser metric, which caused a performance regression.

Here's the detailed analysis doc on performance regression: https://hello.atlassian.net/wiki/spaces/CSD/pages/6133557390/Performance+Testing+Challenges

Tests

  • DCAPT version comparison results:
    9.2.10 vs 10.2.0-beta7 link; 10.2.0-beta7 has the newer browser-metric v10.4.0
    9.2.10 vs 10.2.0-rc1 link; 10.2.0-rc1 doesn't have the newer browser-metric changes

  • Some of the non-DOM actions show a > 10% p90_diff, which is expected: the new logic adds RFU time to the tests, and in some scenarios this is greater than the time to resolve the older browser metric (without LCP).

@harippriya-atlassian

@hagrawal4 can you please add a link to the build where these test changes were executed successfully? Thanks

@hagrawal4 hagrawal4 force-pushed the issue/DAPT-26-add-user-interaction-to-resolve-browser-metric branch from 7954ce7 to e9cee29 on December 4, 2025 11:14
@rjatkins commented Dec 5, 2025

I'm actually not expecting any measurable change in when Ready For User is triggered with the new dc-browser-metrics. There must be something wrong with the implementation if it's causing RFU to increase. Do we take a timestamp as soon as metric.end() is invoked? Why would that be changed by resolving every metric we include with the RFU data?

return wrapper


def measure_with_browser_metrics(interaction_name, webdriver, datasets, measure_func, post_metric_measure_func=None):

Good impl and metrics collection
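
Based only on the signature visible in the diff, here is a hedged sketch of how such a wrapper could keep metric resolution out of the timed section. The body, timer placement, and use of the parameters are assumptions, not the PR's actual code:

```python
import time


def measure_with_browser_metrics(interaction_name, webdriver, datasets,
                                 measure_func, post_metric_measure_func=None):
    # Time only the interaction itself; this is the RFU figure that would
    # be reported for interaction_name (reporting via webdriver/datasets
    # is omitted in this sketch).
    start = time.monotonic()
    measure_func()
    rfu_time = time.monotonic() - start

    # Resolve the browser metric after the timer stops, so polling the
    # bulk endpoint no longer inflates the reported duration.
    if post_metric_measure_func:
        post_metric_measure_func()

    return rfu_time
```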

@hagrawal4 (Collaborator, Author)

> I'm actually not expecting any measurable change in when Ready For User is triggered with the new dc-browser-metrics. There must be something wrong with the implementation if it's causing RFU to increase. Do we take a timestamp as soon as metric.end() is invoked? Why would that be changed by resolving every metric we include with the RFU data?

The tests collect several measurements per run, including RFU. There is no increase in the RFU timing itself. measure_browser_navi_metrics is what caused the increase earlier: it makes multiple attempts, with a 0.5s sleep between them, to resolve the browser metric. With this PR we no longer count the time to resolve the browser metric, only the RFU time (illustrated in the sketch below).
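
For context, an illustrative sketch of the polling pattern described above, assuming a Selenium WebDriver; resolve_browser_metric is a hypothetical stand-in for the bulk-endpoint check, not the toolkit's actual API:

```python
import time


def resolve_browser_metric(webdriver):
    # Hypothetical stand-in: ask the page whether a navigation entry has
    # been recorded; the real check hits the bulk endpoint.
    return webdriver.execute_script(
        "return performance.getEntriesByType('navigation').length > 0;"
    )


def wait_for_browser_metric(webdriver, max_attempts=20, delay=0.5):
    # Each failed attempt adds `delay` seconds. With the new LCP metric
    # unresolved (no user interaction), this loop runs to exhaustion and
    # adds up to max_attempts * delay seconds to the measured time.
    for _ in range(max_attempts):
        if resolve_browser_metric(webdriver):
            return True
        time.sleep(delay)
    return False
```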

More details here: https://atlassian.slack.com/archives/CFHUQ2YQP/p1764913016398969?thread_ts=1763443821.738109&cid=CFHUQ2YQP

