Conversation

@neoXfire

Hi all,

Here is a proposal to enrich PoolingHttpClientConnectionManagerBuilder with a new method to customize the ConnPoolListener<HttpRoute> used in the underlying pool.

This will allow implementing custom instrumentation, for example to measure the time spent waiting in the pool before a connection becomes available.
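
For illustration, here is a minimal sketch of how such a listener might be wired in. The builder method name `setConnPoolListener` is an assumption about the API proposed here, not an existing method; `ConnPoolListener` and `ConnPoolStats` are the existing core5 types.

```java
import org.apache.hc.client5.http.HttpRoute;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.core5.pool.ConnPoolListener;
import org.apache.hc.core5.pool.ConnPoolStats;

public class PoolListenerExample {

    public static void main(final String[] args) {
        // Hypothetical sketch: setConnPoolListener is an assumed name for the
        // new builder method proposed in this PR.
        final PoolingHttpClientConnectionManager cm = PoolingHttpClientConnectionManagerBuilder.create()
                .setConnPoolListener(new ConnPoolListener<HttpRoute>() {

                    @Override
                    public void onLease(final HttpRoute route, final ConnPoolStats<HttpRoute> stats) {
                        // Called when a connection is leased from the pool; a real
                        // implementation could record timestamps to measure wait time.
                        System.out.println("lease " + route
                                + " pending=" + stats.getTotalStats().getPending());
                    }

                    @Override
                    public void onRelease(final HttpRoute route, final ConnPoolStats<HttpRoute> stats) {
                        // Called when a connection is released back to the pool.
                        System.out.println("release " + route
                                + " leased=" + stats.getTotalStats().getLeased());
                    }
                })
                .build();
    }
}
```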

Regards,

@ok2c
Member

ok2c commented Dec 17, 2025

@neoXfire Sure. We can do that. Please add a symmetric change to PoolingAsyncClientConnectionManager and its builder, and add the @since 5.7 tag to all new methods.

@ok2c
Member

ok2c commented Dec 17, 2025

@neoXfire Please fix the code style violations. Looks good otherwise.

neoXfire force-pushed the custom-connPoolListener branch 2 times, most recently from 9ea2c7e to 6f99a00 on December 17, 2025 at 10:50
@neoXfire
Author

Hello @ok2c,

Thank you for the quick feedback.

Please note that one of the new pool implementations in core5 (RouteSegmentedConnPool) does not support the ConnPoolListener interface.

I was wondering if I should also update the core so that pool event subscription is consistent across all implementations?

@ok2c
Member

ok2c commented Dec 17, 2025

> Hello @ok2c,
>
> Thank you for the quick feedback.
>
> Please note that one of the new pool implementations in core5 (RouteSegmentedConnPool) does not support the ConnPoolListener interface.
>
> I was wondering if I should also update the core so that pool event subscription is consistent across all implementations?

@neoXfire Yes, please do so. Could you please also add a TODO comment to OFFLOCK policy about missing ConnPoolListener support in both connection managers?

neoXfire marked this pull request as draft on December 17, 2025 at 12:37
@arturobernalg
Member

> > Hello @ok2c,
> > Thank you for the quick feedback.
> > Please note that one of the new pool implementations in core5 (RouteSegmentedConnPool) does not support the ConnPoolListener interface.
> > I was wondering if I should also update the core so that pool event subscription is consistent across all implementations?
>
> @neoXfire Yes, please do so. Could you please also add a TODO comment to OFFLOCK policy about missing ConnPoolListener support in both connection managers?

I’m not convinced OFFLOCK needs ConnPoolListener at all.
RouteSegmentedConnPool is intentionally lean / low-level: adding listener callbacks means extra branches and memory traffic on the hot lease/release paths, which is exactly what OFFLOCK is trying to minimize.

…syncClientConnectionManagerBuilder` to configure a listener for connection pool events (onLease/onRelease)

This can be used to implement custom instrumentation (time blocked in the HTTP pool, for example)
neoXfire force-pushed the custom-connPoolListener branch from 6f99a00 to d67ac66 on December 17, 2025 at 13:40
@ok2c
Member

ok2c commented Dec 17, 2025

> I’m not convinced OFFLOCK needs ConnPoolListener at all.
> RouteSegmentedConnPool is intentionally lean / low-level: adding listener callbacks means extra branches and memory traffic on the hot lease/release paths, which is exactly what OFFLOCK is trying to minimize.

@arturobernalg This may well be the case, but a null listener should have no performance impact, should it? And if the user explicitly chooses the OFFLOCK policy and a custom listener, why should we deny that? Just because the pool may have sub-optimal performance?

@ok2c
Member

ok2c left a comment

I am in favor of committing this change-set

@arturobernalg
Member

@ok2c Agreed. A null listener is a no-op on the hot path apart from a predictable null check, so it should not have any measurable impact.

@neoXfire
Author

> > > Hello @ok2c,
> > > Thank you for the quick feedback.
> > > Please note that one of the new pool implementations in core5 (RouteSegmentedConnPool) does not support the ConnPoolListener interface.
> > > I was wondering if I should also update the core so that pool event subscription is consistent across all implementations?
> >
> > @neoXfire Yes, please do so. Could you please also add a TODO comment to OFFLOCK policy about missing ConnPoolListener support in both connection managers?
>
> I'm not convinced OFFLOCK needs ConnPoolListener at all. RouteSegmentedConnPool is intentionally lean / low-level: adding listener callbacks means extra branches and memory traffic on the hot lease/release paths, which is exactly what OFFLOCK is trying to minimize.

I think that with an appropriate implementation (the notification callbacks stored in a final ConnPoolListener&lt;T&gt; field), in a sufficiently hot method and for the default case with no callbacks set up (connPoolListener == null), the JIT can optimize this down to the same code as if the conditional block firing the callback events had never been added.
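
A minimal sketch of that pattern follows. The class and method names here are hypothetical, not the actual core5 pool internals:

```java
import org.apache.hc.core5.pool.ConnPoolListener;
import org.apache.hc.core5.pool.ConnPoolStats;

// Hypothetical sketch of the pattern described above: the listener is held in
// a final field and guarded by a single null check on the hot path. When no
// listener is configured, the branch is never taken and is perfectly
// predictable, so the JIT can treat it as effectively dead code.
final class InstrumentedPool<T> {

    private final ConnPoolListener<T> connPoolListener; // null when no listener is configured

    InstrumentedPool(final ConnPoolListener<T> connPoolListener) {
        this.connPoolListener = connPoolListener;
    }

    void fireOnLease(final T route, final ConnPoolStats<T> stats) {
        // Default-case hot path cost: one predictable null check, nothing else.
        if (connPoolListener != null) {
            connPoolListener.onLease(route, stats);
        }
    }
}
```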

If there is no intent to drop STRICT and LAX support in the future and replace them with the new lock-free OFFLOCK implementation only, that is fine and I can probably stick to the two legacy implementations for my change. However, if that is actually the plan, I think you should try to find the right balance between raw performance and third-party pluggability (observability, instrumentation...). For many people, that last aspect is as important as the first.

Regards,

neoXfire marked this pull request as ready for review on December 17, 2025 at 15:49
ok2c merged commit 7746feb into apache:master on Dec 18, 2025
10 checks passed
@ok2c
Member

ok2c commented Dec 18, 2025

@neoXfire Will you also create a follow-up pull request for OFFLOCK pool in core? @arturobernalg would you rather do it yourself?

@arturobernalg
Member

> @neoXfire Will you also create a follow-up pull request for OFFLOCK pool in core? @arturobernalg would you rather do it yourself?

I'll do the change, @ok2c. I'm just waiting for the performance test run to finish.
