
Conversation

@joelwurtz (Contributor) commented on Nov 13, 2024:

Not sure you want this, but we have done this in our fork because it matches how we use the library.

We mainly use this library in a binary on our server that reads logs from a message queue and parses them, including device detection via this library.

We run multiple threads to parallelize the log parsing, and since this library consumes most of the CPU (no blame here, I totally understand why it's slow, but it's a fact), we use a dedicated device detector in each thread so the other threads are not blocked by it.

To avoid having a separate cache for each thread, we added this code: even with multiple device detector instances, they all use the same cache, which avoids consuming extra memory for the same data.
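
For illustration, here is a minimal sketch of that wiring, assuming a hypothetical `Detector` type standing in for the library's device detector: cloning a moka cache is cheap and every clone points at the same underlying store, so each thread gets its own detector while all of them share one cache.

```rust
// Sketch only: `Detector` and `parse` are hypothetical stand-ins for the
// library's device-detector type; the point is the shared-cache wiring.
use std::thread;

use moka::sync::Cache;

struct Detector {
    // Every clone of a moka Cache shares the same underlying store,
    // so all detectors see the same entries and the same memory budget.
    cache: Cache<String, String>,
}

impl Detector {
    fn new(cache: Cache<String, String>) -> Self {
        Self { cache }
    }

    fn parse(&self, user_agent: &str) -> String {
        self.cache.get(user_agent).unwrap_or_else(|| {
            // Placeholder for the real (CPU-heavy) detection work.
            let result = format!("parsed:{user_agent}");
            self.cache.insert(user_agent.to_owned(), result.clone());
            result
        })
    }
}

fn main() {
    let shared_cache: Cache<String, String> = Cache::new(10_000);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Each worker thread gets its own detector, but the same cache.
            let detector = Detector::new(shared_cache.clone());
            thread::spawn(move || detector.parse("Mozilla/5.0 (X11; Linux x86_64)"))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```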

We also added the get_size library, which lets us estimate the in-memory size of a cache entry for moka, so we can set the cache limit in terms of memory rather than number of items.
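
A rough sketch of how that can look, assuming the `get_size` crate with its derive feature and a hypothetical `DeviceInfo` value type: moka's `weigher` reports each entry's approximate byte size, so `max_capacity` becomes a memory budget instead of an item count.

```rust
// Sketch only: `DeviceInfo` is a hypothetical cached value type; the real
// type would derive `GetSize` the same way (requires the "derive" feature
// of the get_size crate).
use get_size::GetSize;
use moka::sync::Cache;

#[derive(Clone, GetSize)]
struct DeviceInfo {
    brand: String,
    model: String,
    os: String,
}

fn build_cache() -> Cache<String, DeviceInfo> {
    Cache::builder()
        // Weigh each entry by its approximate size in bytes (key + value).
        .weigher(|key: &String, value: &DeviceInfo| -> u32 {
            let bytes = key.get_size() + value.get_size();
            bytes.try_into().unwrap_or(u32::MAX)
        })
        // With a weigher in place, max_capacity is a total weight,
        // i.e. roughly 256 MiB of cached entries.
        .max_capacity(256 * 1024 * 1024)
        .build()
}
```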

@joelwurtz (Contributor, Author) commented:

Hmm, I forgot there was an FFI export of this lib, so I'm not sure an async cache works well with futures there. There are solutions for that, but feel free to just close this if it's not a wanted behavior (I can also make changes).

@mindreader (Contributor) commented:

I saw you make this change to the cache in your fork some time ago and I almost reached out out of curiosity.

I like the change in general, but using an async moka cache means that every function that does anything has to become async, even if you don't use a cache in the first place. It is not clear to me that there are any benefits to it either: the sync moka cache is thread safe and very fast, so it is unlikely to be a bottleneck. Did you find otherwise?

get-size is interesting and that's a good feature to add. I had wondered if there was a way to automatically choose a cache entry size.

@joelwurtz (Contributor, Author) commented:

> I like the change in general, but using an async moka cache means that every function that does anything has to become async, even if you don't use a cache in the first place. It is not clear to me that there are any benefits to it either: the sync moka cache is thread safe and very fast, so it is unlikely to be a bottleneck. Did you find otherwise?

I guess we could use that; it's just that I'm using this lib in an async context, and it's generally better to avoid blocking in those cases. But I can make the changes.
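
For context, a minimal sketch (assuming Tokio as the runtime) of calling the sync moka cache directly from async code: its get/insert operations are short and do not await, so they can be used inside async tasks without switching to the async cache.

```rust
// Sketch only: the sync moka cache used from async tasks.
// Assumes tokio with the "macros" and "rt-multi-thread" features.
use moka::sync::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, String> = Cache::new(10_000);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let cache = cache.clone();
            tokio::spawn(async move {
                // No .await needed: these calls do not block on I/O.
                if cache.get("ua").is_none() {
                    cache.insert("ua".to_owned(), "parsed".to_owned());
                }
            })
        })
        .collect();

    for handle in handles {
        handle.await.unwrap();
    }
}
```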

