High performance, thread-safe in-memory caching primitives for .NET.

# Installing via NuGet

`Install-Package BitFaster.Caching`
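
Or, equivalently, using the standard .NET CLI:

`dotnet add package BitFaster.Caching`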

# Quick Start

Please refer to the [wiki](https://github.com/bitfaster/BitFaster.Caching/wiki) for more detailed documentation.

## ConcurrentLru

`ConcurrentLru` is intended as a lightweight, drop-in replacement for `ConcurrentDictionary`, and a faster alternative to the `System.Runtime.Caching.MemoryCache` family of classes (e.g. `HttpRuntime.Cache`, `System.Web.Caching`, etc.).

Choose a capacity and use it just like `ConcurrentDictionary`, but with bounded size:

```csharp
int capacity = 666;
var lru = new ConcurrentLru<int, SomeItem>(capacity);

var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
```
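
Beyond `GetOrAdd`, lookups and removals mirror `ConcurrentDictionary`. A minimal sketch (the `TryGet` and `TryRemove` members are assumed here from the dictionary-like API; see the wiki for the full surface):

```csharp
// look up without adding (assumed TryGet, as in ConcurrentDictionary)
if (lru.TryGet(1, out SomeItem hit))
{
    Console.WriteLine(hit);
}

// remove a key; returns false if it was not present (assumed TryRemove)
bool removed = lru.TryRemove(1);
```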

## Time based eviction

`ConcurrentTLru` functions the same as `ConcurrentLru`, but entries also expire after a fixed duration since an entry's creation or most recent replacement. This can be used to remove stale items. If the values generated for each key can change over time, `ConcurrentTLru` is eventually consistent, where the inconsistency window = time to live (TTL).

```csharp
var lru = new ConcurrentLruBuilder<int, SomeItem>()
    .WithCapacity(666)
    .WithExpireAfterWrite(TimeSpan.FromMinutes(5))
    .Build();

var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
```
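
To observe expiry, a short TTL makes the behavior visible. This is a minimal sketch using only the builder API shown above (the exact timing values are illustrative, not prescriptive); after the window elapses, the next `GetOrAdd` runs the value factory again:

```csharp
var shortLived = new ConcurrentLruBuilder<int, SomeItem>()
    .WithCapacity(666)
    .WithExpireAfterWrite(TimeSpan.FromMilliseconds(50))
    .Build();

var first = shortLived.GetOrAdd(1, (k) => new SomeItem(k));

// wait for the entry to expire
Thread.Sleep(100);

// the expired entry is discarded and the factory creates a fresh value
var second = shortLived.GetOrAdd(1, (k) => new SomeItem(k));
```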

## Caching IDisposable values

It can be useful to combine object pooling and caching to reduce allocations, using IDisposable to return objects to the pool. All cache classes in BitFaster.Caching own the lifetime of cached values, and will automatically dispose values when they are evicted.

To avoid races using objects after they have been disposed by the cache, use `IScopedCache`, which wraps values in `Scoped<T>`. The call to `ScopedGetOrAdd` creates a `Lifetime` that guarantees the scoped object will not be disposed until the lifetime is disposed. The scoped cache is thread-safe, and guarantees correct disposal for concurrent lifetimes.

```csharp
var lru = new ConcurrentLruBuilder<int, Disposable>()
    .AsScopedCache()
    .WithCapacity(666)
    .Build();

var valueFactory = new SomeDisposableValueFactory();

using (var lifetime = lru.ScopedGetOrAdd(1, valueFactory.Create))
{
    // the value is guaranteed to be alive until the lifetime is disposed
    var value = lifetime.Value;
}

class SomeDisposableValueFactory
{
    public Scoped<Disposable> Create(int key)
    {
        // Disposable is a placeholder IDisposable type
        return new Scoped<Disposable>(new Disposable());
    }
}
```

## Caching Singletons by key

`SingletonCache` enables mapping every key to a single instance of a value, and keeping the value alive only while it is in use. This is useful when the total number of keys is large, but few will be in use at any moment, and removing an item while it is in use would result in an invalid program state.

The example below shows how to implement exclusive Url access using a lock object per Url.

```csharp
// Url is a placeholder key type
var urlLocks = new SingletonCache<Url, object>();

Url url = new Url("https://foo.com");

using (var lifetime = urlLocks.Acquire(url))
{
    lock (lifetime.Value)
    {
        // exclusive url access
    }
}
```

# Performance

*DISCLAIMER: Always measure performance in the context of your application. The results provided here are intended as a guide.*