Commit 8e82adf

Update README.md

simplify

1 parent a98f4a3

File tree: 1 file changed

README.md: 19 additions & 27 deletions
````diff
@@ -7,24 +7,15 @@ High performance, thread-safe in-memory caching primitives for .NET.
 # Installing via NuGet
 `Install-Package BitFaster.Caching`
 
-# Overview
-
-| Class | Description |
-|:-------|:---------|
-| [ConcurrentLru](https://github.com/bitfaster/BitFaster.Caching/wiki/ConcurrentLru) | Represents a thread-safe bounded size pseudo LRU.<br><br>A drop-in replacement for ConcurrentDictionary, but with bounded size. Maintains pseudo order, with a better hit rate than a pure LRU and not prone to lock contention. |
-| [ConcurrentTLru](https://github.com/bitfaster/BitFaster.Caching/wiki/ConcurrentTLru) | Represents a thread-safe bounded size pseudo TLRU; items have a TTL.<br><br>As ConcurrentLru, but with a [time aware least recently used (TLRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU)) eviction policy. If the values generated for each key can change over time, ConcurrentTLru is eventually consistent, where the inconsistency window = TTL. |
-| SingletonCache | Represents a thread-safe cache of key value pairs, which guarantees a single instance of each value. Values are discarded immediately when no longer in use to conserve memory. |
-| Scoped<IDisposable> | Represents a thread-safe wrapper for storing IDisposable objects in a cache that may dispose and invalidate them. The scope keeps the object alive until all callers have finished. |
-
 # Quick Start
 
 Please refer to the [wiki](https://github.com/bitfaster/BitFaster.Caching/wiki) for more detailed documentation.
 
-## ConcurrentLru/ConcurrentTLru
+## ConcurrentLru
 
-`ConcurrentLru` and `ConcurrentTLru` are intended as a drop-in replacement for `ConcurrentDictionary`, and a much faster alternative to the `System.Runtime.Caching.MemoryCache` family of classes (e.g. `HttpRuntime.Cache`, `System.Web.Caching`, etc.).
+`ConcurrentLru` is intended as a lightweight drop-in replacement for `ConcurrentDictionary`, and a faster alternative to the `System.Runtime.Caching.MemoryCache` family of classes (e.g. `HttpRuntime.Cache`, `System.Web.Caching`, etc.).
 
-Choose a capacity and use just like ConcurrentDictionary:
+Choose a capacity and use just like ConcurrentDictionary, but with bounded size:
 
 ```csharp
 int capacity = 666;
````
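The quick-start code block is truncated by the hunk boundary above. Based on the `GetOrAdd` call visible in the next hunk's context, the full snippet likely resembles the following sketch; the `ConcurrentLru<TKey, TValue>(int capacity)` constructor and the `SomeItem` type are assumptions, not confirmed by the visible lines.

```csharp
// Hypothetical completion of the truncated quick-start example.
// Assumes a ConcurrentLru<TKey, TValue>(int capacity) constructor;
// the GetOrAdd signature is taken from elsewhere in this diff.
int capacity = 666;
var lru = new ConcurrentLru<int, SomeItem>(capacity);

// GetOrAdd returns the cached value for the key, invoking the
// factory only when the key is not already present.
var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
```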
````diff
@@ -49,11 +40,24 @@ var lru = new ConcurrentLruBuilder<int, SomeItem>()
 var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
 ```
 
-## Caching IDisposable objects
+## Time based eviction
+
+`ConcurrentTLru` functions the same as `ConcurrentLru`, but entries also expire after a fixed duration since an entry's creation or most recent replacement. This can be used to remove stale items. If the values generated for each key can change over time, `ConcurrentTLru` is eventually consistent, where the inconsistency window = time to live (TTL).
+
+```csharp
+var lru = new ConcurrentLruBuilder<int, SomeItem>()
+    .WithCapacity(666)
+    .WithExpireAfterWrite(TimeSpan.FromMinutes(5))
+    .Build();
+
+var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
+```
+
+## Caching IDisposable values
 
 It can be useful to combine object pooling and caching to reduce allocations, using IDisposable to return objects to the pool. All cache classes in BitFaster.Caching own the lifetime of cached values, and will automatically dispose values when they are evicted.
 
-To avoid races using objects after they have been disposed by the cache, use `IScopedCache which` wraps values in `Scoped<T>`. The call to `ScopedGetOrAdd` creates a `Lifetime` that guarantees the scoped object will not be disposed until the lifetime is disposed. Scoped cache is thread safe, and guarantees correct disposal for concurrent lifetimes.
+To avoid races using objects after they have been disposed by the cache, use `IScopedCache`, which wraps values in `Scoped<T>`. The call to `ScopedGetOrAdd` creates a `Lifetime` that guarantees the scoped object will not be disposed until the lifetime is disposed. Scoped cache is thread safe, and guarantees correct disposal for concurrent lifetimes.
 
 ```csharp
 var lru = new ConcurrentLruBuilder<int, Disposable>()
````
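The scoped-cache example is also cut short by the hunk boundary. A minimal sketch of how the pieces described above might fit together; only `ScopedGetOrAdd`, `Scoped<T>`, and `Lifetime` are named in the text, so the `AsScopedCache()` builder step and the `Value` property on the lifetime are assumptions.

```csharp
// Hypothetical sketch, not the README's full example. Assumes the
// builder exposes AsScopedCache() and that the returned lifetime
// exposes a Value property.
var lru = new ConcurrentLruBuilder<int, Disposable>()
    .AsScopedCache()
    .WithCapacity(666)
    .Build();

using (var lifetime = lru.ScopedGetOrAdd(1, k => new Scoped<Disposable>(new Disposable())))
{
    // Per the text, the cached object cannot be disposed by the
    // cache until this lifetime is disposed, even if the entry is
    // evicted in the meantime.
    var d = lifetime.Value;
}
```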
````diff
@@ -79,7 +83,7 @@ class SomeDisposableValueFactory
 
 ## Caching Singletons by key
 
-`SingletonCache` enables mapping every key to a single instance of a value, and keeping the value alive only while it is in use. This is useful when the total number of keys is large, but few will be in use at any moment.
+`SingletonCache` enables mapping every key to a single instance of a value, and keeping the value alive only while it is in use. This is useful when the total number of keys is large, but few will be in use at any moment, and removing an item while it is in use would result in an invalid program state.
 
 The example below shows how to implement exclusive Url access using a lock object per Url.
 
````
````diff
@@ -100,18 +104,6 @@ using (var lifetime = urlLocks.Acquire(url))
 ```
 
 
-### Why not use MemoryCache?
-
-MemoryCache has these limitations (see [here](https://github.com/bitfaster/BitFaster.Caching/wiki) for more detail):
-
-- No support for atomic adds, which can lead to the [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) failure mode.
-- It's not generic, and therefore boxes value types for both keys and values.
-- System.Runtime.Caching uses string keys, therefore lookups require heap allocations when the native key type is not string.
-- Is not 'scan' resistant: fetching all keys will try to load everything into memory, which is bad.
-- Non-optimal eviction policy. MemoryCache uses a heuristic to estimate memory used, and evicts items using a timer-based background thread. The 'trim' process may remove useful items, and if the timer does not fire fast enough the resulting memory pressure can be problematic (e.g. thrashing, out of memory, increased GC).
-- Does not scale well with concurrent writes.
-- Contains perf counters that can't be disabled.
-
 # Performance
 
 *DISCLAIMER: Always measure performance in the context of your application. The results provided here are intended as a guide.*
````
