Commit graph

22 commits

Author SHA1 Message Date
Karl Seguin
3385784411 Add cache.ItemCount() int64 API 2019-01-26 12:33:50 +07:00
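A minimal sketch of the new counter in use, assuming the pre-generics ccache API (the import path and exact return type may differ between releases):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed; newer releases live under a /v2 suffix
)

func main() {
	cache := ccache.New(ccache.Configure())
	cache.Set("user:1", "alice", time.Minute)
	cache.Set("user:2", "bob", time.Minute)

	// ItemCount reports how many entries the cache currently holds.
	fmt.Println("items:", cache.ItemCount())
}
```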
Karl Seguin
692cd618b2 guard access to item.promotions in LayeredCache, which was applied to Cache in 557d56ec6f 2018-12-27 22:54:50 +07:00
Alexej Kubarev
7421e2d7b4
Adding support for OnDelete callback function
OnDelete will receive an item that is being processed for deletion to support calling cleanup function specific to the item stored
2018-07-16 18:20:17 +02:00
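A sketch of how the callback might be wired up, assuming the chaining Configure().OnDelete style used elsewhere in ccache's configuration (names and timing are illustrative):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	// OnDelete receives the item being removed, so per-item cleanup
	// (closing a file, releasing a handle, ...) can run when it leaves the cache.
	config := ccache.Configure().OnDelete(func(item *ccache.Item) {
		fmt.Println("cleaning up:", item.Value())
	})
	cache := ccache.New(config)

	cache.Set("conn", "db-handle-1", time.Minute)
	cache.Delete("conn") // removal is processed by the cache's worker goroutine

	time.Sleep(50 * time.Millisecond) // give the async worker time to fire the callback
}
```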
Anthony Romano
c69270ce08 layeredcache: add Stop() and fix races in tests
The worker goroutine running concurrently with the tests would cause data race
errors when running with -race enabled.
2017-02-13 15:39:24 -08:00
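A short sketch of shutting the layered cache down cleanly, assuming the Layered constructor name (Stop itself is the method this commit adds):

```go
package main

import (
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.Layered(ccache.Configure())
	// Stop shuts down the cache's background worker goroutine, so tests run
	// with -race don't have a stray goroutine outliving them.
	defer cache.Stop()

	cache.Set("users", "1", "alice", time.Minute)
	_ = cache.Get("users", "1")
}
```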
Jens Deppe
a451d7262c Integrate feedback and upstream fixes
- Ensure correct locking in GetOrCreateSecondaryCache
- Fetch now returns a *Item
2016-11-01 23:53:22 -07:00
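A rough example of the accessor this commit locks correctly, assuming the secondary cache exposes Get/Set in the same shape as the flat cache:

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.Layered(ccache.Configure())
	cache.Set("user:1", "profile", "alice", time.Minute)

	// GetOrCreateSecondaryCache exposes the bucket under one primary key so it
	// can be read and written like a flat cache.
	secondary := cache.GetOrCreateSecondaryCache("user:1")
	if item := secondary.Get("profile"); item != nil {
		fmt.Println(item.Value())
	}
}
```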
Jens Deppe
d2c2442186 Merge remote-tracking branch 'seguin/master' 2016-11-01 20:33:44 -07:00
Karl Seguin
8adbb5637b return *Item from layered cache fetch instead of interface{} 2016-11-02 09:34:09 +07:00
Jens Deppe
c1634a4d00 Add concept of a SecondaryCache which exposes the secondary part of a LayeredCache 2016-11-01 09:01:39 -07:00
Matthew Dale
162d4e27ca Use nanosecond-resolution TTL instead of second-resolution. 2016-07-07 15:32:49 -07:00
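A small sketch of what the finer resolution enables, assuming Item exposes an Expired() check (sub-second TTLs are the point of the commit; the helper name is an assumption):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.New(ccache.Configure())

	// With nanosecond-resolution expiry, sub-second TTLs behave as expected
	// instead of being rounded to whole seconds.
	cache.Set("token", "abc123", 250*time.Millisecond)

	time.Sleep(300 * time.Millisecond)
	if item := cache.Get("token"); item == nil || item.Expired() {
		fmt.Println("token expired")
	}
}
```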
Karl Seguin
f9c7f14b7b Fetch's API wasn't usable. It returned different value types based on whether
the fetch was needed or not. It now behaves consistently (with itself and with
Get), returning an *Item.
2015-01-07 08:09:39 +07:00
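A hedged sketch of the consistent Fetch shape described above; the exact callback signature (returning (interface{}, error)) is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.New(ccache.Configure())

	// Fetch returns (*Item, error) whether the value came from the cache or
	// from the miss callback, matching Get's return type.
	item, err := cache.Fetch("config", time.Minute, func() (interface{}, error) {
		return "loaded-from-source", nil
	})
	if err == nil {
		fmt.Println(item.Value())
	}
}
```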
Karl Seguin
6df1e24ae3 2 changes:
1 -
Previously, we determined if an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, as both
the getter and the worker were concurrently accessing it. This change pushes
the conditional promotion to the worker (from the getter's point of view, items
are always promoted). Since only the worker ever accesses .promotions, we no
longer need to protect access to it.

2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item (replacement either via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotables infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if it
exists and it still doesn't extend the original TTL).
2014-12-28 11:11:32 +07:00
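An illustrative sketch of the single-worker pattern this message describes, not the library's actual code: callers only write to channels, and one worker goroutine owns both the promotion list and the running size, so neither needs a lock. All type and field names here are made up for the example.

```go
package main

import (
	"container/list"
	"fmt"
	"time"
)

type entry struct {
	key  string
	size int64
	node *list.Element // nil until the worker has seen the entry
}

type worker struct {
	lru         *list.List
	size        int64
	promotables chan *entry
	deletables  chan *entry
}

func (w *worker) run() {
	for {
		select {
		case e := <-w.promotables:
			if e.node == nil { // new item: track it and add its size
				e.node = w.lru.PushFront(e)
				w.size += e.size
			} else { // existing item: promotion is just a move-to-front
				w.lru.MoveToFront(e.node)
			}
		case e := <-w.deletables:
			if e.node != nil {
				w.lru.Remove(e.node)
				w.size -= e.size
			}
		}
	}
}

func main() {
	w := &worker{
		lru:         list.New(),
		promotables: make(chan *entry, 16),
		deletables:  make(chan *entry, 16),
	}
	go w.run()

	w.promotables <- &entry{key: "a", size: 3} // callers never touch w.size directly
	time.Sleep(10 * time.Millisecond)          // let the worker drain the channel
	fmt.Println("promotion queued and processed by the worker")
}
```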
Karl Seguin
78e597cdae replace is size-aware 2014-11-21 15:45:11 +07:00
Karl Seguin
41ccfbb39a renamed MaxItems to MaxSize, updated readme 2014-11-21 15:06:27 +07:00
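A minimal configuration sketch using the renamed option; ItemsToPrune is an assumed companion setting, not part of this commit:

```go
package main

import (
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	// MaxSize (formerly MaxItems) caps the cache's total tracked size;
	// ItemsToPrune controls how many entries are evicted once the cap is reached.
	cache := ccache.New(ccache.Configure().MaxSize(5000).ItemsToPrune(500))
	cache.Set("k", "v", time.Minute)
}
```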
Karl Seguin
c810d4feb3 test + fix for actual size function 2014-11-21 14:59:04 +07:00
Karl Seguin
ff8727e847 initial work on tracking cache by item size 2014-11-21 14:39:25 +07:00
Karl Seguin
44cdb043d1 Move size tracking to a variable, away from simply using the length of the list.
This paves the way for more complex size tracking.
2014-11-20 07:03:59 +07:00
Karl Seguin
df2f8eb082 Added documentation.
Bucket and LayeredBucket are no longer exported.
2014-11-14 07:56:24 +07:00
Karl Seguin
7316f99bd9 replace on layeredcache 2014-11-13 22:23:52 +07:00
Karl Seguin
5e131cc17c Buckets must be a power of 2. Move from % to & for determining the bucket. 2014-11-02 18:09:49 +07:00
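A standalone sketch of the % to & trick; the hash function here is illustrative, not necessarily the one ccache uses:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bucketIndex picks a bucket with a bitmask: when the bucket count is a power
// of two, hash & (n-1) selects the same bucket as hash % n, without the cost
// of a modulo on every lookup.
func bucketIndex(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() & (n - 1)
}

func main() {
	const buckets = 16 // must be a power of 2 for the mask to be valid
	fmt.Println(bucketIndex("user:1", buckets))
	fmt.Println(bucketIndex("user:2", buckets))
}
```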
Karl Seguin
624c03cd3e delete and deleteall return boolean to indicate if delete found the item 2014-10-27 08:30:48 +07:00
Karl Seguin
77765a3f11 Get now returns the *Item rather than the item's value. Get no longer actively
purges stale items.

Combining these two changes, CCache can now be used to implement both of
Varnish's grace and saint modes.
2014-10-25 17:15:47 +07:00
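A hedged sketch of the grace-mode pattern this enables, assuming Item exposes Expired() (Get returning the *Item without purging is from the commit itself):

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.New(ccache.Configure())
	cache.Set("page", "<html>cached</html>", 100*time.Millisecond)
	time.Sleep(150 * time.Millisecond)

	// Because Get returns the *Item without purging it, an expired entry can
	// still be served (grace mode) while a refresh happens in the background.
	if item := cache.Get("page"); item != nil {
		if item.Expired() {
			fmt.Println("serving stale copy:", item.Value())
			// kick off an async refresh here; saint mode would also keep
			// serving the stale copy if that refresh fails
		} else {
			fmt.Println("serving fresh copy:", item.Value())
		}
	}
}
```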
Karl Seguin
0c7492b382 Added layered cache 2014-10-25 12:19:14 +07:00
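A brief sketch of the layered (primary/secondary key) usage introduced here, assuming the Layered constructor and DeleteAll names:

```go
package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache" // import path assumed
)

func main() {
	cache := ccache.Layered(ccache.Configure())

	// A layered cache groups entries under a primary key so related
	// secondary entries can be dropped together.
	cache.Set("user:1", "profile", "alice", time.Minute)
	cache.Set("user:1", "sessions", []string{"s1", "s2"}, time.Minute)

	if item := cache.Get("user:1", "profile"); item != nil {
		fmt.Println(item.Value())
	}

	// DeleteAll removes every secondary entry under the primary key.
	cache.DeleteAll("user:1")
}
```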