Commit graph

14 commits

Author SHA1 Message Date
Sargun Dhillon df91803297 Add TrackingSet method
This method reduces the likelihood of a race condition where you add a
(tracked) item to the cache, but the item you end up tracking isn't the
item you thought it was.
2020-08-13 10:43:38 -07:00
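
A rough sketch of the flow this enables, assuming the v2 ccache module (github.com/karlseguin/ccache/v2) with Configure().Track(), Set, TrackingGet and Release, and a TrackingSet signature that mirrors Set; treat the exact names and paths as assumptions and check the package docs:

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v2"
	)

	func main() {
		cache := ccache.New(ccache.Configure().Track())

		// Racy: between Set and TrackingGet another goroutine may replace
		// or prune "user:1", so the tracked item may not be the one that
		// was just stored.
		cache.Set("user:1", "alice", time.Minute)
		maybe := cache.TrackingGet("user:1")
		defer maybe.Release()

		// TrackingSet stores the value and returns it already tracked, so
		// the caller is guaranteed to be holding the item it just added.
		item := cache.TrackingSet("user:1", "alice", time.Minute)
		defer item.Release()
		fmt.Println(item.Value())
	}
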
Andrew Krasichkov 8d8b062716 fixed bucket tests 2019-02-14 15:54:01 +03:00
Karl Seguin 6df1e24ae3 2 changes:
1 -
Previously, we determined if an item should be promoted in the main getter
thread. This required that we protect the item.promotions variable, as both
the getter and the worker were concurrently accessing it. This change pushes
the conditional promotion to the worker (from the getter's point of view, items
are always promoted). Since only the worker ever accesses .promotions, we no
longer need to protect access to it.

2 -
The total size of the cache was being maintained by both the worker thread
and the calling code. This required that we protect access to cache.size. Now,
only the worker ever changes the size. While this simplifies much of the code,
it means that we can't easily replace an item in place (whether via Set or
Replace). A replacement now involves creating a new object and deleting the old
one (using the existing deletables and promotables infrastructure). The only
noticeable impact from this change is that, despite previous documentation,
Replace WILL cause the item to be promoted (but it still only does so if the
item exists, and it still doesn't extend the original TTL).
2014-12-28 11:11:32 +07:00
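
An illustrative sketch of the pattern this commit describes, not ccache's actual source; the types and names below are invented for the example. A single worker goroutine owns both the promotion decision and the size total, so neither item.promotions nor cache.size needs a lock:

	package main

	import (
		"container/list"
		"fmt"
	)

	type item struct {
		key        string
		size       int64
		promotions int32
		element    *list.Element
	}

	type cache struct {
		size        int64      // only ever touched by the worker
		lru         *list.List // most recently used at the front
		promotables chan *item
		deletables  chan *item
	}

	func newCache() *cache {
		c := &cache{
			lru:         list.New(),
			promotables: make(chan *item, 1024),
			deletables:  make(chan *item, 1024),
		}
		go c.worker()
		return c
	}

	// From the caller's point of view items are always "promoted": getters
	// and setters just hand the item to the worker and never decide anything.
	func (c *cache) promote(i *item) { c.promotables <- i }
	func (c *cache) delete(i *item)  { c.deletables <- i }

	func (c *cache) worker() {
		for {
			select {
			case i := <-c.promotables:
				if i.element == nil { // a brand new item
					c.size += i.size
					i.element = c.lru.PushFront(i)
					continue
				}
				i.promotions++ // only the worker reads or writes this
				if i.promotions >= 5 {
					i.promotions = 0
					c.lru.MoveToFront(i.element)
				}
			case i := <-c.deletables:
				if i.element != nil {
					c.size -= i.size
					c.lru.Remove(i.element)
				}
			}
		}
	}

	func main() {
		c := newCache()
		c.promote(&item{key: "user:1", size: 1})
		fmt.Println("promotion and size accounting live on one goroutine")
	}
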
Karl Seguin ff8727e847 initial work on tracking cache by item size 2014-11-21 14:39:25 +07:00
Karl Seguin df2f8eb082 Added documentation.
Bucket and LayeredBucket are no longer exported.
2014-11-14 07:56:24 +07:00
Karl Seguin 3e4d668990 blank identifier for tests 2014-11-14 07:41:22 +07:00
Karl Seguin 2131ac5052 better tests 2014-11-13 22:26:05 +07:00
Karl Seguin cc0395a391 added replace method 2014-11-13 22:20:12 +07:00
Karl Seguin 967200d7bc switched from gspec -> expect 2014-10-25 07:46:18 +07:00
Karl Seguin 890bb18dbf The cache can now do reference counting so that the LRU algorithm is aware of
long-lived objects and won't clean them up. Oftentimes, the value returned
from a cache hit is short-lived. As a silly example:

	func GetUser(response http.ResponseWriter) {
		user := cache.Get("user:1")
		response.Write(serialize(user))
	}

It's fine if the cache's GC cleans up "user:1" while the user variable still holds a
reference to the object: the cache's reference is removed, and the real GC will clean
it up at some point after the user variable falls out of scope.

However, what if user is long-lived? Perhaps held as a reference by another
cached object? Normally (without this commit), the next time you call
cache.Get("user:1") you'll get a miss and will need to refetch the object, even
though the original user object is still somewhere in memory; you've simply lost
the cache's reference to it.

By enabling the Track() configuration flag and calling TrackingGet() (instead
of Get), the cache will track that the object is in use and won't GC it, even
under great memory pressure (what's the point? something else is holding on
to it anyway). Calling item.Release() decrements the reference count. When
the count reaches 0, the item can be pruned from the cache.

The returned value is a TrackedItem which exposes:

- Value() interface{} (to get the actual cached value)
- Release() to release the item back to the cache
2014-02-28 20:10:42 +08:00
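
A hedged usage sketch of the flow described above, written against today's ccache API; the NilTracked miss sentinel and the /v2 import path reflect the current package and may not match the code as of this commit:

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v2"
	)

	func main() {
		// Track() must be enabled for TrackingGet to pin items.
		cache := ccache.New(ccache.Configure().Track())
		cache.Set("user:1", "alice", time.Minute)

		// The returned TrackedItem is reference counted: the LRU pruner
		// skips it until every holder has called Release().
		item := cache.TrackingGet("user:1")
		if item == ccache.NilTracked {
			fmt.Println("miss")
			return
		}
		fmt.Println(item.Value()) // the cached value
		item.Release()            // count reaches 0; the item may now be pruned
	}
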
Karl Seguin 6720535fab Checking whether an item should be promoted because it's new is the uncommon
case, which we can optimize out by being more explicit when we create
a new item.
2013-11-13 13:46:41 +08:00
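
A tiny illustration of the idea, with invented names rather than the actual diff: the "is this new?" check moves out of the shared promotion path and into the creation path.

	package main

	import (
		"container/list"
		"fmt"
	)

	type lru struct{ l *list.List }

	// Uncommon case: handled explicitly and only once, when the item is created.
	func (c *lru) add(v interface{}) *list.Element { return c.l.PushFront(v) }

	// Common case: promoting an existing item needs no "is it new?" branch.
	func (c *lru) promote(e *list.Element) { c.l.MoveToFront(e) }

	func main() {
		c := &lru{l: list.New()}
		e := c.add("user:1")
		c.promote(e)
		fmt.Println(c.l.Front().Value)
	}
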
Karl Seguin 751266c34a Remove the Value interface; the cache now works against interface{}, with the
expiry specified on Set.

Get no longer returns expired items

Items can now be deleted
2013-10-30 20:18:51 +08:00
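
A small usage sketch of the API shape this describes, written against the present-day ccache package (today Get returns an *Item whose Expired() the caller checks, rather than treating expiry as a plain miss); names beyond those in the message are assumptions:

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache/v2"
	)

	func main() {
		cache := ccache.New(ccache.Configure())

		// The TTL is supplied per Set call; the value is a plain interface{}.
		cache.Set("user:1", "alice", 10*time.Millisecond)

		if item := cache.Get("user:1"); item != nil && !item.Expired() {
			fmt.Println("hit:", item.Value())
		}

		time.Sleep(20 * time.Millisecond)

		// Once the TTL passes, the entry is treated as expired.
		if item := cache.Get("user:1"); item == nil || item.Expired() {
			fmt.Println("expired")
		}

		// Items can be deleted explicitly.
		cache.Delete("user:1")
	}
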
Karl Seguin 7a8102e166 clean up TestValue 2013-10-19 20:45:30 +08:00
Karl Seguin b3cbd19186 initial tests 2013-10-19 20:36:33 +08:00