A golang LRU Cache for high concurrency
Karl Seguin 890bb18dbf The cache can now do reference counting so that the LRU algorithm is aware of
long-lived objects and won't clean them up. Oftentimes, the value returned
from a cache hit is short-lived. As a silly example:

	func GetUser(response http.ResponseWriter) {
		user := cache.Get("user:1")
		response.Write(serialize(user))
	}

It's fine if the cache's GC cleans up "user:1" while the user variable has a reference to the
object: the cache's reference is removed and the real GC will clean it up
at some point after the user variable falls out of scope.

However, what if user is long-lived? Possibly stored as a reference in another
cached object? Normally (without this commit) the next time you call
cache.Get("user:1"), you'll get a miss and will need to refetch the object, even
though the original user object is still somewhere in memory; you just lost
your reference to it from the cache.

By enabling the Track() configuration flag, and calling TrackingGet() (instead
of Get), the cache will track that the object is in-use and won't GC it, even
under great memory pressure (what's the point? something else is holding on
to it anyway). Calling item.Release() will decrement the number of references.
When the count is 0, the item can be pruned from the cache.

The returned value is a TrackedItem which exposes:

- Value() interface{} (to get the actual cached value)
- Release() to release the item back to the cache
2014-02-28 20:10:42 +08:00
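The reference counting described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the type and method names here are assumptions, not the library's actual internals): tracking a hit increments a counter, Release() decrements it, and the pruner only evicts items whose count has returned to zero.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// trackedItem sketches the idea behind TrackedItem: a cached value
// plus a reference count that guards it against eviction.
type trackedItem struct {
	value    interface{}
	refCount int32
}

// track is what a TrackingGet-style hit would do: mark the item in-use.
func (i *trackedItem) track() { atomic.AddInt32(&i.refCount, 1) }

// Release decrements the reference count, signalling the caller is done.
func (i *trackedItem) Release() { atomic.AddInt32(&i.refCount, -1) }

// canPrune reports whether the LRU is allowed to evict this item.
func (i *trackedItem) canPrune() bool { return atomic.LoadInt32(&i.refCount) == 0 }

func main() {
	item := &trackedItem{value: "user:1"}
	item.track()                 // a tracking get holds a reference
	fmt.Println(item.canPrune()) // false: still referenced, GC must skip it
	item.Release()
	fmt.Println(item.canPrune()) // true: safe to prune
}
```

The atomic operations matter because hits and the GC worker run on different goroutines; a plain int would race.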

CCache

CCache is an LRU Cache, written in Go, focused on supporting high concurrency.

Lock contention on the list is reduced by:

  • Introducing a window which limits how frequently an item can be promoted
  • Using a buffered channel to queue promotions for a single worker
  • Garbage collecting within the same thread as the worker
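The second point above, queuing promotions through a buffered channel for a single worker, can be sketched as follows. This is an assumed shape, not CCache's actual code: a cache hit pushes the key onto a buffered channel instead of taking a lock on the LRU list, and one worker goroutine drains the channel and is the only goroutine that ever mutates the list.

```go
package main

import "fmt"

// startWorker launches the single goroutine that owns the LRU ordering.
// Because only this goroutine touches the list, no lock is needed on it;
// the done channel returns the final promotion order for demonstration.
func startWorker(promotions <-chan string, done chan<- []string) {
	var order []string
	go func() {
		for key := range promotions {
			// In a real cache this would move the item to the front
			// of the doubly-linked LRU list (and GC in the same loop).
			order = append(order, key)
		}
		done <- order
	}()
}

func main() {
	// Buffered: a Get only enqueues, so readers rarely block.
	promotions := make(chan string, 1024)
	done := make(chan []string)
	startWorker(promotions, done)

	for _, key := range []string{"a", "b", "a"} {
		promotions <- key // what a cache hit does instead of locking the list
	}
	close(promotions)
	fmt.Println(<-done)
}
```

The buffer size is a trade-off: large enough that readers almost never block on a busy worker, small enough that a stalled worker doesn't queue unbounded promotions.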