Commit graph

14 commits

Author SHA1 Message Date
Karl Seguin 41ccfbb39a renamed MaxItems to MaxSize, updated readme 2014-11-21 15:06:27 +07:00
Karl Seguin ff8727e847 initial work on tracking cache by item size 2014-11-21 14:39:25 +07:00
Karl Seguin df2f8eb082 Added documentation.
Bucket and LayeredBucket are no longer exported.
2014-11-14 07:56:24 +07:00
Karl Seguin 5e131cc17c Buckets must be a power of 2. Move from % to & for determining the bucket. 2014-11-02 18:09:49 +07:00
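
For illustration, a minimal sketch of the modulo-to-mask trick this commit describes (bucketCount and bucketIndex are hypothetical names, not ccache's actual internals): when the bucket count is a power of 2, a bitwise AND selects the same bucket as the modulo, without the integer division.

	// bucketCount must be a power of 2 for the mask below to be valid.
	const bucketCount = 16

	// bucketIndex picks a bucket for a hashed key. For a power-of-2 count,
	// hash & (bucketCount - 1) returns the same result as hash % bucketCount.
	func bucketIndex(hash uint32) uint32 {
		return hash & (bucketCount - 1)
	}
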
Karl Seguin 967200d7bc switched from gspec -> expect 2014-10-25 07:46:18 +07:00
Karl Seguin a81a0f665c Changed some config defaults.
Added documentation
2014-10-14 13:43:34 +07:00
Karl Seguin d9d6e2b00e This is a sad commit.
How do you decide you need to purge your cache? Relying on runtime.ReadMemStats
sucks for two reasons. First, it's a stop-the-world call, which is pretty bad
in general and downright stupid for a supposedly concurrency-focused package.
Second, it only tells you the total memory usage, but most of the time you really
want to limit the amount of memory the cache itself uses.

Since there's no great way to determine the size of an object, that means users
need to supply the size. One way is to make it so that any cached item satisfies
a simple interface which exposes a Size() method. With this, we can track how
much memory a set adds and a delete releases. But it's hard for consumers to
know how much memory they're taking when storing complex objects (the entire point
of an in-process cache is to avoid having to serialize the data). Since any Size()
is bound to be a rough guess, we can simplify the entire thing by evicting based
on # of items.
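
As a sketch of that rejected Size() idea (the Sizer name is an illustrative assumption, not a committed API):

	// Sizer is what cached values would implement so the cache could track
	// the bytes a set adds and a delete releases. (Hypothetical sketch.)
	type Sizer interface {
		Size() int64 // the caller's best guess at the value's memory footprint
	}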

This works really badly when items vary greatly in size (an HTTP cache), but in
a lot of other cases it works great. Furthermore, even for an HTTP cache, given
enough values, it should average out in most cases.

Whatever. This improves performance and should improve the usability of the cache.
It is a pretty big breaking change though.
2014-04-08 23:36:28 +08:00
Karl Seguin 4456358470 config documentation 2014-03-01 00:44:39 +08:00
Karl Seguin 890bb18dbf The cache can now do reference counting so that the LRU algorithm is aware of
long-lived objects and won't clean them up. Oftentimes, the value returned
from a cache hit is short-lived. As a silly example:

	func GetUser(response http.ResponseWriter) {
		user := cache.Get("user:1")
		response.Write(serialize(user))
	}

It's fine if the cache's GC cleans up "user:1" while the user variable still holds a reference to the
object; the cache's reference is removed and the real GC will clean it up
at some point after the user variable falls out of scope.

However, what if user is long-lived? Possibly held as a reference by another
cached object? Normally (without this commit), the next time you call
cache.Get("user:1") you'll get a miss and will need to refetch the object, even
though the original user object is still somewhere in memory - you just lost
your reference to it from the cache.

By enabling the Track() configuration flag and calling TrackingGet() (instead
of Get), the cache will track that the object is in use and won't GC it, even
under heavy memory pressure (what's the point? something else is holding on
to it anyway). Calling item.Release() decrements the reference count.
When the count reaches 0, the item can be pruned from the cache.

The returned value is a TrackedItem which exposes:

- Value() interface{} (to get the actual cached value)
- Release() to release the item back to the cache
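
A usage sketch of the flow described above, mirroring the earlier GetUser example (the constructor and config wiring are assumptions; only Track(), TrackingGet(), Value() and Release() come from this message):

	func GetUserTracked(response http.ResponseWriter) {
		item := cache.TrackingGet("user:1") // requires the Track() config flag
		defer item.Release()                // drops the count; at 0 the item can be pruned
		response.Write(serialize(item.Value()))
	}
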
2014-02-28 20:10:42 +08:00
Karl Seguin 8a0ef8ae17 fixed file permissions 2013-11-13 13:50:37 +08:00
Karl Seguin 751266c34a Remove Value interface, cache now works against interface{} with the
expiry specified on a Set.

Get no longer returns expired items

Items can now be deleted
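
A small usage sketch of the new shape (exact signatures are assumptions based on the message above):

	// Expiry is now given per item, on Set; Get skips expired entries.
	cache.Set("user:1", user, time.Minute*10)
	value := cache.Get("user:1") // a miss once the item has expired
	cache.Delete("user:1")       // items can now be removed explicitly
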
2013-10-30 20:18:51 +08:00
Karl Seguin 36e5fae491 use integer counter for promotions instead of time 2013-10-21 19:37:35 +08:00
Karl Seguin b3cbd19186 initial tests 2013-10-19 20:36:33 +08:00
Karl Seguin 97bc65dc6a first commit 2013-10-19 08:56:28 +08:00