Commit graph

31 commits

Author SHA1 Message Date
Karl Seguin
65573a0cb6 helper makefile 2014-11-13 22:20:23 +07:00
Karl Seguin
cc0395a391 added replace method 2014-11-13 22:20:12 +07:00
Karl Seguin
7e08960075 update readme 2014-11-04 17:24:07 +07:00
Karl Seguin
5e131cc17c Buckets must be a power of 2. Move from % to & for determining the bucket. 2014-11-02 18:09:49 +07:00
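The % -> & change above, as a minimal standalone sketch (illustrative, not the repo's actual code): when the bucket count is a power of 2, count-1 is an all-ones mask over the low bits, so masking the hash selects a bucket without a division.

	package main

	import (
		"fmt"
		"hash/fnv"
	)

	const bucketCount = 16 // must be a power of 2 for the mask below

	// bucketIndex picks a bucket with a bitwise AND instead of a modulo.
	// For power-of-2 counts, h & (bucketCount-1) == h % bucketCount.
	func bucketIndex(key string) uint32 {
		h := fnv.New32a()
		h.Write([]byte(key))
		return h.Sum32() & (bucketCount - 1)
	}

	func main() {
		fmt.Println(bucketIndex("user:1"), bucketIndex("user:2"))
	}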
Karl Seguin
624c03cd3e delete and deleteall return a boolean to indicate whether the delete found the item 2014-10-27 08:30:48 +07:00
Karl Seguin
0fddc964ec added extend 2014-10-27 08:27:26 +07:00
Karl Seguin
77765a3f11 Get now returns the *Item rather than the item's value. Get no longer actively
purges stale items.

Combining these two changes, CCache can now be used to implement both
Varnish's grace mode and its saint mode.
2014-10-25 17:15:47 +07:00
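A hedged sketch of grace mode built on these two changes, assuming the API the message describes (Get returning the *Item, with Expired() and Value() on it); fetchUser stands in for the real, slow lookup:

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache"
	)

	var cache = ccache.New(ccache.Configure())

	func fetchUser(id string) string { return "data for " + id }

	// GetUser serves a stale value while refreshing in the background,
	// which is possible because Get hands back the *Item (stale or not)
	// instead of purging it.
	func GetUser(id string) string {
		item := cache.Get(id)
		if item == nil {
			// True miss: fetch synchronously.
			value := fetchUser(id)
			cache.Set(id, value, time.Minute)
			return value
		}
		if item.Expired() {
			// Stale hit: serve the old value now, refresh asynchronously.
			go func() { cache.Set(id, fetchUser(id), time.Minute) }()
		}
		return item.Value().(string)
	}

	func main() { fmt.Println(GetUser("user:1")) }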
Karl Seguin
3a00ce8f0a fixed possible nil panic when item is deleted immediately after being added 2014-10-25 12:24:52 +07:00
Karl Seguin
b0e3fca0f6 fixed formatting 2014-10-25 12:21:10 +07:00
Karl Seguin
0c7492b382 Added layered cache 2014-10-25 12:19:14 +07:00
Karl Seguin
13c50b1ff5 Remove the item's mutex. doPromote can only happen in a single goroutine.
Bucket.set has its own lock, which prevents an item from being accessed
by multiple goroutines.
2014-10-25 09:44:22 +07:00
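The pattern this describes, sketched abstractly (names illustrative, not the repo's code): promotions funnel through one channel consumed by a single worker goroutine, so an item needs no lock of its own for promotion bookkeeping.

	package main

	import "fmt"

	type Item struct {
		key        string
		promotions int32
	}

	func main() {
		promotables := make(chan *Item, 1024)
		done := make(chan struct{})

		go func() { // the only goroutine that touches promotion state
			for item := range promotables {
				item.promotions++ // safe without a mutex: single writer
				fmt.Println("promoted", item.key)
			}
			close(done)
		}()

		promotables <- &Item{key: "user:1"}
		promotables <- &Item{key: "user:2"}
		close(promotables)
		<-done
	}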
Karl Seguin
c626aca486 item's RWMutex -> Mutex (which is how it was being used)
item's expires is now an int, rather than a time.Time

Combined, these two changes save around 30 bytes per item.
2014-10-25 08:35:52 +07:00
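A sketch of the second change (illustrative; unix nanoseconds is an assumption here): a bare int64 is smaller than a time.Time struct, and checking expiry becomes a plain integer compare.

	package main

	import (
		"fmt"
		"time"
	)

	type Item struct {
		value   interface{}
		expires int64 // unix nanoseconds; leaner than a time.Time
	}

	func (i *Item) Expired() bool {
		return i.expires < time.Now().UnixNano()
	}

	func main() {
		item := &Item{value: "x", expires: time.Now().Add(time.Minute).UnixNano()}
		fmt.Println(item.Expired()) // false for the next minute
	}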
Karl Seguin
967200d7bc switched from gspec -> expect 2014-10-25 07:46:18 +07:00
Karl Seguin
a81a0f665c Changed some config defaults.
Added documentation
2014-10-14 13:43:34 +07:00
Karl Seguin
d9d6e2b00e This is a sad commit.
How do you decide you need to purge your cache? Relying on runtime.ReadMemStats
sucks for two reasons. First, it's a stop-the-world call, which is pretty bad
in general and downright stupid for a supposedly concurrency-focused package.
Second, it only tells you the total memory usage, but most of the time you
really want to limit the amount of memory the cache itself uses.

Since there's no great way to determine the size of an object, users need to
supply the size themselves. One way is to make any cached item satisfy a simple
interface that exposes a Size() method. With this, we can track how much memory
a set adds and a delete releases. But it's hard for consumers to know how much
memory they're taking when storing complex objects (the entire point of an
in-process cache is to avoid having to serialize the data). Since any Size()
is bound to be a rough guess, we can simplify the entire thing by evicting
based on # of items.

This works really badly when items vary greatly in size (an HTTP cache), but
in a lot of other cases it works great. Furthermore, even for an HTTP cache,
given enough values, it should average out in most cases.

Whatever. This improves performance and should improve the usability of the
cache. It is a pretty big breaking change though.
2014-04-08 23:36:28 +08:00
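Count-based eviction in miniature (an illustrative stand-in, not the repo's implementation): every item counts as 1, and anything past the limit falls off the cold end of the LRU list.

	package main

	import (
		"container/list"
		"fmt"
	)

	type entry struct {
		key   string
		value interface{}
	}

	type cache struct {
		maxItems int
		lru      *list.List // front = most recently used
		items    map[string]*list.Element
	}

	func newCache(maxItems int) *cache {
		return &cache{maxItems: maxItems, lru: list.New(), items: map[string]*list.Element{}}
	}

	func (c *cache) set(key string, value interface{}) {
		if el, ok := c.items[key]; ok {
			el.Value.(*entry).value = value
			c.lru.MoveToFront(el)
			return
		}
		c.items[key] = c.lru.PushFront(&entry{key, value})
		for c.lru.Len() > c.maxItems { // no Size() guessing: just count
			oldest := c.lru.Back()
			c.lru.Remove(oldest)
			delete(c.items, oldest.Value.(*entry).key)
		}
	}

	func main() {
		c := newCache(2)
		c.set("a", 1)
		c.set("b", 2)
		c.set("c", 3)             // pushes "a" out
		fmt.Println(len(c.items)) // 2
	}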
Karl Seguin
7e109b11cc removed print line to fix #1 2014-03-23 07:53:29 +08:00
Karl Seguin
4456358470 config documentation 2014-03-01 00:44:39 +08:00
Karl Seguin
c1e1fb5933 fixed tests 2014-02-28 23:50:42 +08:00
Karl Seguin
890bb18dbf The cache can now do reference counting so that the LRU algorithm is aware of
long-lived objects and won't clean them up. Oftentimes, the value returned
from a cache hit is short-lived. As a silly example:

	func GetUser(response http.ResponseWriter) {
		user := cache.Get("user:1")
		response.Write(serialize(user))
	}

It's fine if the cache's GC cleans up "user:1" while the user variable has a
reference to the object: the cache's reference is removed, and the real GC will
clean it up at some point after the user variable falls out of scope.

However, what if user is long-lived? Possibly stored as a reference inside
another cached object? Normally (without this commit), the next time you call
cache.Get("user:1") you'll get a miss and will need to refetch the object, even
though the original user object is still somewhere in memory - you just lost
your reference to it from the cache.

By enabling the Track() configuration flag and calling TrackingGet() (instead
of Get), the cache will track that the object is in use and won't GC it, even
under great memory pressure (what's the point? something else is holding on
to it anyways). Calling item.Release() will decrement the number of references.
When the count is 0, the item can be pruned from the cache.

The returned value is a TrackedItem which exposes:

- Value() interface{} (to get the actual cached value)
- Release() to release the item back to the cache
2014-02-28 20:10:42 +08:00
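Putting the pieces the message names together, as a hedged usage sketch (details may differ by version):

	package main

	import (
		"fmt"
		"time"

		"github.com/karlseguin/ccache"
	)

	func main() {
		cache := ccache.New(ccache.Configure().Track())
		cache.Set("user:1", "goku", time.Minute)

		item := cache.TrackingGet("user:1") // increments the reference count
		fmt.Println(item.Value())           // safe from pruning while held
		item.Release()                      // count hits 0: prunable again
	}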
Karl Seguin
af884bb25f Added license 2013-11-17 20:47:28 +08:00
Karl Seguin
fd8650d72b fixed typo 2013-11-13 16:17:18 +08:00
Karl Seguin
92538ee30d refactored shouldPromote 2013-11-13 15:36:01 +08:00
Karl Seguin
8a0ef8ae17 fixed file permissions 2013-11-13 13:50:37 +08:00
Karl Seguin
6720535fab Checking whether an item should be promoted because it's new is the
uncommon case, which we can optimize out by being more explicit when we
create a new item.
2013-11-13 13:46:41 +08:00
Karl Seguin
3fa767d9ff added non-threadsafe Clear (for tests), fixed Fetch 2013-10-31 11:45:22 +08:00
Karl Seguin
ba89971ba8 added Fetch method to get + set on miss 2013-10-30 20:24:43 +08:00
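The get + set-on-miss dance that Fetch wraps, shown with a generic stand-in (the real Fetch signature at this commit may differ):

	package main

	import "fmt"

	type cache map[string]interface{}

	func (c cache) fetch(key string, loader func() interface{}) interface{} {
		if v, ok := c[key]; ok {
			return v // hit: the loader never runs
		}
		v := loader() // miss: load it, then cache it
		c[key] = v
		return v
	}

	func main() {
		c := cache{}
		fmt.Println(c.fetch("user:1", func() interface{} { return "loaded" }))
		fmt.Println(c.fetch("user:1", func() interface{} { return "not called" }))
	}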
Karl Seguin
751266c34a Remove the Value interface; the cache now works against interface{}
with the expiry specified on Set.

Get no longer returns expired items

Items can now be deleted
2013-10-30 20:18:51 +08:00
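The post-change contract in miniature (an illustrative stand-in, not the package itself): expiry is supplied on Set, Get refuses expired values, and items can be deleted.

	package main

	import (
		"fmt"
		"time"
	)

	type item struct {
		value   interface{}
		expires time.Time
	}

	type cache map[string]item

	func (c cache) Set(key string, value interface{}, ttl time.Duration) {
		c[key] = item{value, time.Now().Add(ttl)}
	}

	func (c cache) Get(key string) interface{} {
		it, ok := c[key]
		if !ok || time.Now().After(it.expires) {
			return nil // expired items are no longer returned
		}
		return it.value
	}

	func (c cache) Delete(key string) { delete(c, key) }

	func main() {
		c := cache{}
		c.Set("power", 9001, 50*time.Millisecond)
		fmt.Println(c.Get("power")) // 9001
		time.Sleep(100 * time.Millisecond)
		fmt.Println(c.Get("power")) // <nil>
	}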
Karl Seguin
36e5fae491 use integer counter for promotions instead of time 2013-10-21 19:37:35 +08:00
Karl Seguin
7a8102e166 clean up TestValue 2013-10-19 20:45:30 +08:00
Karl Seguin
b3cbd19186 initial tests 2013-10-19 20:36:33 +08:00
Karl Seguin
97bc65dc6a first commit 2013-10-19 08:56:28 +08:00