LinkedHashMap's hidden (?) features

Recently I discovered two very nice features of the java.util.LinkedHashMap: accessOrder and removeEldestEntry(Entry). These features combined let you implement simple LRU caches in under two minutes.

accessOrder

The accessOrder flag is set when creating the LinkedHashMap instance using the LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder) constructor. This boolean flag specifies how the entries in the map are ordered:


accessOrder=true

The entries are ordered according to their access: when iterating over the map, the least recently accessed entry is returned first and the most recently accessed entry is returned last. Only the get, put, and putAll methods count as accesses and influence this ordering (see the short example below).

accessOrder=false

The entries are ordered according to their insertion order. This is the default if any of the other LinkedHashMap constructors is used. With this ordering, read access to the map has no influence on the element order.
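A small sketch (class and variable names are illustrative) showing the effect of accessOrder=true: reading "a" moves that entry to the end of the iteration order.

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // access-ordered map: default capacity and load factor, accessOrder=true
        Map<String, Integer> map = new LinkedHashMap<String, Integer>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        map.get("a"); // "a" is now the most recently accessed entry

        // Prints: b c a  (least recently accessed first, most recently accessed last)
        for (String key : map.keySet()) {
            System.out.print(key + " ");
        }
    }
}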



removeEldestEntry(Entry)

The second feature of interest is the removeEldestEntry(Entry) method. This method is called with the eldest entry whenever an element is added to the map. Eldest means the entry that is returned first when iterating over the map, so the notion of eldest is influenced by the accessOrder set on the map: with access order it is the least recently accessed entry, with insertion order the least recently inserted one. The default implementation of removeEldestEntry simply returns false to indicate that nothing should happen. A subclass of LinkedHashMap may override the default implementation to do whatever is required:


  • If the implementation decides to remove the eldest entry for some reason, say a size limit, it simply returns true and the eldest entry is removed from the map.

  • The implementation may also decide to modify the map itself in some way. But in this case, the implementation must return false, otherwise the eldest entry will still be removed (see the sketch after this list).
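To illustrate the second case, here is a hypothetical sketch: the subclass removes the eldest entry itself so it can notify a listener (the EvictionListener interface is made up for this example) and then returns false so the map does not attempt a further removal.

import java.util.LinkedHashMap;
import java.util.Map;

public class EvictionNotifyingMap<K, V> extends LinkedHashMap<K, V> {

    /** Hypothetical callback interface, not part of the JDK. */
    public interface EvictionListener<K, V> {
        void evicted(K key, V value);
    }

    private final int limit;
    private final EvictionListener<K, V> listener;

    public EvictionNotifyingMap(int limit, EvictionListener<K, V> listener) {
        super(16, 0.75f, true);
        this.limit = limit;
        this.listener = listener;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > limit) {
            // modify the map ourselves and tell the listener about the eviction ...
            remove(eldest.getKey());
            listener.evicted(eldest.getKey(), eldest.getValue());
        }
        // ... and return false so the map does not attempt a second removal
        return false;
    }
}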




A simple LRU Cache

Taking the two features together, a very simple LRU Cache may be implemented in just a few lines of code:


import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int limit;

    public LRUCache(int limit) {
        // default initial capacity and load factor, access order enabled
        super(16, 0.75f, true);
        this.limit = limit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the eldest (least recently accessed) entry once the limit is exceeded
        return size() > limit;
    }
}


The mechanism is very simple: the LRUCache(int) constructor initializes the map with the default initial capacity and load factor and puts the map into access order mode. The removeEldestEntry method just checks the current map size (after the addition of a new entry) against the limit and returns true if the limit has been exceeded.
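A quick usage sketch of the LRUCache defined above, using a limit of two entries (class and variable names are illustrative):

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<String, String>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");       // "a" becomes the most recently used entry
        cache.put("c", "3");  // size exceeds the limit -> the eldest entry "b" is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}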

A real-world implementation would of course have to validate the limit value in the constructor.
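For example, the LRUCache constructor from above might reject non-positive limits, a minimal sketch:

    public LRUCache(int limit) {
        super(16, 0.75f, true);
        if (limit <= 0) {
            throw new IllegalArgumentException("limit must be positive: " + limit);
        }
        this.limit = limit;
    }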


To see a LinkedHashMap based LRU cache in action, have a look at the BundleResourceCache.BundleResourceMap. It implements a simple entry cache to speed up access to OSGi Bundle entries. To avoid wasting memory, the size of the cache is limited.

Comments

Dieter said…
Felix:

Just saw your post and wanted to point out that I wrote a small extension to the functionality in the Java LinkedHashMap you might find interesting:
LRUCacheMap

Regards,
Dieter
fmeschbe said…
Hi Dieter, thanks for the link. The extension via a callback mechanism is really interesting. Regards, Felix
Anonymous said…
Hello,
thanks for posting this. I would add that you should always use the @Override annotation when overriding (Java > 1.4):
@Override
protected boolean removeEldestEntry(final Map.Entry eldest) {
return size() > limit;
}
fmeschbe said…
Yeah, you are right. Thanks. I updated the post.
Stefan Lotties said…
Unfortunately a LinkedHashMap is not thread-safe and therefore cannot be used without external synchronization in a real-life multi-threaded environment. I'm sure you knew that. Yet I have to admit that I didn't know of these interesting features of a LinkedHashMap.
fmeschbe said…
@Stefan Lotties: Yes. What I would really like to have is a ConcurrentLinkedHashMap ... But for now we have to implement synchronization in the application -- best by actually hiding the existence of the LinkedHashMap inside the cache
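A minimal sketch of that approach, hiding the LinkedHashMap behind a small synchronized cache class (the class and method names are illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLRUCache<K, V> {

    // the LinkedHashMap is a private detail; every access to it is synchronized
    private final Map<K, V> map;

    public SynchronizedLRUCache(final int limit) {
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > limit;
            }
        };
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized V put(K key, V value) {
        return map.put(key, value);
    }

    public synchronized V remove(K key) {
        return map.remove(key);
    }

    public synchronized int size() {
        return map.size();
    }
}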
