Why is initial capacity set to (MAX_ENTRIES + 1) in LRUCache?
While searching for LRU cache implementations in Java, I came across two separate posts with similar implementations, and both initialize the LinkedHashMap with an initial capacity of MAX_ENTRIES + 1 [e.g. new LinkedHashMap(MAX_ENTRIES+1, .75F, true)].
What is/are the reason(s) for setting the initial capacity to MAX_ENTRIES + 1?
Referenced posts:
How would you implement an LRU cache in Java?
Easy, simple to use LRU cache in java
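For context, this is the pattern those posts describe: an access-ordered LinkedHashMap whose removeEldestEntry override evicts the oldest entry once the limit is exceeded. The sketch below is illustrative only; MAX_ENTRIES = 100 and the class name LRUCache are assumptions, not values taken from the posts.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative sketch of the pattern under discussion. MAX_ENTRIES = 100 is an
    // assumed value; the constructor arguments mirror the ones quoted in the question.
    public class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private static final int MAX_ENTRIES = 100;

        public LRUCache() {
            // initial capacity = MAX_ENTRIES + 1, load factor = 0.75,
            // accessOrder = true so iteration order becomes least-recently-used first.
            super(MAX_ENTRIES + 1, 0.75F, true);
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the least-recently-used entry once the size limit is exceeded.
            return size() > MAX_ENTRIES;
        }
    }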
Frankly, because they don't know what they're doing.
The documentation for LinkedHashMap specifies that the details of capacity and load factor are precisely the same as for HashMap, which specifies:
An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
The "capacity" of the map is the number of buckets in the hash table, not the number of entries the map is allowed to hold.
So the hash table of the LinkedHashMap you describe will be resized after (MAX_ENTRIES + 1) * 0.75 entries have been added, which is roughly three quarters of MAX_ENTRIES.
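To make the mismatch concrete, here is the arithmetic with an assumed MAX_ENTRIES of 100 (the real HashMap also rounds the requested capacity up to the next power of two, so the exact threshold shifts a little, but the conclusion is the same):

    public class ThresholdDemo {
        public static void main(String[] args) {
            int maxEntries = 100;                      // assumed value, for illustration
            int initialCapacity = maxEntries + 1;      // 101 buckets requested
            float loadFactor = 0.75F;
            // Resize is triggered once the entry count exceeds capacity * loadFactor.
            int resizeThreshold = (int) (initialCapacity * loadFactor);
            System.out.println("Table rebuilt after ~" + resizeThreshold + " entries"); // ~75
        }
    }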
I suspect they were trying to make sure the map had enough room for one extra entry, so that it would not resize between inserting a new entry and evicting the eldest one, but that is not actually how it works.
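If the intent really was to keep the table from ever resizing before removeEldestEntry evicts, the usual approach (a sketch under that assumption, not something the referenced posts do) is to derive the initial capacity from the maximum entry count and the load factor:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch only: size the table so that MAX_ENTRIES + 1 entries (the extra entry
    // exists briefly between the insert and the eviction) stay within the resize
    // threshold of capacity * loadFactor.
    public class SizedLRUCache<K, V> extends LinkedHashMap<K, V> {
        private static final int MAX_ENTRIES = 100;    // assumed value, for illustration
        private static final float LOAD_FACTOR = 0.75F;

        public SizedLRUCache() {
            super((int) Math.ceil((MAX_ENTRIES + 1) / LOAD_FACTOR), LOAD_FACTOR, true);
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    }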