How to remove elements from a map if it reaches a memory size limit?
I have implemented an LRU cache using ConcurrentLinkedHashMap, and on that same map I purge events once it reaches a certain limit, as shown below. I have a MAX_SIZE variable that is supposed to correspond to 3.7 GB, and once my map reaches that limit I purge events from it.
Here is my code:
import java.util.Iterator;
import java.util.concurrent.ConcurrentMap;

import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.EvictionListener;

// does this really equal 3.7 GB? can anyone explain this?
public static final int MAX_SIZE = 20000000; // equates to ~3.7 GB, assuming each event is 200 bytes on average

// listener invoked by the map whenever it evicts an entry on its own
public static EvictionListener<String, DataObject> listener = new EvictionListener<String, DataObject>() {
    @Override
    public void onEviction(String key, DataObject value) {
        deleteEvents();
    }
};

public static final ConcurrentMap<String, DataObject> holder = new ConcurrentLinkedHashMap.Builder<String, DataObject>()
        .maximumWeightedCapacity(MAX_SIZE).listener(listener).build();

private static void deleteEvents() {
    // intended to be 80% of MAX_SIZE, but (20 / 100) is integer division and
    // evaluates to 0, so capacity ends up equal to MAX_SIZE
    int capacity = MAX_SIZE - (MAX_SIZE * (20 / 100));
    if (holder.size() >= capacity) {
        int numEventsToEvict = (MAX_SIZE * 20) / 100;
        int counter = 0;
        Iterator<String> iter = holder.keySet().iterator();
        while (iter.hasNext() && counter < numEventsToEvict) {
            String address = iter.next();
            holder.remove(address);
            System.out.println("Purging Elements: " + address);
            counter++;
        }
    }
}

// this method is called every 30 seconds from a single background thread
// to send data to our queue
public void submit() {
    if (holder.isEmpty()) {
        return;
    }
    // some other code here
    int sizeOfMsg = 0;
    Iterator<String> iter = holder.keySet().iterator();
    int allowedBytes = MAX_ALLOWED_SIZE - ALLOWED_BUFFER;
    while (iter.hasNext() && sizeOfMsg < allowedBytes) {
        String key = iter.next();
        DataObject temp = holder.get(key);
        // some code here
        holder.remove(key);
        // some code here to send data to queue
    }
}

// events are added to the holder map through the method below,
// which is called from elsewhere in the application
public void addToHolderRequest(String key, DataObject stream) {
    holder.put(key, stream);
}
Here is the Maven dependency I am using for this:
<dependency>
    <groupId>com.googlecode.concurrentlinkedhashmap</groupId>
    <artifactId>concurrentlinkedhashmap-lru</artifactId>
    <version>1.4</version>
</dependency>
I am not sure whether this is the right way to do it. Does this MAX_SIZE really correspond to 3.7 GB if events average 200 bytes? Is there a better way to do this? I also have a background thread that calls the deleteEvents() method every 30 seconds, and the same background thread also calls the submit method to extract data from the holder map and send it to the queue.
So the idea is: events are added to the holder map in the addToHolderRequest method; every 30 seconds the background thread calls submit, which sends data to our queue by iterating over the map; and once submit finishes, the same background thread calls deleteEvents(), which purges elements. I am running this code in production, and it looks like events are not being purged correctly: the size of my holder map keeps increasing. I have min/max heap memory set to 6 GB.
- On the arithmetic first: 20,000,000 entries × 200 bytes is about 4 GB (4 × 10⁹ bytes), which is roughly 3.7 GiB, but that counts only the raw payload; per-entry overhead (object headers, map entry objects, String keys) makes the actual heap footprint noticeably larger, which is one reason this kind of estimate is fragile. Instead of estimating the size of objects in the JVM and referencing them with strong references, you can use soft references, which are "most often used to implement memory-sensitive caches" (SoftReference), e.g. CacheBuilder.softValues() from google/guava: Google Core Libraries for Java 6+: "Softly-referenced objects will be garbage-collected in a globally least-recently-used manner, in response to memory demand." However, I'd recommend first familiarizing yourself with CachesExplained · google/guava Wiki (specifically the Reference-based Eviction section); see the soft-values sketch after this list.
- As a variation on using soft references, you could also try the "victim caching approach" described here, which uses a "normal cache that evicts to [a] soft cache, and recovers entries on a miss if possible"; see the victim-cache sketch after this list.
- If you are determined to actually estimate the size of your objects, look at Ehcache and its Sizing Storage Tiers: it has Built-In Sizing Computation and Enforcement for memory-limited caches; see the Ehcache sketch after this list.
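Here is a minimal sketch of the soft-values approach, assuming Guava is on the classpath and reusing the question's DataObject type (the SoftValueHolder class name is just for illustration):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class SoftValueHolder {

    // Values are held via SoftReferences: under memory pressure the GC reclaims
    // them in a globally least-recently-used manner, so no byte budget or entry
    // count has to be guessed up front.
    private final Cache<String, DataObject> holder = CacheBuilder.newBuilder()
            .softValues()
            .build();

    public void addToHolderRequest(String key, DataObject stream) {
        holder.put(key, stream);
    }

    public DataObject get(String key) {
        // returns null if the entry was never added or its value was collected
        return holder.getIfPresent(key);
    }
}

If the submit() loop needs to iterate over entries, holder.asMap() exposes the cache as a ConcurrentMap view.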
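And a rough sketch of the victim-caching idea under the same assumptions: a size-bounded primary cache whose removal listener spills evicted entries into a soft-valued victim cache, with misses recovered from the victim when possible. The class and field names and the 1,000,000-entry bound are hypothetical; the linked write-up is the authoritative description.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalCause;
import com.google.common.cache.RemovalListener;

public class VictimCache {

    // soft-valued overflow cache holding entries evicted from the primary;
    // the GC reclaims these values under memory pressure
    private final Cache<String, DataObject> victim = CacheBuilder.newBuilder()
            .softValues()
            .build();

    // spill only genuine size-based evictions, not explicit invalidations
    private final RemovalListener<String, DataObject> spillToVictim = notification -> {
        if (notification.getCause() == RemovalCause.SIZE) {
            victim.put(notification.getKey(), notification.getValue());
        }
    };

    // size-bounded primary cache (entry count, not bytes; tune for your data)
    private final Cache<String, DataObject> primary = CacheBuilder.newBuilder()
            .maximumSize(1_000_000)
            .removalListener(spillToVictim)
            .build();

    public void put(String key, DataObject value) {
        primary.put(key, value);
    }

    public DataObject get(String key) {
        DataObject value = primary.getIfPresent(key);
        if (value == null) {
            // on a primary miss, try to recover the entry from the victim cache
            value = victim.getIfPresent(key);
            if (value != null) {
                primary.put(key, value); // promote back into the primary
                victim.invalidate(key);
            }
        }
        return value;
    }
}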
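Finally, a sketch of a byte-bounded on-heap cache, assuming the Ehcache 3 API (org.ehcache) and again reusing the question's DataObject type; the "events" alias and the 4 GB budget are placeholders:

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

public class EhcacheHolder {

    public static void main(String[] args) {
        // the heap tier is sized in bytes and enforced by Ehcache's built-in
        // sizing computation, so no entry count has to be derived from an
        // average event size
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .withCache("events",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(
                                String.class, DataObject.class,
                                ResourcePoolsBuilder.newResourcePoolsBuilder()
                                        .heap(4, MemoryUnit.GB)))
                .build(true); // true = initialize the manager immediately

        Cache<String, DataObject> holder =
                cacheManager.getCache("events", String.class, DataObject.class);

        holder.put("some-key", new DataObject()); // assumes a no-arg constructor
        DataObject value = holder.get("some-key");

        cacheManager.close();
    }
}

Note that byte-based sizing makes writes more expensive, since Ehcache has to walk each object graph to measure it; the Sizing Storage Tiers documentation covers the trade-offs.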