Grizzly pipe leak - what am I doing wrong?

I wrote the following test code:

@Test
public void testLeakWithGrizzly() throws Throwable {
    ExecutorService executor = Executors.newFixedThreadPool(N_THREADS);
    Set<Future<Void>> futures = new HashSet<>();
    InetSocketAddress inetSocketAddress = new InetSocketAddress(localhostAddress, 111);
    for (int i = 0; i < N_THREADS; i++) {
        Future<Void> future = executor.submit(new GrizzlyConnectTask(inetSocketAddress, requests, bindFailures, successfulOpens, failedOpens, successfulCloses, failedCloses));
        futures.add(future);
    }
    for (Future<Void> future : futures) {
        future.get(); //block
    }
    Thread.sleep(1000); //let everything calm down
    reporter.report();
    if (causeOfDeath != null) {
        throw causeOfDeath; //fail the test with the first non-bind error
    }
}
private static class GrizzlyConnectTask implements Callable<Void> {
    private final InetSocketAddress address;
    private final Meter requests;
    private final Meter bindFailures;
    private final Counter successfulOpens;
    private final Counter failedOpens;
    private final Counter successfulCloses;
    private final Counter failedCloses;

    public GrizzlyConnectTask(InetSocketAddress address, Meter requests, Meter bindFailures, Counter successfulOpens, Counter failedOpens, Counter successfulCloses, Counter failedCloses) {
        this.address = address;
        this.requests = requests;
        this.bindFailures = bindFailures;
        this.successfulOpens = successfulOpens;
        this.failedOpens = failedOpens;
        this.successfulCloses = successfulCloses;
        this.failedCloses = failedCloses;
    }

    @Override
    public Void call() throws Exception {
        while (!die) {
            TCPNIOTransport transport = null;
            boolean opened = false;
            try {
                transport = TCPNIOTransportBuilder.newInstance().build();
                transport.start();
                transport.connect(address).get(); //block
                opened = true;
                successfulOpens.inc(); //successful open
                requests.mark();
            } catch (Throwable t) {
                //noinspection ThrowableResultOfMethodCallIgnored
                Throwable root = getRootCause(t);
                if (root instanceof BindException) {
                    bindFailures.mark(); //ephemeral port exhaustion.
                    continue;
                }
                causeOfDeath = t;
                die = true;
            } finally {
                if (!opened) {
                    failedOpens.inc();
                }
                if (transport != null) {
                    try {
                        transport.shutdown().get(); //block
                        successfulCloses.inc(); //successful close
                    } catch (Throwable t) {
                        failedCloses.inc();
                        System.err.println("while trying to close transport");
                        t.printStackTrace();
                    }
                } else {
                    //no transport == successful close
                    successfulCloses.inc();
                }
            }
        }
        return null;
    }
}

On my linux laptop this crashes within ~5 minutes with the following exception:

java.io.IOException: Too many open files
    at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
    at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:68)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at org.glassfish.grizzly.nio.Selectors.newSelector(Selectors.java:62)
    at org.glassfish.grizzly.nio.SelectorRunner.create(SelectorRunner.java:109)
    at org.glassfish.grizzly.nio.NIOTransport.startSelectorRunners(NIOTransport.java:256)
    at org.glassfish.grizzly.nio.NIOTransport.start(NIOTransport.java:475)
    at net.radai.LeakTest$GrizzlyConnectTask.call(LeakTest.java:137)
    at net.radai.LeakTest$GrizzlyConnectTask.call(LeakTest.java:111)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The success/fail counters look like this:

-- Counters --------------------------------------------------------------------
failedCloses
             count = 0
failedOpens
             count = 40999
successfulCloses
             count = 177177
successfulOpens
             count = 136178

-- Meters ----------------------------------------------------------------------
bindFailures
             count = 40998
         mean rate = 153.10 events/second
     1-minute rate = 144.61 events/second
     5-minute rate = 91.12 events/second
    15-minute rate = 39.56 events/second
requests
             count = 136178
         mean rate = 508.54 events/second
     1-minute rate = 547.38 events/second
     5-minute rate = 442.76 events/second
    15-minute rate = 391.53 events/second

Which tells me that every transport that was opened (successfully or not) was also closed (136178 + 40999 = 177177, with 0 failed closes), and yet file descriptors are still being leaked.

The complete code is on github, here - https://github.com/radai-rosenblatt/oncrpc4j-playground/blob/master/src/test/java/net/radai/LeakTest.java

So, am I misusing the grizzly API, or is this a real leak? (note - I'm using grizzly 2.3.12, which I know is not the latest. upgrading would require convincing people, which is why I'd like to be sure this isn't a user error on my side first)

EDIT - this thing leaks even when nothing is thrown. switching back to a single thread and putting a 2ms sleep in it still leaks ~800 pipes over 50 minutes.
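One way to watch a leak like this without waiting for the "Too many open files" crash is to poll `/proc/self/fd` from inside the JVM. A minimal sketch (Linux-only; `FdCount` and `openFdCount` are hypothetical names, not part of the test above):

```java
import java.io.File;

public class FdCount {
    //count the open file descriptors of the current process via /proc (Linux only)
    static int openFdCount() {
        File[] entries = new File("/proc/self/fd").listFiles();
        return entries == null ? -1 : entries.length; //-1 when /proc is unavailable
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFdCount());
    }
}
```

Sampling this before and after each batch of connects would make a figure like "~800 pipes over 50 minutes" easy to chart.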

I found the problem deep inside grizzly. It is an internal multithreading problem (a race condition). The file descriptors are leaked by the sun.nio.ch.EPollSelectorImpl class. Every instance holds 3 file descriptors (2 for its pipe and 1 for the epoll_create system call). Grizzly performs the close/shutdown in the class SelectorRunner:
    public synchronized void stop() {
        stateHolder.set(State.STOPPING);
        wakeupSelector();

        // we prefer Selector thread shutdown selector
        // but if it's not running - do that ourselves.
        if (runnerThreadActivityCounter.compareAndSet(0, -1)) {
            // The thread is not running
            shutdownSelector();
        }
    }
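The three-descriptors-per-selector figure can be checked with plain NIO, independent of grizzly. On Linux's epoll-based provider, each `Selector.open()` should add roughly three entries to `/proc/self/fd` (a sketch; `openFds` is a hypothetical helper):

```java
import java.io.File;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class SelectorFdDemo {
    //count open file descriptors via /proc (Linux only; returns -1 elsewhere)
    static int openFds() {
        File[] entries = new File("/proc/self/fd").listFiles();
        return entries == null ? -1 : entries.length;
    }

    public static void main(String[] args) throws Exception {
        int before = openFds();
        List<Selector> selectors = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            //on the epoll provider: 2 fds for the wakeup pipe + 1 from epoll_create
            selectors.add(Selector.open());
        }
        System.out.println("fds per selector ~ " + (openFds() - before) / 10.0);
        for (Selector s : selectors) {
            s.close(); //releases all of the selector's descriptors
        }
    }
}
```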

Usually everything is fine, but sometimes the selector never wakes up. The wakeup method sends an interrupt via the native method sun.nio.ch.EPollArrayWrapper#interrupt(int), which has a simple implementation:

JNIEXPORT void JNICALL
Java_sun_nio_ch_EPollArrayWrapper_interrupt(JNIEnv *env, jobject this, int fd)
{
    int fakebuf[1];
    fakebuf[0] = 1;
    if (write(fd, fakebuf, 1) < 0) {
        JNU_ThrowIOExceptionWithLastError(env,"write to interrupt fd failed");
    }
}

So it just writes a single byte to wake up the waiting selector. But you close the transport immediately after creating it. That situation is rare in real life, but happens all the time in your test case. Sometimes grizzly invokes NIOConnection.enableIOEvent after the close and the wakeup/interrupt. I believe that in this case the selector never wakes up and so never releases its file descriptors.
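That one-byte self-pipe write is exactly what `Selector.wakeup()` does in plain NIO, and its normal behavior is easy to observe: a `wakeup()` issued before `select()` makes the next `select()` return immediately instead of blocking. A minimal sketch:

```java
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        //wakeup() writes one byte into the selector's internal pipe,
        //so the next select() returns immediately instead of blocking
        selector.wakeup();
        long start = System.nanoTime();
        int readyKeys = selector.select(); //would block forever without the wakeup
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("select returned " + readyKeys + " after " + elapsedMs + " ms");
        selector.close();
    }
}
```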

For now I can only suggest a hotfix for this situation: use a timer task to call selector.close directly after a timeout:

//hotfix code below
private static final Timer timer = new Timer();
//hotfix code above
protected synchronized void stopSelectorRunners() {
    if (selectorRunners == null) {
        return;
    }

    for (int i = 0; i < selectorRunners.length; i++) {
        SelectorRunner runner = selectorRunners[i];
        if (runner != null) {
            runner.stop();
            //hotfix code below
            final Selector selector = runner.getSelector();
            if(selector !=null) {
                timer.schedule(new TimerTask() {
                    @Override
                    public void run() {
                        try {
                            selector.close();
                        } catch (IOException e) {
                            //ignore - close is best-effort here
                        }
                    }
                }, 100);
            }
            //hotfix code above
            selectorRunners[i] = null;
        }
    }

    selectorRunners = null;
}

After adding this to org.glassfish.grizzly.nio.NIOTransport#stopSelectorRunners I was able to stop the leak.
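As a design note, java.util.Timer runs everything on one shared thread and dies permanently if a task throws; the same deferred close can be sketched with a daemon-threaded ScheduledExecutorService instead (the names below are hypothetical, not grizzly API):

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredSelectorClose {
    //daemon thread so the hotfix scheduler can never keep the JVM alive
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "selector-reaper");
                t.setDaemon(true);
                return t;
            });

    //schedule a best-effort close of the selector after the given delay
    static void closeLater(final Selector selector, long delayMillis) {
        SCHEDULER.schedule(() -> {
            try {
                selector.close(); //no-op if the selector is already closed
            } catch (IOException ignored) {
                //best effort - nothing sensible to do here
            }
        }, delayMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        closeLater(selector, 100);
        Thread.sleep(300);
        System.out.println("selector open? " + selector.isOpen());
    }
}
```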

We found the actual underlying issue within Grizzly and have fixed it.

The root of the problem was that, given the test case, Transport.stop() was being invoked at a point early enough in the execution of SelectorRunner.run() that it caused the run method to terminate early (due to the StateHolder being in the stopping state at that point in time).

Additionally, because SelectorRunner.run() CASes the selector activity state at the beginning of the run() method, the thread invoking Transport.stop() sees the selector as active. Because of these two conditions, SelectorRunner.shutdownSelector() is never invoked, and so we leak the selector.
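The two-sided race can be modeled in a few lines of plain Java (hypothetical names; this mirrors the description above, not the actual SelectorRunner source). Each side concludes the *other* will shut the selector down, so nobody does:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StopRaceSketch {
    final AtomicInteger activity = new AtomicInteger(0); //0 idle, 1 running, -1 stopped
    volatile boolean stopping = false;
    boolean selectorShutDown = false;

    //first thing run() does: claim the "active" state via CAS
    boolean runnerStart() {
        return activity.compareAndSet(0, 1);
    }

    //next thing run() does: notice a pending stop and exit early
    void runnerLoop() {
        if (stopping) {
            activity.set(0);
            return; //early exit: shutdownSelector() is NOT called
        }
        // ... the normal select loop would run here ...
    }

    void stop() {
        stopping = true;
        if (activity.compareAndSet(0, -1)) {
            selectorShutDown = true; //"the thread is not running - do it ourselves"
        }
        //otherwise: trust the (already-exiting) runner thread to do it
    }

    public static void main(String[] args) {
        StopRaceSketch s = new StopRaceSketch();
        s.runnerStart(); //runner thread claims activity = 1
        s.stop();        //stop() sees activity == 1, leaves shutdown to the runner
        s.runnerLoop();  //runner sees stopping, exits without shutting down
        System.out.println("selector shut down? " + s.selectorShutDown); //false -> leak
    }
}
```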

The fix will be available in tonight's nightly build.