ArangoDB timeout due to too many write ahead logs

I am trying to restart my ArangoDB instance, but it keeps timing out, and I believe this is because of the replay of the WAL logfiles.

Initially there were 2725 files; now there are 2701. I let Arango replay all of the files (as shown below), but I still hit the timeout.

2018-11-09T10:30:11Z [2285] INFO replaying WAL logfile '/var/lib/arangodb3/journals/logfile-2668165691.db' (2700 of 2701)
2018-11-09T10:30:11Z [2285] INFO replaying WAL logfile '/var/lib/arangodb3/journals/logfile-2668552250.db' (2701 of 2701)
2018-11-09T10:30:11Z [2285] INFO WAL recovery finished successfully

When I restart, the service hangs here:

2018-11-09T10:41:34Z [2233] INFO using storage engine mmfiles
2018-11-09T10:41:34Z [2233] INFO {syscall} file-descriptors (nofiles) hard limit is 131072, soft limit is 131072
2018-11-09T10:41:34Z [2233] INFO Authentication is turned on (system only), authentication for unix sockets is turned on
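If the "timeout" is the service manager giving up while arangod is still busy replaying thousands of WAL segments, one non-destructive thing to try first is raising the unit's start timeout. This is an assumption on my part (the unit name `arangodb3` and the 600-second value below are guesses for a default Debian/Ubuntu install; adjust for your system):

```ini
# /etc/systemd/system/arangodb3.service.d/override.conf
# Give arangod more time to finish WAL replay before systemd kills it.
[Service]
TimeoutStartSec=600
```

After creating the override, run `sudo systemctl daemon-reload` and then `sudo systemctl start arangodb3` again; the replay can then run to completion without the unit being killed mid-recovery.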

Two questions:

I moved the logfiles to another directory, and after that I could restart ArangoDB. Beware: I believe this causes data loss.
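The workaround above can be sketched as a small shell function. This is a hedged sketch, not an official procedure: the function name `move_wal_aside` is mine, the default paths match the log output above but may differ on your install, and moving unreplayed WAL segments can drop committed writes, so move them to a backup directory rather than deleting them, and only do this while arangod is stopped.

```shell
#!/bin/sh
# Move ArangoDB MMFiles WAL segments out of the journals directory so
# the server can start without replaying them. DATA LOSS RISK: any
# writes not yet applied to the collections live in these files, so
# keep the backup directory until you are sure nothing is missing.
move_wal_aside() {
  journals=$1   # e.g. /var/lib/arangodb3/journals
  backup=$2     # e.g. /var/lib/arangodb3/journals.bak
  mkdir -p "$backup"
  for f in "$journals"/logfile-*.db; do
    [ -e "$f" ] || continue   # glob matched nothing; skip
    mv "$f" "$backup"/
  done
}

# Usage (run while the arangodb3 service is stopped):
# move_wal_aside /var/lib/arangodb3/journals /var/lib/arangodb3/journals.bak
```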