Is 2G the limit size of coredump file on Linux?
My OS is Arch Linux. When a core dump occurs, I try to debug it with gdb:
$ coredumpctl gdb 1621
......
Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4
Message: Process 1621 (runTests) of user 1014 dumped core.
Stack trace of thread 1621:
#0 0x00007ff1c0fcfa10 n/a (n/a)
GNU gdb (GDB) 7.12.1
......
Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done.
BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648.
I check the /var/tmp/coredump-28KzRc file:
$ ls -alth /var/tmp/coredump-28KzRc
-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc
Is 2G the size limit for core dump files on Linux? I ask because my /var/tmp seems to have plenty of free disk space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
dev 32G 0 32G 0% /dev
run 32G 3.1M 32G 1% /run
/dev/sda2 229G 86G 132G 40% /
tmpfs 32G 708M 31G 3% /dev/shm
tmpfs 32G 0 32G 0% /sys/fs/cgroup
tmpfs 32G 957M 31G 3% /tmp
/dev/sda1 511M 33M 479M 7% /boot
/dev/sda3 651G 478G 141G 78% /home
P.S. The output of "ulimit -a":
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257039
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 257039
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Update: the /etc/systemd/coredump.conf file:
$ cat coredump.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See coredump.conf(5) for details.
[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
Is 2G the limit size of coredump file on Linux?
No. I routinely deal with core dumps larger than 4GiB.
ulimit -a
core file size (blocks, -c) unlimited
This tells you your current limit in this shell; it tells you nothing about the environment in which runTests actually runs. The process could be setting its own limit via setrlimit(2), or its parent could be setting one for it.
You could modify runTests to print its current limit with getrlimit(2) and see what the limit really is while the process is running.
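As a minimal sketch of that suggestion (the helper name and calling it from main are my own illustration, not part of runTests), a C snippet using getrlimit(2) might look like this:

#include <stdio.h>
#include <sys/resource.h>

/* Illustrative helper: call this early in the process (e.g. from main)
   to print the core-file limit the process actually runs with. */
static void print_core_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit(RLIMIT_CORE)");
        return;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_CORE soft limit: unlimited\n");
    else
        printf("RLIMIT_CORE soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);

    if (rl.rlim_max == RLIM_INFINITY)
        printf("RLIMIT_CORE hard limit: unlimited\n");
    else
        printf("RLIMIT_CORE hard limit: %llu bytes\n",
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    print_core_limit();
    return 0;
}

If the printed soft limit is smaller than the expected core size, something between your shell and the process (setrlimit(2) in the process itself, its parent, or the service manager) is lowering it.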
P.S. Just because the core is truncated doesn't mean it is completely useless (although it often is). At the very least, you should try GDB's where command.
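For example, in the coredumpctl gdb 1621 session shown in the question, you could simply try (how much of a backtrace you get, if any, depends on how badly the core is truncated):
$ coredumpctl gdb 1621
......
(gdb) where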
@n.m. is correct.
(1) Modify the /etc/systemd/coredump.conf file:
[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
JournalSizeMax=8G
(2) Reload systemd's configuration:
# systemctl daemon-reload
Note that this only takes effect for newly generated core dump files.
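As a quick check after the next crash (the commands below are only an illustration; substitute the real PID shown by coredumpctl list), open the newest dump again and see whether the BFD "is truncated" warning is gone:
$ coredumpctl list
$ coredumpctl gdb <PID>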