How to sum file sizes from an ls-like output log with Bytes, KiB, MiB, GiB
I have a pre-computed ls-like output (it does not come from the actual ls command) that I cannot modify or recompute. It looks like this:
2016-10-14 14:52:09 0 Bytes folder/
2020-04-18 05:19:04 201 Bytes folder/file1.txt
2019-10-16 00:32:44 201 Bytes folder/file2.txt
2019-08-26 06:29:46 201 Bytes folder/file3.txt
2020-07-08 16:13:56 411 Bytes folder/file4.txt
2020-04-18 03:03:34 201 Bytes folder/file5.txt
2019-10-16 08:27:11 1.1 KiB folder/file6.txt
2019-10-16 10:13:52 201 Bytes folder/file7.txt
2019-10-16 08:44:35 920 Bytes folder/file8.txt
2019-02-17 14:43:10 590 Bytes folder/file9.txt
The log can contain at least GiB, MiB, KiB and Bytes. Possible values include zero, and values with or without a decimal part for each prefix:
0 Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB
A similar approach is the following:
awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'
but it does not work correctly in my case:
$ head -n 10 files_sizes.log | awk '{print $3,$4}' | sort | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'
0K Bytes
1.1K KiB
201K Bytes
201K Bytes
201K Bytes
201K Bytes
201K Bytes
411K Bytes
590K Bytes
920K Bytes
Total: 3.8M
This output sums the sizes incorrectly. The output I expect is the correct total for the input log file:
0 Bytes
201 Bytes
201 Bytes
201 Bytes
411 Bytes
201 Bytes
1.1 KiB
201 Bytes
920 Bytes
590 Bytes
Total: 3.95742 KiB
Note
The correct value for the Bytes subtotal is
201 * 5 + 411 + 590 + 920 = 2926, so adding the KiB entry the total is
2.857422 + 1.1 = 3.957422 KiB = 4052.4 Bytes
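That expected total can be reproduced with a minimal awk sketch over the ten sample values (this sketch assumes only Bytes and KiB appear, as in the sample above):

```shell
# Field 1 is the value, field 2 the unit: convert KiB entries to
# bytes, sum everything, and report the total in KiB.
printf '%s\n' '0 Bytes' '201 Bytes' '201 Bytes' '201 Bytes' '411 Bytes' \
              '201 Bytes' '1.1 KiB' '201 Bytes' '920 Bytes' '590 Bytes' |
awk '{ total += (($2 == "KiB") ? $1 * 1024 : $1) }
     END { printf "Total: %.5f KiB\n", total / 1024 }'
# prints: Total: 3.95742 KiB
```

2926 bytes plus 1.1 * 1024 = 1126.4 bytes gives 4052.4 bytes, i.e. 3.95742 KiB, matching the expected total.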
[Update]
I have added a comparison of the results of the solutions from KamilCuk, Ted Lyngmo and Walter A, which give almost the same values:
$ head -n 10 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
117538 Bytes
$ head -n 1000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
1225857 Bytes
$ head -n 10000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
12087518 Bytes
$ head -n 1000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
77238840381 Bytes
$ head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
2306569381835 Bytes
and
$ head -n 10 files_sizes.log | ./count_files.sh
3.957422 KiB
$ head -n 1000 files_sizes.log | ./count_files.sh
1.168946 MiB
$ head -n 10000 files_sizes.log | ./count_files.sh
11.526325 MiB
$ head -n 1000000 files_sizes.log | ./count_files.sh
71.934024 GiB
$ head -n 100000000 files_sizes.log | ./count_files.sh
2.097807 TiB
and
(head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //' | tr -d '\n' ; echo) | bc
2306563692898.8
where
2.097807 TiB = 2.3065631893 TB = 2306569381835 Bytes
Computationally, I also compared the speed of all three solutions:
$ time head -n 100000000 files_sizes.log | ./count_files.sh
2.097807 TiB
real 2m7.956s
user 2m10.023s
sys 0m1.696s
$ time head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
2306569381835 Bytes
real 4m12.896s
user 5m45.750s
sys 0m4.026s
$ time (head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //' | tr -d '\n' ; echo) | bc
2306563692898.8
real 4m31.249s
user 6m40.072s
sys 0m4.252s
Let me give you a better way than using ls: don't use it as a command, but use find's -ls switch:
find . -maxdepth 1 -ls
This returns the file sizes in a uniform unit, as described in find's man page, which makes them much easier to compute.
Good luck!
With the input as described:
2016-10-14 14:52:09 0 Bytes folder/
2020-04-18 05:19:04 201 Bytes folder/file1.txt
2019-10-16 00:32:44 201 Bytes folder/file2.txt
2019-08-26 06:29:46 201 Bytes folder/file3.txt
2020-07-08 16:13:56 411 Bytes folder/file4.txt
2020-04-18 03:03:34 201 Bytes folder/file5.txt
2019-10-16 08:27:11 1.1 KiB folder/file6.txt
2019-10-16 10:13:52 201 Bytes folder/file7.txt
2019-10-16 08:44:35 920 Bytes folder/file8.txt
2019-02-17 14:43:10 590 Bytes folder/file9.txt
You can use a table of the units you want to be able to decode:
BEGIN {
unit["Bytes"] = 1;
unit["kB"] = 10**3;
unit["MB"] = 10**6;
unit["GB"] = 10**9;
unit["TB"] = 10**12;
unit["PB"] = 10**15;
unit["EB"] = 10**18;
unit["ZB"] = 10**21;
unit["YB"] = 10**24;
unit["KB"] = 1024;
unit["KiB"] = 1024**1;
unit["MiB"] = 1024**2;
unit["GiB"] = 1024**3;
unit["TiB"] = 1024**4;
unit["PiB"] = 1024**5;
unit["EiB"] = 1024**6;
unit["ZiB"] = 1024**7;
unit["YiB"] = 1024**8;
}
Then sum them up in the main loop:
{
    if ($4 in unit) total += $3 * unit[$4];
    else printf("ERROR: Can't decode unit at line %d: %s\n", NR, $0);
}
And print out the result at the end:
END {
binaryunits[0] = "Bytes";
binaryunits[1] = "KiB";
binaryunits[2] = "MiB";
binaryunits[3] = "GiB";
binaryunits[4] = "TiB";
binaryunits[5] = "PiB";
binaryunits[6] = "EiB";
binaryunits[7] = "ZiB";
binaryunits[8] = "YiB";
for(i = 8;; --i) {
if(total >= 1024**i || i == 0) {
printf("%.3f %s\n", total/(1024**i), binaryunits[i]);
break;
}
}
}
Output:
3.957 KiB
Note that you can add a she-bang at the start of the awk script so that it can run by itself, without having to be wrapped in a bash script:
#!/usr/bin/awk -f
You can parse the input before sending it to bc:
echo "0 Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB" |
sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;
s/GiB/* 1024 * 1024 * 1024/; s/$/ + /' |
tr -d '\n' |
sed 's/+ $/\n/' |
bc
If your sed does not support \n, you can try replacing the '\n' with a real newline:
sed 's/+ $/
/'
Or add an echo after the parsing (and move that part of the last sed into the first command, to remove the trailing +):
(echo "0 Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB" | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;
s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //' | tr -d '\n' ; echo) | bc
Use numfmt to convert the numbers.
cat <<EOF |
2016-10-14 14:52:09 0 Bytes folder/
2020-04-18 05:19:04 201 Bytes folder/file1.txt
2019-10-16 00:32:44 201 Bytes folder/file2.txt
2019-08-26 06:29:46 201 Bytes folder/file3.txt
2020-07-08 16:13:56 411 Bytes folder/file4.txt
2020-04-18 03:03:34 201 Bytes folder/file5.txt
2019-10-16 08:27:11 1.1 KiB folder/file6.txt
2019-10-16 10:13:52 201 Bytes folder/file7.txt
2019-10-16 08:44:35 920 Bytes folder/file8.txt
2019-02-17 14:43:10 590 Bytes folder/file9.txt
2019-02-17 14:43:10 3.9 KiB folder/file9.txt
2019-02-17 14:43:10 2.7 MiB folder/file9.txt
2019-02-17 14:43:10 1.3 GiB folder/file9.txt
EOF
# extract 3rd and 4th column
tr -s ' ' | cut -d' ' -f3,4 |
# Remove space, remove "Bytes", remove "B"
sed 's/ //; s/Bytes//; s/B//' |
# convert to numbers
numfmt --from=auto |
# sum
awk '{s+=$1}END{print s}'
It outputs the sum.
@KamilCuk gave the good idea of using numfmt. Based on his answer, here is an alternative command which uses a single awk call wrapping numfmt with a two-way pipe. It requires a recent version of GNU awk (5.0.1 works, 4.1.4 does not; versions in between are untested).
LC_NUMERIC=C gawk '
BEGIN {
conv = "numfmt --from=auto"
PROCINFO[conv, "pty"] = 1
}
{
    sub(/B.*/, "", $4)
    print $3 $4 |& conv
    conv |& getline val
    sum += val
}
END { print sum }
' input
Notes:
LC_NUMERIC=C (bash/ksh/zsh) is there for portability on systems using a non-English locale.
PROCINFO[conv, "pty"] = 1 makes numfmt flush its output on every line (to avoid a deadlock).