Hard drive benchmarking with Java, getting unreasonably fast results

I wrote a piece of code to benchmark a hard drive. It is quite simple: write a big chunk of bytes (2500 * 10M) to disk using a BufferedOutputStream, then read it back using a BufferedInputStream. I write 2500 bytes at a time, 10M times, to emulate the conditions in another program I am writing. There is also a global variable, "meaningless", which is computed from the bytes that are read - it is completely meaningless and is only there to force the bytes to actually be read and used (i.e. to avoid the reads being skipped by some optimization).

The code runs 4 times and prints the results each time.

Here it is:

import java.io.*;

public class DriveTest
{
    public static long meaningless = 0;

    public static String path = "C:\\test";

    public static int chunkSize = 2500;

    public static int iterations = 10000000;

    public static void main(String[] args)
    {
        try
        {
            for(int i = 0; i < 4; i++)
            {
                System.out.println("Test " + (i + 1) + ":");
                System.out.println("==================================");

                write();
                read();

                new File(path).delete();

                System.out.println("==================================");
            }
        }
        catch(Exception e)
        {
            e.printStackTrace();
        }
    }

    private static void write() throws Exception
    {
        BufferedOutputStream bos = new BufferedOutputStream(
                                   new FileOutputStream(new File(path)));

        long t1 = System.nanoTime();

        for(int i = 0; i < iterations; i++)
        {
            byte[] data = new byte[chunkSize];

            for(int j = 0; j < data.length; j++)
            {
                data[j] = (byte)(j % 127);
            }

            bos.write(data);
        }

        bos.close();

        long t2 = System.nanoTime();

        double seconds = ((double)(t2 - t1) / 1000000000.0);

        System.out.println("Writing took " + (t2 - t1) + 
                           " ns (" + seconds + " seconds).");

        System.out.println("Write rate " + (((double)chunkSize * 
                           iterations / seconds) / 
                           (1024.0 * 1024.0)) + " MB/s.");
    }

    private static void read() throws Exception
    {
        BufferedInputStream bis = new BufferedInputStream(
                                  new FileInputStream(new File(path)));

        long t1 = System.nanoTime();

        byte[] data;

        for(int i = 0; i < iterations; i++)
        {
            data = new byte[chunkSize];

            bis.read(data);

            meaningless += data[i % chunkSize];
        }

        bis.close();

        long t2 = System.nanoTime();

        System.out.println("meaningless is: " + meaningless + ".");

        double seconds = ((double)(t2 - t1) / 1000000000.0);

        System.out.println("Reading Took " + (t2 - t1) + 
                           " ns, which is " + 
                           seconds + " seconds.");

        System.out.println("Read rate " + (((double)chunkSize * 
                           iterations / seconds) / 
                           (1024.0 * 1024.0)) + " MB/s.");
    }
}

The problem here is two-fold:

  1. With iterations = 10M (writing ~23 GB to disk), a regular 7200 RPM drive gives very fast results, above its spec:


Test 1:
Writing took 148738975163 ns (148.738975163 seconds).
Write rate 160.29327810029918 MB/s.
meaningless is: 1246080000.
Reading Took 139143051529 ns, which is 139.143051529 seconds.
Read rate 171.34781541848795 MB/s.

Test 2:
Writing took 146591885655 ns (146.591885655 seconds).
Write rate 162.64104799270686 MB/s.
meaningless is: 1869120000.
Reading Took 139845492688 ns, which is 139.845492688 seconds.
Read rate 170.48713871206587 MB/s.

Test 3:
Writing took 152049678671 ns (152.049678671 seconds).
Write rate 156.8030798785472 MB/s.
meaningless is: 2492160000.
Reading Took 140152776858 ns, which is 140.152776858 seconds.
Read rate 170.11334662539255 MB/s.

Test 4:
Writing took 151363950081 ns (151.363950081 seconds).
Write rate 157.51344951950355 MB/s.
meaningless is: 3115200000.
Reading Took 139176911081 ns, which is 139.176911081 seconds.
Read rate 171.30612919179143 MB/s.

This seems strange - can the disk actually reach such speeds? I seriously doubt it, given that its benchmarked spec is lower - and that is without even going through Java output/input streams, which (to my novice eye) should not be optimal: http://hdd.userbenchmark.com/Toshiba-DT01ACA200-2TB/Rating/2736

  2. With iterations set to 1M (1000000), the numbers get completely crazy:


Test 1:
Writing took 6918084976 ns (6.918084976 seconds).
Write rate 344.6308912490619 MB/s.
meaningless is: 62304000.
Reading Took 2060226375 ns, which is 2.060226375 seconds.
Read rate 1157.244572706543 MB/s.

Test 2:
Writing took 6970893036 ns (6.970893036 seconds).
Write rate 342.0201369756931 MB/s.
meaningless is: 124608000.
Reading Took 2013661185 ns, which is 2.013661185 seconds.
Read rate 1184.0054368508995 MB/s.

Test 3:
Writing took 7140592101 ns (7.140592101 seconds).
Write rate 333.89188981705496 MB/s.
meaningless is: 186912000.
Reading Took 2011346987 ns, which is 2.011346987 seconds.
Read rate 1185.367719456367 MB/s.

Test 4:
Writing took 7140064035 ns (7.140064035 seconds).
Write rate 333.91658384694375 MB/s.
meaningless is: 249216000.
Reading Took 2041787713 ns, which is 2.041787713 seconds.
Read rate 1167.6952387535623 MB/s.

What kind of caching magic is this?? (What kind of cache can make writing faster??) :) And how do I undo it? I have written and read a 2.3 GB file! That would take an awful lot of caching if that is indeed the problem.

Thanks!

Your test is probably just reading from and writing to the OS page cache. The smaller data set fits in it entirely; the larger one does not, but it is still flushed asynchronously by the OS. You should try the DSYNC and SYNC open options (java.nio.file.StandardOpenOption).
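
For reference, here is a minimal sketch of what the write side of the benchmark might look like with DSYNC applied. It assumes Java 7+ NIO (Files.newOutputStream with StandardOpenOption) and reuses the question's path/chunkSize/iterations values; the class name SyncWriteSketch is just illustrative, not part of the original code. With DSYNC each flush has to reach the device before returning, so the measured rate should drop toward the drive's real spec.

import java.io.BufferedOutputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SyncWriteSketch
{
    public static void main(String[] args) throws Exception
    {
        String path = "C:\\test";   // same target file as in the question
        int chunkSize = 2500;
        int iterations = 1000000;   // the smaller run from the question

        // DSYNC makes every write to the underlying stream wait until the
        // data has reached the storage device, so the page cache can no
        // longer absorb the writes and report completion early.
        OutputStream os = Files.newOutputStream(Paths.get(path),
                StandardOpenOption.CREATE,
                StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE,
                StandardOpenOption.DSYNC);

        BufferedOutputStream bos = new BufferedOutputStream(os);

        byte[] data = new byte[chunkSize];
        for (int j = 0; j < data.length; j++)
        {
            data[j] = (byte) (j % 127);
        }

        long t1 = System.nanoTime();

        for (int i = 0; i < iterations; i++)
        {
            bos.write(data);
        }

        bos.close();                // flushes the buffer; each flush is synced

        long t2 = System.nanoTime();

        double seconds = (t2 - t1) / 1000000000.0;

        System.out.println("DSYNC write rate: "
                + (((double) chunkSize * iterations / seconds) / (1024.0 * 1024.0))
                + " MB/s.");
    }
}

Note that SYNC/DSYNC only constrain the write path; reading back a file that was just written will still be served from the page cache unless the file is much larger than RAM (which is why the 23 GB run already shows more realistic read numbers) or the cache is evicted by other means.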