Why is there no reducer when running 1TB teragen?

I am running the Hadoop terasort benchmark with the following command:

hadoop jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen -Dmapreduce.job.maps=100 1t random-data

It printed the following logs for the 100 map tasks:

18/03/27 13:06:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/27 13:06:04 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
18/03/27 13:06:05 INFO terasort.TeraSort: Generating -727379968 using 100
18/03/27 13:06:05 INFO mapreduce.JobSubmitter: number of splits:100
18/03/27 13:06:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1522131782827_0001
18/03/27 13:06:06 INFO impl.YarnClientImpl: Submitted application application_1522131782827_0001
18/03/27 13:06:06 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1522131782827_0001/
18/03/27 13:06:06 INFO mapreduce.Job: Running job: job_1522131782827_0001
18/03/27 13:06:16 INFO mapreduce.Job: Job job_1522131782827_0001 running in uber mode : false
18/03/27 13:06:16 INFO mapreduce.Job:  map 0% reduce 0%
18/03/27 13:06:29 INFO mapreduce.Job:  map 2% reduce 0%
18/03/27 13:06:31 INFO mapreduce.Job:  map 3% reduce 0%
18/03/27 13:06:32 INFO mapreduce.Job:  map 5% reduce 0%

....
18/03/27 13:09:27 INFO mapreduce.Job:  map 100% reduce 0%

These are the final counters printed to the console:

18/03/27 13:09:29 INFO mapreduce.Job: Counters: 30
File System Counters
    FILE: Number of bytes read=0
    FILE: Number of bytes written=10660990
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=8594
    HDFS: Number of bytes written=0
    HDFS: Number of read operations=400
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=200
Job Counters 
    Launched map tasks=100
    Other local map tasks=100
    Total time spent by all maps in occupied slots (ms)=983560
    Total time spent by all reduces in occupied slots (ms)=0
    Total time spent by all map tasks (ms)=983560
    Total vcore-milliseconds taken by all map tasks=983560
    Total megabyte-milliseconds taken by all map tasks=1007165440
Map-Reduce Framework
    Map input records=0
    Map output records=0
    Input split bytes=8594
    Spilled Records=0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=9746
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=11220811776
File Input Format Counters 
    Bytes Read=0
File Output Format Counters 
    Bytes Written=0

Here is the output from the job scheduler (screenshot omitted).

So why are there no reduce tasks?

Your command shows that you are running teragen, not terasort. teragen only generates data, which you can then use as input for terasort, so no reducers are needed.

To run terasort on the data you just generated, run:

hadoop jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar terasort random-data terasort-output

Then you should see the reducers.
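
For what it's worth, "no reducers" is just job configuration: a MapReduce job with zero reduce tasks skips the shuffle/sort phase entirely and writes each mapper's output straight to HDFS. Here is a minimal map-only job sketch using the standard MapReduce API (the class and mapper names are illustrative, not TeraGen's actual source):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJob {

    // Pass-through mapper: emits each input line unchanged.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only example");
        job.setJarByClass(MapOnlyJob.class);
        job.setMapperClass(PassThroughMapper.class);
        // The key line: zero reducers means no shuffle/sort phase;
        // mapper output is written directly to the output directory.
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

TeraGen does essentially this internally (with an input format that synthesizes rows instead of reading files), which is why your counters show 100 launched map tasks and "Total time spent by all reduces in occupied slots (ms)=0".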

No reduce tasks run when executing teragen. From the documentation:

TeraGen will run map tasks to generate the data and will not run any reduce tasks. The default number of map tasks is defined by the "mapreduce.job.maps=2" param. Its only purpose here is to generate the 1TB of random data in the following format: "10 bytes key | 2 bytes break | 32 bytes ascii/hex | 4 bytes break | 48 bytes filler | 4 bytes break | \r\n".
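
A quick back-of-the-envelope check (my arithmetic, not part of the documentation): the field widths quoted above sum to 100 bytes per row, so 1TB of output corresponds to 10^10 rows. TeraGen also accepts human-readable suffixes for the row count, so the 1t in your command most likely parses as 10^12 rows; the odd negative number in your "Generating -727379968" log line is exactly 10^12 narrowed to a 32-bit int, which supports that reading. A small Java sketch (class name is mine):

public class TeraGenArithmetic {
    public static void main(String[] args) {
        // Field widths from the record format quoted above; they sum to 100.
        int bytesPerRow = 10 + 2 + 32 + 4 + 48 + 4;
        System.out.println("bytes per row      = " + bytesPerRow);         // 100

        // 1TB of output therefore needs 10^10 rows.
        long oneTB = 1_000_000_000_000L;
        System.out.println("rows for 1TB       = " + oneTB / bytesPerRow); // 10000000000

        // Assumption: "1t" is a row count with a human-readable suffix,
        // i.e. 10^12 rows. Narrowing it to a 32-bit int reproduces the
        // value in the log line "Generating -727379968 using 100".
        long requestedRows = 1_000_000_000_000L;
        System.out.println("rows as 32-bit int = " + (int) requestedRows); // -727379968
    }
}

If that is right, note that 10^12 rows at 100 bytes each would be about 100TB of data; to generate exactly 1TB you would pass 10000000000 (10^10) as the row count.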