Hadoop application cannot find Reducer
I am trying to create a MapReduce application that reads from an HBase table and writes the job's results to a text file. My driver code looks like this:
Configuration conf = HBaseConfiguration.create();
Job job = Job.getInstance(conf, "mr test");
job.setJarByClass(Driverclass.class);
job.setCombinerClass(reducername.class);
job.setReducerClass(reducername.class);

Scan scan = new Scan();
scan.setCaching(500);
scan.setCacheBlocks(false);
String qualifier = "qualifname"; // comma separated
String family = "familyname";
scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier));

TableMapReduceUtil.initTableMapperJob("tablename",
    scan,
    mappername.class,
    Text.class, Text.class,
    job);
When initTableMapperJob is called, I get a ClassNotFoundException: class reducername not found.
The class is defined in another Java file in the same package. I tried the usual wordcount example with almost identical configuration and it worked fine. Then I changed the mapper's type and the way it is configured, and I got this error. Can anyone help me?
Edit: the code of the reducer class is:
package mr.roadlevelmr;

import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

public class reducername extends Reducer<Text, Text, Text, Text> {
    private Text result = new Text();

    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        ArrayList<String> means = new ArrayList<String>();
        for (Text val : values) {
            means.add(String.valueOf(val.getBytes()));
        }
        result.set(newMean(means));
        context.write(key, result);
    }
}
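Separately from the ClassNotFoundException: `String.valueOf(val.getBytes())` in the reduce loop will not give you the text content. Java has no `String.valueOf(byte[])` overload, so the call falls back to `String.valueOf(Object)` and returns the array's reference string (something like `[B@1b6d3586`). A minimal plain-Java sketch of the difference, using a raw `byte[]` in place of Hadoop's `Text`:

```java
import java.nio.charset.StandardCharsets;

public class ByteToStringDemo {
    public static void main(String[] args) {
        byte[] bytes = "42.5".getBytes(StandardCharsets.UTF_8);

        // No String.valueOf(byte[]) overload exists, so this resolves to
        // String.valueOf(Object) and yields the array's toString(), e.g. "[B@1b6d3586"
        String wrong = String.valueOf(bytes);

        // Decoding the bytes explicitly recovers the original text
        String right = new String(bytes, StandardCharsets.UTF_8);

        System.out.println(wrong.startsWith("[B@")); // true
        System.out.println(right);                   // 42.5
    }
}
```

In the reducer itself, `val.toString()` is the safe way to get the string: `Text.getBytes()` returns the backing buffer, which may be longer than the valid data (`getLength()` bytes), so even an explicit `new String(...)` over the whole array can include stale bytes.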
You should use the MapReduce utility as follows:
TableMapReduceUtil.initTableMapperJob("tablename",
    scan,
    mappername.class,
    Text.class, Text.class,
    job);

OK, I think I found the issue! Then add the reducer and combiner:
job.setCombinerClass(reducername.class);
job.setReducerClass(reducername.class);
boolean b = job.waitForCompletion(true);
instead of adding the reducer to the table mapper job.