Working With the Reducer class in Hadoop

I am building a MapReduce job.

The problem is that the string comparison inside my reducer is not working.

How do I compare these strings?

public void reduce(Text key, Iterable<Text> values,
        Context context) throws IOException, InterruptedException {

    int numItems1 = 0;

    for (Text val1 : values) {
        if(val1.toString()  == "view")   /// if not working
        {
            numItems1 = numItems1+1;
            numItems2 = 0;
            numItems3 = 0;
            result1.set( numItems1 );
            // val1.set(   result1 + "," + numItems2 + "," +   numItems3  );
        }
    }

    val1.set(result1 + ",");
    context.write(key,val1);
}

As copeg said, this has nothing to do with Hadoop or the Reducer; the problem comes from your string comparison. In Java you should use

val1.toString().equals("view")
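The == operator compares object references, while equals() compares character content, so two distinct String objects that both contain "view" will fail an == check even though their text is identical.

For reference, here is a minimal corrected sketch of the reducer. It assumes (based on your snippet) that the goal is simply to count how many values equal "view" per key and to emit that count followed by a comma; the class name ViewCountReducer is just illustrative:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ViewCountReducer extends Reducer<Text, Text, Text, Text> {

    // Reuse one output object instead of allocating a new Text per call
    private final Text outValue = new Text();

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int numItems1 = 0;

        for (Text val1 : values) {
            // equals() compares the string contents; == only compares references
            if ("view".equals(val1.toString())) {
                numItems1++;
            }
        }

        // Emit the count in the same "count," format used in the question
        outValue.set(numItems1 + ",");
        context.write(key, outValue);
    }
}

Writing the literal first ("view".equals(...)) is a common defensive habit, and moving the output write outside the loop ensures each key is written exactly once with its final count.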