Spark Streaming from Kafka and comparison with MemSQL records (count is incorrect)

We are consuming records from Kafka in Spark Streaming, extracting the card number from each Kafka record, and comparing it against the records in MemSQL by selecting the count and the card number grouped by card number. However, the count we see in Spark Streaming is incorrect.

For example, when we execute the count query at the memsql command prompt, it gives the following output:
memsql> select card_number,count(*) from cardnumberalert5 where 
inserted_time <= now() and inserted_time >= NOW() - INTERVAL 10 MINUTE group 
by card_number;
+------------------+----------+
| card_number      | count(*) |
+------------------+----------+
| 4556655960290527 |        2 |
| 6011255715328120 |        4 |
| 4532133676538232 |        2 |
| 6011614607071620 |        2 |
| 4024007117099605 |        2 |
| 347138718258304  |        4 |
+------------------+----------+

We noticed that the counts in Spark Streaming come back split up (distributed).

For example, the output for one card number when the query is executed from the memsql command prompt:
+------------------+----------+
| card_number      | count(*) |
+------------------+----------+
| 4556655960290527 |        2 |

When the same SQL is executed from Spark Streaming, it prints the output as:

RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1

Here the count should be 2, but instead we get two records for that card number, each with a count of 1.

Full output printed in Spark Streaming:

RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2
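
These repeated rows look like partial counts returned once per partition rather than one final total per card number. As a rough sketch of one possible workaround (not our actual code, and assuming the query is changed to alias count(*) as cnt), the partial counts could be re-aggregated in Spark before the comparison:

import org.apache.spark.sql.functions.sum

// Sketch only: merge per-partition partial counts into true totals.
// Assumes the MemSQL query was changed to
//   select card_number, count(*) as cnt from cardnumberalert5 ... group by card_number
// and that result is the DataFrame read from MemSQL in the program below.
val totals = result
  .groupBy("card_number")
  .agg(sum("cnt").as("cnt"))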

Spark Streaming program:

import java.util.HashMap

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.spark.SparkContext
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

class SparkKafkaConsumer11(val ssc: StreamingContext, val sc: SparkContext, val spark: org.apache.spark.sql.SparkSession, val topics: Array[String], val kafkaParam: scala.collection.immutable.Map[String, Object]) {

  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParam)
  )

  // Keep only the value from each (key, value) pair for processing
  val recordStream = stream.map(record => record.value)

  recordStream.foreachRDD { rdd =>

    val brokers = "174.24.154.244:9092" // Kafka broker used for the alert producer
    val props = new HashMap[String, Object]()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "SparkKafkaConsumer__11")
    val producer = new KafkaProducer[String, String](props)

    // Per-card-number counts from MemSQL for the last 10 minutes
    val result = spark.read
      .format("com.memsql.spark.connector")
      .options(Map(
        "query" -> "select card_number,count(*) from cardnumberalert5 where inserted_time <= now() and inserted_time >= NOW() - INTERVAL 10 MINUTE group by card_number",
        "database" -> "fraud"))
      .load()

    // Split each pipe-delimited record into an array of fields.
    // Note: the pipe must be escaped as "\\|", since split takes a regex.
    val record = rdd.map(line => line.split("\\|"))

    record.collect().foreach { recordRDD =>
      val now1 = System.currentTimeMillis
      val now = new java.sql.Timestamp(now1)
      val cardnumber_kafka = recordRDD(13).toString
      val sessionID = recordRDD(1).toString
      println("RECORDS FOUNDS ****************************************")
      println("CARDNUMBER KAFKA ############### " + cardnumber_kafka)

      result.collect().foreach { t =>
        val resm1 = t.getAs[String]("card_number")
        println("CARDNUMBER MEMSQL ############### " + resm1)
        val resm2 = t.getAs[Long]("count(*)")
        println("COUNT MEMSQL ############### " + resm2)

        // Alert when this card number has been seen three or more times
        if (resm1.equals(cardnumber_kafka)) {
          if (resm2 > 2) {
            println("INSIDE IF CONDITION FOR MORE THAN 3 COUNT" + now)
            val messageToKafka = "---- THIRD OR MORE OCCURANCE  ---- " + cardnumber_kafka
            val message = new ProducerRecord[String, String]("output1", 0, sessionID, messageToKafka)
            try {
              producer.send(message)
            } catch {
              case e: Exception =>
                e.printStackTrace
                System.exit(1)
            }
          }
        }
      }
    }

    producer.close()
  }
}
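
For context, a minimal driver that would wire this class up might look like the following; the topic name, group id, batch interval, and consumer settings here are illustrative assumptions, not values from our actual job:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkKafkaConsumer11App {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkKafkaConsumer11")
    val spark = SparkSession.builder().config(conf).getOrCreate()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(10)) // illustrative batch interval

    val kafkaParam: Map[String, Object] = Map(
      "bootstrap.servers" -> "174.24.154.244:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "SparkKafkaConsumer__11", // illustrative
      "auto.offset.reset" -> "latest"
    )

    // "input1" is a placeholder topic name
    new SparkKafkaConsumer11(ssc, spark.sparkContext, spark, Array("input1"), kafkaParam)

    ssc.start()
    ssc.awaitTermination()
  }
}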

We are not sure how to resolve this; any suggestion or help is much appreciated.

Thanks in advance.

We were able to resolve this issue by setting the following property in the Spark configuration.

Code:

 .set("spark.memsql.disablePartitionPushdown","true")