JavaRDD from IgniteRDD taking a long time

I have created an Apache Ignite application with Spark.

The application stores two tuples into an IgniteRDD.

When retrieve is called, the collect function takes more than 3 minutes.

The number of jobs submitted is more than 1000.

Code snippet:

import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.apache.ignite.spark.IgniteContext;
import org.apache.ignite.spark.IgniteRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class CopyOfMainIgnite {

    public static void main(String args[]) {
        SparkConf conf = new SparkConf().setAppName("Demo").setMaster(
                "spark://169.254.228.183:7077");
        System.out.println("Spark conf initialized.");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.addJar("./target/IgnitePOC-0.0.1-SNAPSHOT-jar-with-dependencies.jar");
        System.out.println("Spark context initialized.");
        IgniteContext ic = new IgniteContext(sc.sc(),
                "ignite/client-default-config.xml");
        System.out.println("Ignite Context initialized.");
        String cacheName = "demo6";
        save(sc, ic, cacheName);

        retrieve(ic, cacheName);
        ic.close(false);
        sc.close();

    }

    private static void retrieve(IgniteContext ic, String cacheName) {
        System.out.println("Getting IgniteRDD saved.");
        IgniteRDD<String, String> javaIRDDRet = ic.fromCache(cacheName);
        long temp1 = System.currentTimeMillis();

        JavaRDD<Tuple2<String, String>> javardd = javaIRDDRet.toJavaRDD();
        System.out
                .println("Is empty Start Time: " + System.currentTimeMillis());
        System.out.println("javaIRDDRet.isEmpty(): " + javardd.isEmpty());
        System.out.println("Is empty End Time: " + System.currentTimeMillis());
        long temp2 = System.currentTimeMillis();
        long temp3 = System.currentTimeMillis();
        System.out.println("collect and println Start Time: "
                + System.currentTimeMillis());
        javardd.collect().forEach(System.out::println);
        System.out.println("collect and println End Time: "
                + System.currentTimeMillis());
        long temp4 = System.currentTimeMillis();
        System.out.println("Is empty : " + temp1 + " " + temp2
                + " Collect and print: " + temp3 + " " + temp4);
    }

    private static void save(JavaSparkContext sc, IgniteContext ic,
            String cacheName) {
        IgniteRDD<String, String> igniteRDD = ic.fromCache(cacheName);
        System.out.println("IgniteRDD from cache initialized.");
        Tuple2<String, String> tempTuple1 = new Tuple2<String, String>(
                "Aditya", "Jain");
        Tuple2<String, String> tempTuple2 = new Tuple2<String, String>(
                "Pranjal", "Jaju");
        List<Tuple2<String, String>> list = new LinkedList<Tuple2<String, String>>();
        list.add(tempTuple1);
        list.add(tempTuple2);
        JavaPairRDD<String, String> jpr = sc.parallelizePairs(list, 4);
        System.out.println("Random RDD saved.");
        igniteRDD.savePairs(jpr.rdd(), false);
        System.out.println("IgniteRDD saved.");
    }
}

So my question is: is it expected to take 3-4 minutes to fetch these 2 tuples from Ignite and collect them in my process?

Or is my expectation wrong that it would respond in milliseconds?

After debugging I found that it creates 1024 partitions in the Ignite RDD, which causes it to fire 1024 jobs, and I am not able to control the number of partitions.

You can decrease the number of partitions in the CacheConfiguration:

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="32"/>
        </bean>
    </property>
</bean>
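If the cache configuration is created on the application side rather than in XML, the same affinity setting can be applied programmatically. A minimal sketch, assuming the cache name demo6 from the code above and that IgniteContext.fromCache(CacheConfiguration) creates the cache with this configuration when it does not exist yet:

    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.spark.IgniteContext;
    import org.apache.ignite.spark.IgniteRDD;

    public class PartitionConfigSketch {
        static IgniteRDD<String, String> fromSmallCache(IgniteContext ic) {
            CacheConfiguration<String, String> cacheCfg =
                    new CacheConfiguration<String, String>("demo6");
            // 32 partitions instead of the default 1024, so collect()
            // fires far fewer Spark jobs.
            cacheCfg.setAffinity(new RendezvousAffinityFunction(false, 32));
            return ic.fromCache(cacheCfg);
        }
    }
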

You can also use the IgniteRDD.sql(..) and IgniteRDD.objectSql(..) methods to retrieve data directly from Ignite, utilizing fast index search. For details on how to configure SQL in Ignite, refer to this page: https://apacheignite.readme.io/docs/sql-queries
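From Java, these queries are available through the JavaIgniteContext/JavaIgniteRDD wrappers. A sketch under the assumption that the cache demo6 from the code above was created with SQL indexing enabled (e.g. setIndexedTypes(String.class, String.class) on its CacheConfiguration):

    import org.apache.ignite.spark.JavaIgniteContext;
    import org.apache.ignite.spark.JavaIgniteRDD;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SqlQuerySketch {
        static void query(JavaSparkContext sc) {
            JavaIgniteContext<String, String> jic =
                    new JavaIgniteContext<>(sc, "ignite/client-default-config.xml");
            JavaIgniteRDD<String, String> rdd = jic.fromCache("demo6");
            // The query executes on the Ignite cluster using its indexes;
            // only the matching pairs come back to Spark, instead of a
            // full scan over all cache partitions.
            JavaPairRDD<String, String> matches =
                    rdd.objectSql("String", "_key = ?", "Aditya");
            matches.collect().forEach(System.out::println);
        }
    }
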