How can I use graphframes with pyspark on AWS EMR?

I am trying to use the graphframes package with pyspark in a Jupyter Notebook on AWS EMR (using SageMaker and sparkmagic). I tried adding a configuration option when creating the EMR cluster in the AWS console:

[{"classification":"spark-defaults", "properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}, "configurations":[]}]

But I still get an error when I try to use the graphframes package in my pyspark code in the Jupyter notebook.

Here is my code (taken from the graphframes example):

# Create a Vertex DataFrame with unique ID column "id"
v = spark.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])
# Create an Edge DataFrame with "src" and "dst" columns
e = spark.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
from graphframes import *
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()

Here is the output/error:

ImportError: No module named graphframes

I read through this git thread, but all of the potential fixes seem very involved and require ssh-ing into the master node of the EMR cluster.
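
As a side note, one quick way to see whether the spark-defaults classification was picked up at all by the notebook's session is to read the property back from the SparkConf (a minimal sketch of my own, assuming the live `spark` session that sparkmagic provides):

# Sanity-check sketch (not from the original post): read back spark.jars.packages
# to confirm the EMR classification actually reached this Spark session.
print(spark.sparkContext.getConf().get("spark.jars.packages", "<not set>"))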

I finally figured out that there is a PyPi package for graphframes, which supplies the Python module that the notebook's import needs. I used this to create a bootstrap action as detailed here, although I changed a few things.

Here is what I did to get graphframes working on EMR:

  1. First I created a shell script and saved it to S3 as "install_jupyter_libraries_emr.sh":
#!/bin/bash

# Install the graphframes Python bindings so that "import graphframes"
# resolves in the notebook's PySpark session.
sudo pip install graphframes
  2. Then I went through the advanced options EMR creation process in the AWS console.
    • In Step 1, I added the maven coordinates of the graphframes package to the "Edit software settings" text box:
    [{"classification":"spark-defaults","properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}}]
    
    • In Step 3: General Cluster Settings, I went into the bootstrap actions section
    • In the bootstrap actions section, I added a new custom bootstrap action with:
      • an arbitrary name
      • the S3 location of my "install_jupyter_libraries_emr.sh" script
      • no optional arguments
    • I then started the cluster creation
  3. Once the cluster was up, I went into Jupyter and ran my code:
# Create a Vertex DataFrame with unique ID column "id"
v = spark.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])
# Create an Edge DataFrame with "src" and "dst" columns
e = spark.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
from graphframes import *
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()

This time, I finally got the correct output:

+---+--------+
| id|inDegree|
+---+--------+
|  c|       1|
|  b|       2|
+---+--------+

+---+------------------+
| id|          pagerank|
+---+------------------+
|  b|1.0905890109440908|
|  a|              0.01|
|  c|1.8994109890559092|
+---+------------------+
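
As a quick follow-up check (my addition, not part of the original steps), the bootstrap action can also be verified directly from the notebook, since it is what makes the Python module importable:

# Sketch of a sanity check: the bootstrap action installed the graphframes
# Python bindings on the cluster, so the module should now resolve in the
# PySpark kernel.
import graphframes
print(graphframes.__file__)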

@Bob Swain's answer is great, but the graphframes repository has since moved to https://repos.spark-packages.org/. So for this to work, the classification should be changed to:

[
 {
  "classification":"spark-defaults",
  "properties":{
    "spark.jars.packages":"graphframes:graphframes:0.8.0-spark2.4-s_2.11",
    "spark.jars.repositories":"https://repos.spark-packages.org/"
  }
 }
]
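
For completeness, the same two properties can also be set when building a SparkSession yourself (a sketch for a self-managed session, e.g. local testing; on EMR notebooks the classification above is still the right place, since sparkmagic provides the session):

# Sketch only: the same configuration applied through the standard SparkSession
# builder instead of an EMR classification. Both keys are ordinary Spark
# configuration properties.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("graphframes-example")
    .config("spark.jars.packages", "graphframes:graphframes:0.8.0-spark2.4-s_2.11")
    .config("spark.jars.repositories", "https://repos.spark-packages.org/")
    .getOrCreate()
)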