AWS EMR PySpark connect to MySQL
I'm trying to connect to MySQL from PySpark using JDBC. I can do this outside of EMR, but when I try it on EMR, pyspark fails to start correctly.
The command I use on my machine:
pyspark --conf spark.executor.extraClassPath=/home/hadoop/mysql-connector-java-5.1.38-bin.jar --driver-class-path /home/hadoop/mysql-connector-java-5.1.38-bin.jar --jars /home/hadoop/mysql-connector-java-5.1.38-bin.jar
and I get the following output:
16/05/18 14:29:21 INFO Client: Application report for application_1463578502297_0011 (state: FAILED)
16/05/18 14:29:21 INFO Client:
client token: N/A
diagnostics: Application application_1463578502297_0011 failed 2 times due to AM Container for appattempt_1463578502297_0011_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://ip-10-24-0-75.ec2.internal:8088/cluster/app/application_1463578502297_0011Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1463578502297_0011_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1463581754050
final status: FAILED
tracking URL: http://ip-10-24-0-75.ec2.internal:8088/cluster/app/application_1463578502297_0011
user: hadoop
16/05/18 14:29:21 INFO Client: Deleting staging directory .sparkStaging/application_1463578502297_0011
16/05/18 14:29:21 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
I also tried without the extra jar, connecting with mariadb.jdbc, which I've read is the default driver:
from pyspark.sql import SQLContext
sqlctx = SQLContext(sc)
df = sqlctx.read.format("jdbc") \
    .option("url", "jdbc:mysql://ip:port/db") \
    .option("driver", "com.mariadb.jdbc.Driver") \
    .option("dbtable", "...") \
    .option("user", "....") \
    .option("password", "...") \
    .load()
but I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 139, in load
return self._df(self._jreader.load())
File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o81.load.
: java.lang.ClassNotFoundException: com.mariadb.jdbc.Driver
at java.net.URLClassLoader.run(URLClassLoader.java:366)
at java.net.URLClassLoader.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory.apply(JdbcUtils.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory.apply(JdbcUtils.scala:45)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
What should I do?
Thanks,
Pedro Rosanes.
If you want to run any Spark job on Amazon EMR 3.x or EMR 4.x, you need to do the following:
1) You can set the spark-defaults.conf properties at bootstrap time, i.e. you can change the driver classpath and executor classpath configuration properties, and also maximizeResourceAllocation (ask in the comments if you need more information). docs
2) You need to download all the required jars, i.e. mysql-connector.jar and mariadb-connector.jar (in your case the MariaDB and MySQL connector JDBC jars), into every classpath location such as Spark, YARN, and Hadoop on all nodes, whether MASTER, CORE, or TASK (the Spark on YARN scenario covers most of this). bootstrap scripts docs
3) If your Spark job communicates with your database only from the driver node, then using --jars alone may be enough; it will not throw the exception and will work fine.
4) It is also recommended that you try running with the master set to yarn-cluster instead of local or yarn-client.
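Putting the options above together, a submit command might look like the sketch below. The jar path and the job file name (`your_job.py`) are assumptions; adjust them to wherever you placed the connector:

```shell
# Assumed path to the MySQL connector jar, present on every node.
JDBC_JAR=/home/hadoop/mysql-connector-java-5.1.38-bin.jar

# --jars ships the jar with the application, while the extraClassPath
# settings ensure both the driver JVM and the executor JVMs can load
# the JDBC driver class.
spark-submit \
  --master yarn-cluster \
  --jars "$JDBC_JAR" \
  --conf spark.driver.extraClassPath="$JDBC_JAR" \
  --conf spark.executor.extraClassPath="$JDBC_JAR" \
  your_job.py
```

With yarn-cluster mode the driver itself runs inside a YARN container, which is why the jar has to be reachable on every node, not just the machine you submit from.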
In your case, if you are using MariaDB or MySQL, copy your jars to $SPARK_HOME/lib, $HADOOP_HOME/lib, and so on, on every node of your cluster, and then give it a try.
Later, you can use a Bootstrap action when creating the cluster to copy your jars onto all nodes.
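Such a bootstrap action could be a small shell script stored in S3 and registered at cluster creation. This is only a sketch; the bucket name is a placeholder and the lib directory paths vary by EMR release:

```shell
#!/bin/bash
# Hypothetical bootstrap action: fetch the JDBC connector jar from S3 and
# place it where Spark and Hadoop pick it up, on every node of the cluster.
set -e

JAR=mysql-connector-java-5.1.38-bin.jar
aws s3 cp "s3://your-bucket/jars/$JAR" "/home/hadoop/$JAR"

# Copy into the Spark and Hadoop lib directories if they exist.
for DIR in /usr/lib/spark/lib /usr/lib/hadoop/lib; do
  [ -d "$DIR" ] && sudo cp "/home/hadoop/$JAR" "$DIR/"
done
```

You would then reference it with something like `aws emr create-cluster ... --bootstrap-actions Path=s3://your-bucket/copy-jdbc-jar.sh`, so it runs on every node before the applications start.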
Please comment below for more information.