Zeppelin: Scala DataFrame to Python
If I have a Scala paragraph with a DataFrame, can I share it with and use it from Python? (As I understand it, pyspark uses py4j.)
I tried this:
Scala paragraph:
x.printSchema
z.put("xtable", x)
Python paragraph:
%pyspark
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
the_data = z.get("xtable")
print the_data
sns.set()
g = sns.PairGrid(data=the_data,
                 x_vars=dependent_var,
                 y_vars=sensor_measure_columns_names + operational_settings_columns_names,
                 hue="UnitNumber", size=3, aspect=2.5)
g = g.map(plt.plot, alpha=0.5)
g = g.set(xlim=(300,0))
g = g.add_legend()
The error:
Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark.py", line 222, in <module>
    eval(compiledCode)
  File "<string>", line 15, in <module>
  File "/usr/local/lib/python2.7/dist-packages/seaborn/axisgrid.py", line 1223, in __init__
    hue_names = utils.categorical_order(data[hue], hue_order)
TypeError: 'JavaObject' object has no attribute '__getitem__'
The solution:
%pyspark
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import StringIO
def show(p):
    img = StringIO.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print "%html <div style='width:600px'>" + img.buf + "</div>"
df = sqlContext.table("fd")
df.printSchema()
pdf = df.toPandas()
g = sns.pairplot(data=pdf,
                 x_vars=["setting1", "setting2"],
                 y_vars=["s4", "s3",
                         "s9", "s8",
                         "s13", "s6"],
                 hue="id", aspect=2)
show(g)
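The `show` helper above is written for Python 2 (`StringIO` module, `print` statement). A minimal Python 3 sketch of the same idea, assuming Zeppelin's `%html` display mechanism, could look like this:

```python
import io

def show(p):
    # Render any object with a savefig method (a matplotlib Figure,
    # a seaborn PairGrid, ...) as inline SVG via Zeppelin's %html display.
    img = io.StringIO()  # io.StringIO replaces Python 2's StringIO.StringIO
    p.savefig(img, format='svg')
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
```

The width of the wrapping `div` is arbitrary; adjust it to the notebook layout.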
You can register the DataFrame as a temporary table in Scala:
// registerTempTable in Spark 1.x
df.createTempView("df")
and read it in Python with SQLContext.table:
df = sqlContext.table("df")
If you really want to use put / get, you'll have to build the Python DataFrame from scratch:
z.put("df", df: org.apache.spark.sql.DataFrame)
from pyspark.sql import DataFrame
df = DataFrame(z.get("df"), sqlContext)
To plot with matplotlib, you'll have to convert the DataFrame to a local Python object, using collect or toPandas:
pdf = df.toPandas()
Note that this will fetch the data to the driver.