Getting the Summary of Whole Dataset or Only Columns in Apache Spark Java
For the dataset below, to get the Total summary value for each Col1, I did
import org.apache.spark.sql.functions._
val totaldf = df.groupBy("Col1").agg(lit("Total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"))
and then merged it with the original dataset using
df.union(totaldf).orderBy(col("Col1"), col("Col2").desc).show(false)
df:
+-----------+-------+--------+--------------+
| Col1 | Col2 | price | displayPrice |
+-----------+-------+--------+--------------+
| Category1 | item1 | 15 | 14 |
| Category1 | item2 | 11 | 10 |
| Category1 | item3 | 18 | 16 |
| Category2 | item1 | 16     | 15           |
| Category2 | item2 | 11     | 10           |
| Category2 | item3 | 19     | 17           |
+-----------+-------+--------+--------------+
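For reproducibility, here is a minimal sketch of how the sample dataframe above could be built; the SparkSession name spark is an assumption, not from the original post:

import spark.implicits._ // assuming `spark` is the active SparkSession

// sample data matching the table above
val df = Seq(
  ("Category1", "item1", 15, 14),
  ("Category1", "item2", 11, 10),
  ("Category1", "item3", 18, 16),
  ("Category2", "item1", 16, 15),
  ("Category2", "item2", 11, 10),
  ("Category2", "item3", 19, 17)
).toDF("Col1", "Col2", "price", "displayPrice")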
After the union:
+-----------+-------+-------+--------------+
| Col1 | Col2 | price | displayPrice |
+-----------+-------+-------+--------------+
| Category1 | Total | 44 | 40 |
| Category1 | item1 | 15 | 14 |
| Category1 | item2 | 11 | 10 |
| Category1 | item3 | 18 | 16 |
| Category2 | Total | 46    | 42           |
| Category2 | item1 | 16 | 15 |
| Category2 | item2 | 11 | 10 |
| Category2 | item3 | 19 | 17 |
+-----------+-------+-------+--------------+
Now I want a summary of the whole dataset as shown below, with the whole-dataset summary labeled Total in Col1, alongside all the Col1 and Col2 data. Required output:
+-----------+-------+-------+--------------+
| Col1 | Col2 | price | displayPrice |
+-----------+-------+-------+--------------+
| Total     | Total | 90    | 82           |
| Category1 | Total | 44 | 40 |
| Category1 | item1 | 15 | 14 |
| Category1 | item2 | 11 | 10 |
| Category1 | item3 | 18 | 16 |
| Category2 | Total | 46    | 42           |
| Category2 | item1 | 16 | 15 |
| Category2 | item2 | 11 | 10 |
| Category2 | item3 | 19 | 17 |
+-----------+-------+-------+--------------+
How can I achieve the result above?
Create a third dataframe from totaldf as
val finalTotalDF = totaldf.select(lit("Total").as("Col1"), lit("Total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"))
and then use it in the union (union is positional, so the column order and count must match) as
df.union(totaldf).union(finalTotalDF).orderBy(col("Col1"), col("Col2").desc).show(false)
You should get the final dataframe.
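As an aside, Spark's built-in rollup can compute the per-category subtotals and the grand total in a single pass; this is just a sketch of that alternative, not the method used above:

import org.apache.spark.sql.functions._

// rollup emits one row per (Col1, Col2) pair, a subtotal per Col1 (Col2 = null)
// and a grand total (both null); coalesce relabels the null grouping keys as "Total"
val rolled = df.rollup("Col1", "Col2")
  .agg(sum("price").as("price"), sum("displayPrice").as("displayPrice"))
  .select(
    coalesce(col("Col1"), lit("Total")).as("Col1"),
    coalesce(col("Col2"), lit("Total")).as("Col2"),
    col("price"),
    col("displayPrice"))

The ordering of the result still needs handling, which is what the rest of this answer deals with.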
Updated:
If the ordering matters to you, you should change the T of Total in the Col2 column to lowercase t, making it total. In a lexicographic sort, uppercase T orders before lowercase i, so a descending sort on Col2 would push Total below the item rows, whereas lowercase total sorts above them:
import org.apache.spark.sql.functions._
val totaldf = df.groupBy("Col1").agg(lit("total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"))
val finalTotalDF= totaldf.select(lit("Total").as("Col1"), lit("total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"))
df.union(totaldf).union(finalTotalDF).orderBy(col("Col1").desc, col("Col2").desc).show(false)
You should get
+---------+-----+-----+------------+
|Col1 |Col2 |price|displayPrice|
+---------+-----+-----+------------+
|Total |total|90 |82 |
|Category2|total|46 |42 |
|Category2|item3|19 |17 |
|Category2|item2|11 |10 |
|Category2|item1|16 |15 |
|Category1|total|44 |40 |
|Category1|item3|18 |16 |
|Category1|item2|11 |10 |
|Category1|item1|15 |14 |
+---------+-----+-----+------------+
If the ordering mentioned in the comments really matters to you:
"I want the total data as priority, so I want that to be at the top, which is actually the requirement for me"
then you can create another column to sort on:
import org.apache.spark.sql.functions._
val totaldf = df.groupBy("Col1").agg(lit("Total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"), lit(1).as("sort"))
val finalTotalDF= totaldf.select(lit("Total").as("Col1"), lit("Total").as("Col2"), sum("price").as("price"), sum("displayPrice").as("displayPrice"), lit(0).as("sort"))
finalTotalDF.union(totaldf).union(df.withColumn("sort", lit(2))).orderBy(col("sort"), col("Col1"), col("Col2")).drop("sort").show(false)
You should get
+---------+-----+-----+------------+
|Col1 |Col2 |price|displayPrice|
+---------+-----+-----+------------+
|Total |Total|90 |82 |
|Category1|Total|44 |40 |
|Category2|Total|46 |42 |
|Category1|item1|15 |14 |
|Category1|item2|11 |10 |
|Category1|item3|18 |16 |
|Category2|item1|16 |15 |
|Category2|item2|11 |10 |
|Category2|item3|19 |17 |
+---------+-----+-----+------------+
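Equivalently, instead of materializing a sort column in every dataframe, the same priority can be derived at sort time with when; a sketch, assuming totaldf and finalTotalDF are built without the extra sort column as in the first snippet:

import org.apache.spark.sql.functions._

// 0 = grand total, 1 = per-category totals, 2 = item rows
val sortKey = when(col("Col1") === "Total", 0)
  .when(col("Col2") === "Total", 1)
  .otherwise(2)

df.union(totaldf).union(finalTotalDF)
  .orderBy(sortKey, col("Col1"), col("Col2"))
  .show(false)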