WSO2 DAS Spark script fails to execute
Below is the Spark script I am trying to execute. It runs successfully on the DAS (3.0.1) batch analytics console, but it fails to execute when saved as a script under batch analytics.
insert overwrite table CLASS_COUNT select ((timestamp / 120000) * 120000) as time , vin , username , classType,
sum(acceleCount) as acceleCount , sum(decceleCount) as decceleCount
from ACCELE_COUNTS
group by ((timestamp / 120000) * 120000) ,classType, vin, username;
Error:
ERROR: [1.199] failure: ``limit'' expected but identifier ACCELE_COUNTSgroup found insert overwrite table X1234_CLASS_COUNT select ((timestamp / 120000) * 120000) as time , vin , username , classType, sum(acceleCount) as acceleCount , sum(decceleCount) as decceleCountfrom ACCELE_COUNTSgroup by ((timestamp / 120000) * 120000) ,classType, vin, username ^
Before this, I was executing the following without any problem.
CREATE TEMPORARY TABLE ACCELE_COUNTS
USING CarbonAnalytics
OPTIONS (tableName "KAMPANA_RECKLESS_COUNT_STREAM",
schema "timestamp LONG , vin STRING, username STRING, classType STRING, acceleCount INT,decceleCount INT");
CREATE TEMPORARY TABLE CLASS_COUNT
USING org.wso2.carbon.analytics.spark.event.EventStreamProvider
OPTIONS (receiverURL "tcp://localhost:7611",
username "admin",
password "admin",
streamName "DAS_RECKELSS_COUNT_STREAM",
version "1.0.0",
description "Events are published when product quantity goes beyond a certain level",
nickName "product alerts",
payload "time LONG,vin STRING,username STRING, classType STRING, acceleCount INT, decceleCount INT"
);
This happens because there is no space between:
1) decceleCount and from
2) ACCELE_COUNTS and group by
So make sure there is a space between words, even when the second word starts on a new line.
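As a sketch of the fix, here is the same query with the space made explicit at the start of each wrapped line, so it survives even if the saved script's lines are joined without whitespace (the only change from the original query is the added space before from and before group by):

insert overwrite table CLASS_COUNT select ((timestamp / 120000) * 120000) as time , vin , username , classType,
sum(acceleCount) as acceleCount , sum(decceleCount) as decceleCount
 from ACCELE_COUNTS
 group by ((timestamp / 120000) * 120000) , classType, vin, username;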