WSO2 DAS does not support Postgres?
I am using API Manager 1.10.0 and DAS 3.0.1.
I am trying to set up Postgres for DAS. There is no postgresql.sql script, so I used oracle.sql instead.
However, I am getting this exception:
[2016-08-11 15:06:25,079] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
java.lang.RuntimeException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:194)
at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:731)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Don't know how to save StructField(max_request_time,DecimalType(30,0),true) to JDBC
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$$anonfun.apply(carbon.scala:55)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString$$anonfun.apply(carbon.scala:42)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString.apply(carbon.scala:41)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$$anonfun$schemaString.apply(carbon.scala:38)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.sql.jdbc.carbon.package$JDBCWriteDetails$.schemaString(carbon.scala:38)
at org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:180)
... 26 more
The create-table script for API_REQUEST_SUMMARY is:
CREATE TABLE API_REQUEST_SUMMARY (
api character varying(100)
, api_version character varying(100)
, version character varying(100)
, apiPublisher character varying(100)
, consumerKey character varying(100)
, userId character varying(100)
, context character varying(100)
, max_request_time decimal(30)
, total_request_count integer
, hostName character varying(100)
, year SMALLINT
, month SMALLINT
, day SMALLINT
, time character varying(30)
, PRIMARY KEY(api,api_version,apiPublisher,consumerKey,userId,context,hostName,time)
);
How can I make this work with Postgres?
I had to define the column max_request_time as bigint.
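A minimal sketch of that change, assuming the rest of the schema stays exactly as above: only the max_request_time column switches from decimal(30) to bigint, and an existing table can be converted in place.

-- In the CREATE TABLE script, replace the decimal column with bigint:
--   , max_request_time bigint
-- If the table already exists, convert the column in place:
ALTER TABLE API_REQUEST_SUMMARY
    ALTER COLUMN max_request_time TYPE bigint
    USING max_request_time::bigint;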