Column Name Character Limit in MemSQL

I have a data load job into MemSQL (via the Spark connector) that fails because the column names exceed the allowed length limit. Is there a way to work around this? I cannot change the column names, because they are generated programmatically and I have no control over them.

Error message:

Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Identifier name '10000_BREAKING_BAD_IS_WAY_BETTER_THAN_THE_GAME_OF_THRONES_10000_LOWER_TOLERANCE' is too long
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
    at com.mysql.jdbc.Util.getInstance(Util.java:360)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:978)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2526)
    at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1618)
    at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1549)
    at com.memsql.spark.connector.DataFrameFunctions.createMemSQLTableFromSchema(DataFrameFunctions.scala:169)
    at com.memsql.spark.connector.DataFrameFunctions.createMemSQLTableAs(DataFrameFunctions.scala:104)
    at com.rb.pal.dm.MemSQLWriter$.main(MemSQLWriter.scala:65)
    at com.rb.pal.dm.MemSQLWriter.main(MemSQLWriter.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Is there a configuration setting that would allow more characters in column names?

I am loading the data directly from Parquet files into a MemSQL table:

df.createMemSQLTableAs(dbName, tableName, dbHost, dbPort, user, password, useKeylessShardedOptimization = true)

Unfortunately, no. As of this answer, there is no configuration setting available for this.

If possible, try to enforce the length limit at the level where the column names are generated.
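
As an illustration only, here is a minimal Spark-side sketch of that idea: truncate any over-long column name and append an index so the shortened names stay unique, then write the renamed DataFrame with the connector as before. The helper names (shortenColumnNames, maxIdentifierLength) are hypothetical, and it assumes the MySQL-style 64-character identifier limit applies to MemSQL.

    import org.apache.spark.sql.DataFrame

    // Assumed identifier limit (MySQL-style, 64 characters)
    val maxIdentifierLength = 64

    // Truncate over-long column names and append the column index so the
    // shortened names remain unique (handles up to 9999 columns here)
    def shortenColumnNames(df: DataFrame): DataFrame = {
      val renamed = df.columns.zipWithIndex.map { case (name, idx) =>
        if (name.length <= maxIdentifierLength) name
        else name.take(maxIdentifierLength - 6) + "_c" + idx
      }
      df.toDF(renamed: _*)
    }

    // Usage with the connector call from the question:
    // val safeDf = shortenColumnNames(df)
    // safeDf.createMemSQLTableAs(dbName, tableName, dbHost, dbPort, user, password,
    //   useKeylessShardedOptimization = true)

Note that truncation loses information, so if the original names must be recoverable you would also need to persist the mapping from original to shortened names somewhere (for example in a separate lookup table).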