How to make Spark's DataFrameReader.jdbc accept a Postgres array of a custom type
When reading a table with the DataFrameReader.jdbc() function, I get:
java.sql.SQLException: Unsupported type ARRAY
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:251)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.apply(JDBCRelation.scala:225)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:313)
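For reference, a minimal read that triggers this error looks roughly like the sketch below (the URL, credentials, and app name are placeholders, not values from my actual setup):

```scala
import org.apache.spark.sql.SparkSession

object ReadCheckType {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("read-check-type") // placeholder app name
      .master("local[*]")
      .getOrCreate()

    // Schema resolution fails with "java.sql.SQLException: Unsupported type
    // ARRAY": JdbcUtils.getCatalystType has no Catalyst mapping for an array
    // whose element type is a Postgres composite type such as my_type.
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://localhost:5432/mydb") // placeholder URL
      .option("dbtable", "public.check_type")
      .option("user", "user")         // placeholder credentials
      .option("password", "secret")
      .load()

    df.printSchema()
  }
}
```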
The table looks like this:
mydb=> \d check_type;
Table "public.check_type"
Column | Type | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
id | integer | | |
types | my_type[] | | |
mydb=> \d my_type;
Composite type "public.my_type"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
id | integer | | |
Is there a workaround, or is this expected behavior?
Spark versions:
spark-core = 2.4.0;
spark-sql = 2.4.0;