How to fix PySpark Delta Lake OPTIMIZE - SQL cannot be parsed
I have a Delta table created with Spark 3.x and Delta 0.7.x:
data = spark.range(0,5)
data.write.format("delta").mode("overwrite").save("tmp/delta-table")
# add some more files
data = spark.range(20,100)
data.write.format("delta").mode("append").save("tmp/delta-table")
df = spark.read.format("delta").load("tmp/delta-table")
df.show()
Now a lot of files have been generated in the transaction log (in many cases, parquet files that are too small).
%ls tmp/delta-table
I want to compact them:
df.createGlobalTempView("my_delta_table")
spark.sql("OPTIMIZE my_delta_table ZORDER BY (id)")
This fails:
ParseException:
mismatched input 'OPTIMIZE' expecting {'(','ADD','ALTER','ANALYZE','CACHE','CLEAR','COMMENT','COMMIT','CREATE','DELETE','DESC','DESCRIBE','DFS','DROP','EXPLAIN','EXPORT','FROM','GRANT','IMPORT','INSERT','LIST','LOAD','LOCK','MAP','MERGE','MSCK','REDUCE','REFRESH','REPLACE','RESET','REVOKE','ROLLBACK','SELECT','SET','SHOW','START','TABLE','TRUNCATE','UNCACHE','UNLOCK','UPDATE','USE','VALUES','WITH'}(line 1, pos 0)
== sql ==
OPTIMIZE my_delta_table ZORDER BY (id)
^^^
Question: how can I compact the files in this Delta table?
Note:
Spark is started like this:
import pyspark
from pyspark.sql import SparkSession
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.jars.packages", "io.delta:delta-core_2.12:0.7.0") \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \
.getOrCreate()
from delta.tables import *
Solution
OPTIMIZE is not available in OSS Delta Lake. If you want to compact files, you can follow the instructions in the Compact files section of the docs. If you want to use OPTIMIZE, you currently need the Databricks Runtime.
If you are running Delta locally, that means you are using OSS Delta Lake. The OPTIMIZE command is only available on Databricks Delta Lake. To compact files in OSS, you can follow the recipe here: https://docs.delta.io/latest/best-practices.html#compact-files
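A minimal sketch of that compact-files recipe, assuming the SparkSession configured in the question (with the Delta extensions and the delta-core package on the classpath); the target file count is a value you pick for your data volume, not something prescribed by the docs:

```python
# Compact a Delta table by rewriting it into fewer, larger files.
# Assumes `spark` is the SparkSession built in the question above.
path = "tmp/delta-table"
num_files = 4  # hypothetical target; tune to your data size

(spark.read
      .format("delta")
      .load(path)
      .repartition(num_files)          # coalesce the many small files
      .write
      .option("dataChange", "false")   # mark the commit as a rewrite, not new data
      .format("delta")
      .mode("overwrite")
      .save(path))
```

Setting dataChange to false tells downstream consumers (e.g. streaming readers) that the overwrite only rearranged existing data, so they can safely skip it. Note this compacts files but does not Z-order them; ZORDER BY remains Databricks-only at this Delta version.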