
How to run PySpark with the Snowflake JDBC connector driver in AWS Glue


I am trying to run the below code in AWS Glue:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from py4j.java_gateway import java_import

SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

## @params: [JOB_NAME, URL, ACCOUNT, WAREHOUSE, DB, SCHEMA, USERNAME, PASSWORD]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'URL', 'ACCOUNT', 'WAREHOUSE', 'DB', 'SCHEMA', 'USERNAME', 'PASSWORD'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
java_import(spark._jvm, "net.snowflake.spark.snowflake")

## uj = sc._jvm.net.snowflake.spark.snowflake
spark._jvm.net.snowflake.spark.snowflake.SnowflakeConnectorUtils.enablePushdownSession(spark._jvm.org.apache.spark.sql.SparkSession.builder().getOrCreate())

options = {
    "sfURL": args['URL'],
    "sfAccount": args['ACCOUNT'],
    "sfUser": args['USERNAME'],
    "sfPassword": args['PASSWORD'],
    "sfDatabase": args['DB'],
    "sfSchema": args['SCHEMA'],
    "sfWarehouse": args['WAREHOUSE'],
}

df = spark.read \
  .format("snowflake") \
  .options(**options) \
  .option("dbtable", "STORE") \
  .load()

df.show()

## Perform any kind of transformations on your data and save as a new DataFrame: "df1"
## df1 = [Insert any filter, transformation, etc.]

## Write the DataFrame contents back to Snowflake in a new table
## df1.write.format(SNOWFLAKE_SOURCE_NAME).options(**options).option("dbtable", "[new_table_name]").mode("overwrite").save()
job.commit()

and I get the following error:

Traceback (most recent call last):
  File "/tmp/spark_snowflake", line 35, in <module>
    .option("dbtable", "STORE") \
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 172, in load
    return self._df(self._jreader.load())
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o78.load.
: java.lang.ClassNotFoundException: Failed to find data source: snowflake. Please find packages at http://spark.apache.org/third-party-projects.html
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
  at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
  at py4j.Gateway.invoke(Gateway.java:282)
  at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
  at py4j.commands.CallCommand.execute(CallCommand.java:79)
  at py4j.GatewayConnection.run(GatewayConnection.java:238)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: snowflake.DefaultSource
  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
  ...

Solution

The error message says "java.lang.ClassNotFoundException: Failed to find data source: snowflake". Did you include the appropriate jars and pass them to Glue when you created the job? Here are some examples:

Running custom Java class in PySpark
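
Concretely, the missing piece is getting the snowflake-jdbc driver and the spark-snowflake connector jars onto the job's classpath. Below is a minimal sketch of doing that at job-creation time with boto3; the job name, IAM role, bucket, script path, and jar file names are placeholders (not from the original post), and you would substitute jar versions matching your Spark and Scala versions:

import boto3

glue = boto3.client("glue")

# --extra-jars is the Glue special parameter for putting extra Java
# libraries on the job classpath; it takes a comma-separated list of
# S3 paths. All names and paths below are placeholders.
glue.create_job(
    Name="snowflake-pyspark-job",          # hypothetical job name
    Role="MyGlueServiceRole",              # hypothetical IAM role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/spark_snowflake.py",  # placeholder
    },
    DefaultArguments={
        "--extra-jars": "s3://my-bucket/jars/snowflake-jdbc-<version>.jar,"
                        "s3://my-bucket/jars/spark-snowflake_<scala-version>-<version>.jar"
    },
)

The same jars can also be attached in the Glue console under the job's "Dependent jars path" field. Either way, once they are on the classpath, Spark can resolve the snowflake data source and the ClassNotFoundException goes away.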
