How do I fix the error I get when inserting tweet data into an Apache Cassandra database?

I am receiving tweet objects with the tweepy library, and with Apache Kafka and Apache Spark I am streaming the JSON tweet objects, converting them into a structured format, and then inserting them into a Cassandra DB.

My data pipeline is as follows:

[image: data pipeline diagram]

I have two .py files:

  1. kafka_tweet_producer.py

Receives tweet objects filtered by the desired hashtag and streams them to Kafka.

  2. twitter_structured_stream_spark_kafka_cassandra.py

Creates a Spark session that reads from Kafka, converts the JSON into a structured format, and finally inserts that data into the Cassandra DB. (Rough sketches of both files appear below.)
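
For reference, a minimal sketch of what the first file might look like, assuming tweepy 3.x and kafka-python; the credentials and filter terms are placeholders, and only the topic name twitter3 is taken from the error log below:

    # kafka_tweet_producer.py -- minimal sketch, not the author's exact code.
    # Assumes tweepy 3.x (StreamListener) and kafka-python are installed.
    import tweepy
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    class TweetListener(tweepy.StreamListener):
        def on_data(self, data):
            # Forward the raw JSON tweet to the Kafka topic the Spark job reads.
            producer.send("twitter3", data.encode("utf-8"))
            return True

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholders
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")         # placeholders
    stream = tweepy.Stream(auth, TweetListener())
    stream.filter(track=["<name to filter on>"])

And a sketch of the second file, reconstructed from the logical plan, topic, and table names that appear in the error log below (twitter3, tweet_db.tweet2); the schema details and connection settings are assumptions:

    # twitter_structured_stream_spark_kafka_cassandra.py -- minimal sketch.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col, to_timestamp
    from pyspark.sql.types import StructType, StructField, StringType

    spark = SparkSession.builder \
        .appName("twitter-structured-stream") \
        .config("spark.cassandra.connection.host", "localhost") \
        .getOrCreate()

    # Fields inferred from the logical plan in the log: created_at, id_str,
    # text, plus a nested user object with id, name and location.
    schema = StructType([
        StructField("created_at", StringType()),
        StructField("id_str", StringType()),
        StructField("text", StringType()),
        StructField("user", StructType([
            StructField("id", StringType()),
            StructField("name", StringType()),
            StructField("location", StringType()),
        ])),
    ])

    raw_df = spark.readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", "localhost:9092") \
        .option("subscribe", "twitter3") \
        .load()

    tweet_df = raw_df \
        .select(from_json(col("value").cast("string"), schema).alias("value")) \
        .select(
            to_timestamp(col("value.created_at"), "yyyy-MM-dd HH:mm:ss").alias("created_at"),
            col("value.id_str").alias("id_str"),
            col("value.text").alias("text"),
            col("value.user.id").alias("id"),
            col("value.user.name").alias("name"),
            col("value.user.location").alias("location"))

    def write_to_cassandra(target_df, batch_id):
        # Each micro-batch is written to Cassandra with the DataFrame API.
        target_df.write \
            .format("org.apache.spark.sql.cassandra") \
            .options(table="tweet2", keyspace="tweet_db") \
            .mode("append") \
            .save()

    output_query = tweet_df.writeStream.foreachBatch(write_to_cassandra).start()
    output_query.awaitTermination()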

I filtered the tweets by my own name and then posted a few tweets from my computer. I saw the results below:

[image: results screenshot]

Then I sent one more tweet from my phone, and the Spark session terminated with the error message below:

  21/03/02 22:55:00 ERROR QueryExecutor: Failed to execute: com.datastax.spark.connector.writer.RichBoundStatementWrapper@40407423
com.datastax.oss.driver.api.core.servererrors.InvalidQueryException: Invalid unset value for column id
21/03/02 22:55:00 ERROR Utils: Aborting task
java.io.IOException: Failed to write statements to tweet_db.tweet2. The
latest exception was
  Invalid unset value for column id

Please check the executor logs for more exceptions and information

        at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
        at scala.Option.map(Option.scala:230)
        at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
        at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.commit(CasssandraDriverDataWriterFactory.scala:46)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:450)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
21/03/02 22:55:00 ERROR DataWritingSparkTask: Aborting commit for partition 0 (task 5,attempt 0,stage 5.0)
21/03/02 22:55:00 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
java.io.IOException: Failed to write statements to tweet_db.tweet2. The
latest exception was
  Invalid unset value for column id

Please check the executor logs for more exceptions and information

        at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
        at scala.Option.map(Option.scala:230)
        at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
        at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.commit(CasssandraDriverDataWriterFactory.scala:46)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:450)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
        Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
latest exception was
  Invalid unset value for column id

Please check the executor logs for more exceptions and information

                at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                at scala.Option.map(Option.scala:230)
                at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.abort(CasssandraDriverDataWriterFactory.scala:51)
                at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$12(WriteToDataSourceV2Exec.scala:473)
                at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1422)
                ... 10 more
        Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
latest exception was
  Invalid unset value for column id

Please check the executor logs for more exceptions and information

                at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                at scala.Option.map(Option.scala:230)
                at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.close(CasssandraDriverDataWriterFactory.scala:56)
                at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$15(WriteToDataSourceV2Exec.scala:477)
                at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1433)
                ... 10 more
21/03/02 22:55:00 ERROR TaskSetManager: Task 0 in stage 5.0 failed 1 times; aborting job
21/03/02 22:55:00 ERROR AppendDataExec: Data source write support CassandraBulkWrite(org.apache.spark.sql.SparkSession@4d981ecd,com.datastax.spark.connector.cql.CassandraConnector@5de27a36,TableDef(tweet_db,tweet2,ArrayBuffer(ColumnDef(id,PartitionKeyColumn,VarCharType)),ArrayBuffer(),Stream(ColumnDef(created_at,RegularColumn,TimestampType),ColumnDef(id_str,VarCharType),ColumnDef(location,ColumnDef(name,ColumnDef(text,Stream(),false,Map()),WriteConf(BytesInBatch(1024),1000,Partition,LOCAL_QUORUM,true,5,None,TTLOption(DefaultValue),TimestampOption(DefaultValue),None),StructType(StructField(created_at,TimestampType,true),StructField(id_str,IntegerType,StructField(text,StringType,StructField(id,StructField(name,StructField(location,true)),org.apache.spark.SparkConf@3b0b6411) is aborting.
21/03/02 22:55:00 ERROR AppendDataExec: Data source write support CassandraBulkWrite(org.apache.spark.sql.SparkSession@4d981ecd,org.apache.spark.SparkConf@3b0b6411) aborted.
21/03/02 22:55:00 ERROR MicroBatchExecution: Query [id = 774bee8a-e324-4dc4-acc7-d010fd35e74c,runId = 4c4b6718-0235-49f9-b6ee-afea8846e99b] terminated with error
py4j.Py4JException: An exception was raised by the Python Proxy. Return Message: Traceback (most recent call last):
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py",line 2442,in _call_proxy
    return_value = getattr(self.pool[obj_id],method)(*params)
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",line 207,in call
    raise e
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",line 204,in call
    self.func(DataFrame(jdf,self.sql_ctx),batch_id)
  File "C:/Users/PC/Documents/Jupyter/Big_Data/Application_01_Kafka-twitter-spark-streaming/twitter_structured_stream_spark_kafka_cassandra.py",line 6,in write_to_cassandra
    target_df.write \
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\readwriter.py",line 825,in save
    self._jwrite.save()
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py",line 1304,in __call__
    return_value = get_return_value(
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",line 128,in deco
    return f(*a,**kw)
  File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py",line 326,in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o107.save.
: org.apache.spark.SparkException: Writing job aborted.
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:413)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:361)
        at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:253)
        at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:259)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.doExecute(V2CommandExec.scala:54)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121)
        at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:963)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:963)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times,most recent failure: Lost task 0.0 in stage 5.0 (TID 5,host.docker.internal,executor driver): java.io.IOException: Failed to write statements to tweet_db.tweet2. The
latest exception was
  Invalid unset value for column id

   Please check the executor logs for more exceptions and information
    
            at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
            at scala.Option.map(Option.scala:230)
            at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
            at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.commit(CasssandraDriverDataWriterFactory.scala:46)
            at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:450)
            at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
            at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
            at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
            at org.apache.spark.scheduler.Task.run(Task.scala:127)
            at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
            at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
            Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
    latest exception was
      Invalid unset value for column id
    
    Please check the executor logs for more exceptions and information
    
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                    at scala.Option.map(Option.scala:230)
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                    at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.abort(CasssandraDriverDataWriterFactory.scala:51)
                    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$12(WriteToDataSourceV2Exec.scala:473)
                    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1422)
                    ... 10 more
            Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
    latest exception was
      Invalid unset value for column id
    
    Please check the executor logs for more exceptions and information
    
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                    at scala.Option.map(Option.scala:230)
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                    at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.close(CasssandraDriverDataWriterFactory.scala:56)
                    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$15(WriteToDataSourceV2Exec.scala:477)
                    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1433)
                    ... 10 more
    
    Driver stacktrace:
            at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
            at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
            at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
            at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
            at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
            at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
            at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
            at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
            at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
            at scala.Option.foreach(Option.scala:407)
            at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
            at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
            at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
            at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:382)
            ... 32 more
    Caused by: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
    latest exception was
      Invalid unset value for column id
    
    Please check the executor logs for more exceptions and information
    
            at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
            at scala.Option.map(Option.scala:230)
            at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
            at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.commit(CasssandraDriverDataWriterFactory.scala:46)
            at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:450)
            at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
            at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
            at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
            at org.apache.spark.scheduler.Task.run(Task.scala:127)
            at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
            at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            ... 1 more
            Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
    latest exception was
      Invalid unset value for column id
    
    Please check the executor logs for more exceptions and information
    
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                    at scala.Option.map(Option.scala:230)
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                    at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.abort(CasssandraDriverDataWriterFactory.scala:51)
                    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$12(WriteToDataSourceV2Exec.scala:473)
                    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1422)
                    ... 10 more
            Suppressed: java.io.IOException: Failed to write statements to tweet_db.tweet2. The
    latest exception was
      Invalid unset value for column id
    
    Please check the executor logs for more exceptions and information
    
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.$anonfun$close$2(TableWriter.scala:282)
                    at scala.Option.map(Option.scala:230)
                    at com.datastax.spark.connector.writer.AsyncStatementWriter.close(TableWriter.scala:277)
                    at com.datastax.spark.connector.datasource.CassandraDriverDataWriter.close(CasssandraDriverDataWriterFactory.scala:56)
                    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$15(WriteToDataSourceV2Exec.scala:477)
                    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1433)
                    ... 10 more
    
    
            at py4j.Protocol.getReturnValue(Protocol.java:476)
            at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:108)
            at com.sun.proxy.$Proxy16.call(Unknown Source)
            at org.apache.spark.sql.execution.streaming.sources.PythonForeachBatchHelper$.$anonfun$callForeachBatch$1(ForeachBatchSink.scala:56)
            at org.apache.spark.sql.execution.streaming.sources.PythonForeachBatchHelper$.$anonfun$callForeachBatch$1$adapted(ForeachBatchSink.scala:56)
            at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.addBatch(
            at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
            at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:570)
org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
            at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
            at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
            at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
    Traceback (most recent call last):
      File "C:/Users/PC/Documents/Jupyter/Big_Data/Application_01_Kafka-twitter-spark-streaming/twitter_structured_stream_spark_kafka_cassandra.py",line 72,in <module>
        output_query.awaitTermination()
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\streaming.py",line 103,in awaitTermination
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py",in __call__
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",line 134,in deco
      File "<string>",line 3,in raise_from
    pyspark.sql.utils.StreamingQueryException: An exception was raised by the Python Proxy. Return Message: Traceback (most recent call last):
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py",in _call_proxy
        return_value = getattr(self.pool[obj_id],method)(*params)
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",in call
        raise e
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",in call
        self.func(DataFrame(jdf,batch_id)
      File "C:/Users/PC/Documents/Jupyter/Big_Data/Application_01_Kafka-twitter-spark-streaming/twitter_structured_stream_spark_kafka_cassandra.py",in write_to_cassandra
        target_df.write \
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\readwriter.py",in save
        self._jwrite.save()
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py",in __call__
        return_value = get_return_value(
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py",in deco
        return f(*a,**kw)
      File "C:\Spark\spark-3.0.1-bin-hadoop2.7\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py",in get_return_value
        raise Py4JJavaError(
    py4j.protocol.Py4JJavaError: An error occurred while calling o107.save.
    : org.apache.spark.SparkException: Writing job aborted.  

=== Streaming Query ===
Identifier: [id = 774bee8a-e324-4dc4-acc7-d010fd35e74c,runId = 4c4b6718-0235-49f9-b6ee-afea8846e99b]
Current Committed Offsets: {KafkaV2[Subscribe[twitter3]]: {"twitter3":{"2":1,"1":1,"0":0}}}
Current Available Offsets: {KafkaV2[Subscribe[twitter3]]: {"twitter3":{"2":1,"0":1}}}

Current State: ACTIVE
Thread State: RUNNABLE

Logical Plan:
Project [to_timestamp('created_at,Some(yyyy-MM-dd HH:mm:ss)) AS created_at#35,id_str#24,text#25,id#26,name#27,location#28]
+- Project [value#21.created_at AS created_at#23,value#21.id_str AS id_str#24,value#21.text AS text#25,value#21.user.id AS id#26,value#21.user.name AS name#27,value#21.user.location AS location#28]
   +- Project [from_json(StructField(created_at,StructField(user,StructType(StructField(id,cast(value#8 as string),Some(Asia/Istanbul)) AS value#21]
      +- StreamingDataSourceV2Relation [key#7,value#8,topic#9,partition#10,offset#11L,timestamp#12,timestampType#13],org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaScan@6859b5e7,KafkaV2[Subscribe[twitter3]]

You can find all of my code in this repository.

Solution

Not sure exactly what's happening here, but I'd bet it has something to do with null values. Typically, the Cassandra drivers make use of the "unset" functionality to ensure that nulls in prepared statements don't make it into the database. After all, in Cassandra a written null == a tombstone.
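
To make the unset-versus-null distinction concrete, here is a small illustration with the DataStax Python driver (an assumption for illustration only; the question uses the Spark connector, not this driver directly, and the table/column names just mirror the log above):

    # Illustration with cassandra-driver; assumes a local cluster with the
    # tweet_db keyspace. Not the question's actual code path.
    from cassandra.cluster import Cluster
    from cassandra.query import UNSET_VALUE

    session = Cluster(["127.0.0.1"]).connect("tweet_db")
    ps = session.prepare(
        "INSERT INTO tweet2 (id, name, location) VALUES (?, ?, ?)")

    # Binding None to "location" would write a tombstone; binding UNSET_VALUE
    # leaves the column untouched. Neither is legal for "id": a primary key
    # column can be neither null nor unset, which is exactly the
    # "Invalid unset value for column id" error above.
    session.execute(ps, ("123", "someone", UNSET_VALUE))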

When I receive the tweet sent from my phone, the id column is null, and it is the primary key column.

Yes, that'll do it. Primary key components cannot be null in Cassandra. Get that value sent correctly, and you should be fine.
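
A minimal sketch of that fix inside the question's foreachBatch function: drop (or repair) any rows whose id arrives null before the write. The column, keyspace, and table names come from the log above; everything else is an assumption:

    from pyspark.sql.functions import col

    def write_to_cassandra(target_df, batch_id):
        # id is the partition key of tweet_db.tweet2, so rows where it is null
        # can never be inserted; filter them out before writing.
        clean_df = target_df.filter(col("id").isNotNull())
        clean_df.write \
            .format("org.apache.spark.sql.cassandra") \
            .options(table="tweet2", keyspace="tweet_db") \
            .mode("append") \
            .save()

If regular (non-key) columns can also arrive null, the connector's spark.cassandra.output.ignoreNulls write option (assuming your connector version supports it) writes them as unset values rather than as tombstones.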

Note: when Cassandra is queried with tools like cqlsh and columns without values are displayed, a null will appear. That does not mean a null value is actually stored there; displaying null is simply Cassandra's way of saying "no data."
