How to fix a Spark NLP classifier that always predicts the same class
I am training a classification model with Spark NLP. I followed this tutorial, and most of the code below comes from there.
Here is my training script:
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from sparknlp.annotator import *
from sparknlp.common import *
from sparknlp.base import *
import pandas as pd
import sparknlp
spark = sparknlp.start(gpu=True)
# has only 2 columns: category and description
DF = spark.read \
    .option("header", True) \
    .csv("data.csv")
(trainingData, testData) = DF.randomSplit([0.7, 0.3], seed=100)
document_assembler = DocumentAssembler() \
    .setInputCol("description") \
    .setOutputCol("document")
sent_embeddings = BertSentenceEmbeddings.pretrained("sent_biobert_clinical_base_cased", "en") \
    .setInputCols("document") \
    .setOutputCol("sentence_embeddings")
classifier_dl = ClassifierDLApproach() \
    .setInputCols("sentence_embeddings") \
    .setOutputCol("class") \
    .setLabelColumn("category") \
    .setMaxEpochs(5) \
    .setLr(0.5) \
    .setDropout(0.5) \
    .setEnableOutputLogs(True)
clf_pipeline = Pipeline(
    stages=[document_assembler, sent_embeddings, classifier_dl])
clf_pipelineModel = clf_pipeline.fit(trainingData)
from sklearn.metrics import classification_report, accuracy_score

df = clf_pipelineModel.transform(testData).select("category", "description", "class.result").toPandas()
df['result'] = df['result'].apply(lambda x: x[0])
print(classification_report(df.category, df.result))
print(accuracy_score(df.category, df.result))
版权声明:本文内容由互联网用户自发贡献,该文观点与技术仅代表作者本人。本站仅提供信息存储空间服务,不拥有所有权,不承担相关法律责任。如发现本站有涉嫌侵权/违法违规的内容, 请发送邮件至 dio@foxmail.com 举报,一经查实,本站将立刻删除。