How to use StandardScaler on a subset of columns in a pyspark ML Pipeline?

In my data frame, some columns hold continuous values while others contain only 0/1 values. I want to apply StandardScaler to the continuous columns before running logistic regression in a Pipeline. How can I implement this?

I tried:

from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml import Pipeline, Transformer
from pyspark.sql.functions import udf, col
from pyspark.sql.types import FloatType, ArrayType
from pyspark.ml.util import DefaultParamsWritable, DefaultParamsReadable
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param, Params, TypeConverters

class StandardScalerSubset(Transformer, DefaultParamsReadable, DefaultParamsWritable):
    """
    A custom Transformer which uses StandardScaler on a subset of features.
    """
    def __init__(self, to_scale_cols, remaining_cols):
        super(StandardScalerSubset, self).__init__()
        self.to_scale_cols = to_scale_cols  # continuous columns to be scaled
        self.remaining_cols = remaining_cols  # other columns

    def _transform(self, data):
        # assemble the continuous columns into one vector column
        va = VectorAssembler().setInputCols(self.to_scale_cols).setOutputCol("to_scale_vector")
        data_va = va.transform(data)

        scaler = StandardScaler(inputCol="to_scale_vector", outputCol="scaled_vector", withMean=True, withStd=True)
        scaler_model = scaler.fit(data_va)
        data_scaled = scaler_model.transform(data_va)

        # unpack the scaled vector back into one float column per feature
        vector2list = udf(lambda x: x.toArray().tolist(), ArrayType(FloatType()))
        # return all columns
        data_res = data_scaled.withColumn("scaled_list", vector2list("scaled_vector")) \
            .select(self.remaining_cols
                    + [col("scaled_list").getItem(i).alias(c) for (i, c) in enumerate(self.to_scale_cols)])
        return data_res

For the following input:

# +---+---+---+---+
# |  a|  b|  c|  d|
# +---+---+---+---+
# |  1|  5| 10|  0|
# |  0| 10| 20|  1|
# |  1| 15| 25|  0|
# |  0| 30| 40|  1|
# +---+---+---+---+
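
For reference, a minimal sketch to build this input DataFrame (assuming an active SparkSession bound to the name spark):

df = spark.createDataFrame(
    [(1, 5, 10, 0),
     (0, 10, 20, 1),
     (1, 15, 25, 0),
     (0, 30, 40, 1)],
    ["a", "b", "c", "d"])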

the output would be:

# +---+---+--------+-----+
# |  a|  d|       b|    c|
# +---+---+--------+-----+
# |  1|  0| -0.9258| -1.1|
# |  0|  1| -0.4629| -0.3|
# |  1|  0|     0.0|  0.1|
# |  0|  1|  1.3887|  1.3|
# +---+---+--------+-----+

It can be used like this:

scalerFeatures = ['xxx']
featureArr = ['xxx']
remainingFeatures = ['xxx']
sss = StandardScalerSubset(to_scale_cols=scalerFeatures, remaining_cols=remainingFeatures)
vectorAssembler = VectorAssembler().setInputCols(featureArr).setOutputCol("features")
lrModel = LogisticRegression(featuresCol="features", regParam=0.1, maxIter=100, family="binomial")
pipeline = Pipeline().setStages([sss, vectorAssembler, lrModel])
pipeline.fit(trainData).write().overwrite().save(modelSavePath)

When I load the model with PipelineModel.load(modelSavePath), I get an error. I think I should implement both fit and transform, but I don't know how to do that. Can anyone help me? Thanks.
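
For reference, implementing both fit and transform means splitting the logic into an Estimator (whose _fit computes the scaling statistics once) and a Model (which only applies them). A minimal sketch of that structure follows; the class names SubsetScaler and SubsetScalerModel are illustrative. Note that DefaultParamsWritable only persists Param values, so the nested StandardScalerModel would still need a custom MLWriter/MLReader to survive save/load, which is why the solution below sticks to built-in stages instead:

from pyspark import keyword_only
from pyspark.ml import Estimator, Model
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.param.shared import Param, Params, TypeConverters
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, FloatType

class SubsetScaler(Estimator, DefaultParamsReadable, DefaultParamsWritable):
    """Estimator that fits a StandardScaler on a subset of columns."""
    toScaleCols = Param(Params._dummy(), "toScaleCols", "columns to scale",
                        typeConverter=TypeConverters.toListString)
    remainingCols = Param(Params._dummy(), "remainingCols", "columns passed through",
                          typeConverter=TypeConverters.toListString)

    @keyword_only
    def __init__(self, toScaleCols=None, remainingCols=None):
        super(SubsetScaler, self).__init__()
        kwargs = self._input_kwargs
        self._set(**kwargs)

    def _fit(self, dataset):
        cols = self.getOrDefault(self.toScaleCols)
        va = VectorAssembler(inputCols=cols, outputCol="to_scale_vector")
        scaler = StandardScaler(inputCol="to_scale_vector", outputCol="scaled_vector",
                                withMean=True, withStd=True)
        scaler_model = scaler.fit(va.transform(dataset))  # statistics computed once, here
        return SubsetScalerModel(va, scaler_model, cols, self.getOrDefault(self.remainingCols))

class SubsetScalerModel(Model):
    """Model returned by SubsetScaler.fit(); applies the already-fitted scaler."""
    def __init__(self, assembler, scaler_model, to_scale_cols, remaining_cols):
        super(SubsetScalerModel, self).__init__()
        self.assembler = assembler
        self.scaler_model = scaler_model
        self.to_scale_cols = to_scale_cols
        self.remaining_cols = remaining_cols

    def _transform(self, dataset):
        # unpack the scaled vector back into one float column per feature
        vector2list = udf(lambda v: v.toArray().tolist(), ArrayType(FloatType()))
        scaled = self.scaler_model.transform(self.assembler.transform(dataset))
        return (scaled.withColumn("scaled_list", vector2list("scaled_vector"))
                      .select(self.remaining_cols
                              + [col("scaled_list").getItem(i).alias(c)
                                 for i, c in enumerate(self.to_scale_cols)]))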

Solution

This is too long for a comment, but you can try the following:

from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline

scalerFeatures = ['b', 'c']
remainingFeatures = ['a', 'd']
featureArr = remainingFeatures + [('scaled_' + f) for f in scalerFeatures]

# one single-column assembler and scaler per continuous feature
va1 = [VectorAssembler(inputCols=[f], outputCol=('vec_' + f)) for f in scalerFeatures]
ss = [StandardScaler(inputCol='vec_' + f, outputCol='scaled_' + f, withMean=True, withStd=True) for f in scalerFeatures]

va2 = VectorAssembler(inputCols=featureArr, outputCol="features")
lr = LogisticRegression()

stages = va1 + ss + [va2]
# I don't have a label column, but if you do, you can put the lr stage at the end:
# stages = va1 + ss + [va2, lr]

p = Pipeline(stages=stages)
p.fit(df).transform(df).show(truncate=False)
+---+---+---+---+------+------+---------------------+----------------------+--------------------------------------------------+
|a  |b  |c  |d  |vec_b |vec_c |scaled_b             |scaled_c              |features                                          |
+---+---+---+---+------+------+---------------------+----------------------+--------------------------------------------------+
|1  |5  |10 |0  |[5.0] |[10.0]|[-0.9258200997725514]|[-1.0999999999999999] |[1.0,0.0,-0.9258200997725514,-1.0999999999999999] |
|0  |10 |20 |1  |[10.0]|[20.0]|[-0.4629100498862757]|[-0.29999999999999993]|[0.0,1.0,-0.4629100498862757,-0.29999999999999993]|
|1  |15 |25 |0  |[15.0]|[25.0]|[0.0]                |[0.09999999999999998] |[1.0,0.0,0.0,0.09999999999999998]                 |
|0  |30 |40 |1  |[30.0]|[40.0]|[1.3887301496588271] |[1.2999999999999998]  |[0.0,1.0,1.3887301496588271,1.2999999999999998]   |
+---+---+---+---+------+------+---------------------+----------------------+--------------------------------------------------+
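
Because every stage above is a built-in Spark class, the fitted pipeline persists and reloads without any custom serialization code, which avoids the loading error from the question. A minimal sketch (the save path is a placeholder):

from pyspark.ml import PipelineModel

fitted = p.fit(df)
fitted.write().overwrite().save("/tmp/scaled_lr_pipeline")  # placeholder path
reloaded = PipelineModel.load("/tmp/scaled_lr_pipeline")
reloaded.transform(df).show(truncate=False)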
