RuntimeError: can't start new thread

How to fix "RuntimeError: can't start new thread"

My goal is to use the Scikit-Optimize library in Python to minimize a function value and thereby find optimized parameters for an XGBoost model. The process involves running the model 5,000 times with different random parameters.

However, the loop seems to stop at some point and gives me a RuntimeError: can't start new thread. I am using Ubuntu 20.04 with Python 3.8.5, and the Scikit-Optimize version is 0.8.1. I ran the same code on Windows 10 and did not seem to hit this RuntimeError, but the code ran much more slowly there.

I suspect I may need a thread pool to solve this problem, but after searching the web I have not found a solution that shows how to implement one here.
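As a diagnostic first step (a standard-library sketch, nothing specific to Scikit-Optimize), the OS limits that typically cause "can't start new thread" on Linux can be inspected from inside the process:

```python
import resource
import threading

# On Linux, "can't start new thread" usually means a limit was hit:
# RLIMIT_NPROC caps threads/processes per user (`ulimit -u`), and each
# new thread also reserves RLIMIT_STACK bytes of address space.
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)
stack_soft, stack_hard = resource.getrlimit(resource.RLIMIT_STACK)

print("threads alive in this process:", threading.active_count())
print("per-user thread/process limit (soft):", nproc_soft)
print("thread stack size (soft, bytes):", stack_soft)
```

If the soft `RLIMIT_NPROC` value is low relative to 5,000 iterations spawning threads, that would be consistent with the failure appearing only partway through the loop.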

Here is a simplified version of the code:

# This function will be passed to Scikit-Optimize to find the optimized parameters (Params)
def find_best_xgboost_para(params):

    # Unpack the parameters that I want to optimize
    learning_rate, gamma, max_depth, min_child_weight, reg_alpha, reg_lambda, \
    subsample, max_bin, num_parallel_tree, colsamp_lev, colsamp_tree, StopSteps = \
        float(params[0]), float(params[1]), int(params[2]), int(params[3]), \
        int(params[4]), int(params[5]), float(params[6]), int(params[7]), \
        int(params[8]), float(params[9]), float(params[10]), int(params[11])

    xgbc = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=colsamp_lev,
                         colsample_bytree=colsamp_tree, gamma=gamma, learning_rate=learning_rate,
                         max_delta_step=0, max_depth=max_depth, min_child_weight=min_child_weight,
                         missing=None, n_estimators=nTrees, objective='binary:logistic',
                         random_state=101, reg_alpha=reg_alpha, reg_lambda=reg_lambda,
                         scale_pos_weight=1, seed=101, subsample=subsample,
                         importance_type='gain', gpu_id=GPUID, max_bin=max_bin,
                         tree_method='gpu_hist', num_parallel_tree=num_parallel_tree,
                         predictor='gpu_predictor', verbosity=0, refresh_leaf=0,
                         grow_policy='depthwise', process_type=TreeUpdateStatus,
                         single_precision_histogram=SinglePrecision)

    tscv = TimeSeriesSplit(CV_nSplit)

    error_data = xgboost.cv(xgbc.get_xgb_params(), CVTrain, num_boost_round=CVBoostRound,
                            nfold=None, stratified=False, folds=tscv, metrics=(),
                            obj=None, feval=f1_eval, maximize=False,
                            early_stopping_rounds=StopSteps, fpreproc=None, as_pandas=True,
                            verbose_eval=True, show_stdv=True, shuffle=shuffle_trig)

    eval_set = [(X_train, y_train), (X_test, y_test)]
    xgbc.fit(X_train, y_train, eval_metric=f1_eval, eval_set=eval_set, verbose=True)

    xgbc_predictions = xgbc.predict(X_test)

    error = 1 - metrics.f1_score(y_test, xgbc_predictions, average='macro')
    del xgbc

    return error

# Define the range of values that Scikit-Optimize can choose from to find the optimized parameters
lr_low, lr_high = float(XgParamDict['lr_low']), float(XgParamDict['lr_high'])
gama_low, gama_high = float(XgParamDict['gama_low']), float(XgParamDict['gama_high'])
depth_low, depth_high = int(XgParamDict['depth_low']), int(XgParamDict['depth_high'])
child_weight_low, child_weight_high = int(XgParamDict['child_weight_low']), int(XgParamDict['child_weight_high'])
alpha_low, alpha_high = int(XgParamDict['alpha_low']), int(XgParamDict['alpha_high'])
lambda_low, lambda_high = int(XgParamDict['lambda_low']), int(XgParamDict['lambda_high'])
subsamp_low, subsamp_high = float(XgParamDict['subsamp_low']), float(XgParamDict['subsamp_high'])
max_bin_low, max_bin_high = int(XgParamDict['max_bin_low']), int(XgParamDict['max_bin_high'])
num_parallel_tree_low, num_parallel_tree_high = int(XgParamDict['num_parallel_tree_low']), int(XgParamDict['num_parallel_tree_high'])
colsamp_lev_low, colsamp_lev_high = float(XgParamDict['colsamp_lev_low']), float(XgParamDict['colsamp_lev_high'])
colsamp_tree_low, colsamp_tree_high = float(XgParamDict['colsamp_tree_low']), float(XgParamDict['colsamp_tree_high'])
StopSteps_low, StopSteps_high = float(XgParamDict['StopSteps_low']), float(XgParamDict['StopSteps_high'])

# Pass the target function (find_best_xgboost_para) and the parameter ranges to Scikit-Optimize;
# 'res' will be an array of values that will need to be passed to another function
res = gbrt_minimize(find_best_xgboost_para,
                    [(lr_low, lr_high), (gama_low, gama_high), (depth_low, depth_high),
                     (child_weight_low, child_weight_high), (alpha_low, alpha_high),
                     (lambda_low, lambda_high), (subsamp_low, subsamp_high),
                     (max_bin_low, max_bin_high), (num_parallel_tree_low, num_parallel_tree_high),
                     (colsamp_lev_low, colsamp_lev_high), (colsamp_tree_low, colsamp_tree_high),
                     (StopSteps_low, StopSteps_high)],
                    n_calls=5000, n_random_starts=1500, verbose=True, n_jobs=-1)
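To illustrate the thread-pool idea I mentioned above, here is a minimal standard-library sketch; `objective` is a hypothetical stand-in for the expensive `find_best_xgboost_para`, not my real code:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the expensive objective function;
# the real code would evaluate find_best_xgboost_para instead.
def objective(x):
    return (x - 3) ** 2

# A fixed-size pool reuses the same worker threads for every evaluation,
# instead of starting new threads for each batch of calls.
with ThreadPoolExecutor(max_workers=4) as pool:
    errors = list(pool.map(objective, range(10)))

print(errors)
```

What I have not figured out is how to make Scikit-Optimize route its internal parallelism through a single reusable pool like this.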

Here is the error message:

Traceback (most recent call last):
  File "/home/FactorOpt.py", line 91, in <module>
    Opt(**FactorOptDict)
  File "/home/anaconda3/lib/python3.8/modelling/FactorOpt.py", line 456, in xgboost_opt
    res = gbrt_minimize(find_best_xgboost_para,
  File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/gbrt.py", line 179, in gbrt_minimize
    return base_minimize(func, dimensions, base_estimator,
  File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/base.py", line 302, in base_minimize
    result = optimizer.tell(next_x, next_y)
  File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/optimizer.py", line 493, in tell
    return self._tell(x, y, fit=fit)
  File "/home/anaconda3/lib/python3.8/site-packages/skopt/optimizer/optimizer.py", line 536, in _tell
    est.fit(self.space.transform(self.Xi), self.yi)
  File "/home/anaconda3/lib/python3.8/site-packages/skopt/learning/gbrt.py", line 85, in fit
    self.regressors_ = Parallel(n_jobs=self.n_jobs, backend='threading')(
  File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 1048, in __call__
    if self.dispatch_one_batch(iterator):
  File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 866, in dispatch_one_batch
    self._dispatch(tasks)
  File "/home/anaconda3/lib/python3.8/site-packages/joblib/parallel.py", line 784, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/home/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 252, in apply_async
    return self._get_pool().apply_async(
  File "/home/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 407, in _get_pool
    self._pool = ThreadPool(self._n_jobs)
  File "/home/anaconda3/lib/python3.8/multiprocessing/pool.py", line 925, in __init__
    Pool.__init__(self, processes, initializer, initargs)
  File "/home/anaconda3/lib/python3.8/multiprocessing/pool.py", line 232, in __init__
    self._worker_handler.start()
  File "/home/anaconda3/lib/python3.8/threading.py", line 852, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
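Reading the traceback, the final frames show joblib's threading backend constructing a `multiprocessing.pool.ThreadPool` inside `optimizer.tell()`, so fresh OS threads are started each time the surrogate model is refit. A tiny standard-library sketch of that same `ThreadPool` class, created once and reused (the repeated-construction pattern is what appears to run out of threads):

```python
from multiprocessing.pool import ThreadPool

# Same ThreadPool class that appears in the last frames of the traceback.
# Constructing it starts worker threads immediately, so building a new
# pool on every optimizer.tell() call keeps consuming the thread budget;
# creating one pool and reusing it does not.
pool = ThreadPool(processes=4)
squares = pool.map(lambda x: x * x, range(8))
pool.close()
pool.join()
print(squares)
```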
