SCS and MOSEK solvers keep running indefinitely

My application has been using the ECOS solver for a long time. Suddenly we started getting infeasible solutions, which end in a solver error. Looking through some stack traces and suggestions online, I saw recommendations to use the MOSEK and SCS solvers instead.

I tried replacing ECOS with the SCS and MOSEK solvers, but now my runs never finish. Normally a run completes within 2 hours; after the switch it ran for about 8 hours without ending. Please advise.

These are the parameters:

'solver': {'name': 'MOSEK', 'backup_name': 'SCS', 'verbose': True, 'max_iters': 3505}

Please help.

Error log:

Job aborted due to stage failure: Task 1934 in stage 6.0 failed 4 times, most recent failure: Lost task 1934.3 in stage 6.0 (TID 5028, ip-10-219-208-218.ec2.internal, executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/model.py", line 262, in fit
    raise SolverError
cvxpy.error.SolverError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/expressions/constants/constant.py"
    ev = SA_eigsh(sigma)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/expressions/constants/constant.py"
    return eigsh(A, k=1, sigma=sigma, return_eigenvectors=False)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py"
    params.iterate()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py"
    self._raise_no_convergence()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py"
    raise ArpackNoConvergence(msg % (num_iter, k_ok, self.k), ev, vec)
scipy.sparse.linalg.eigen.arpack.arpack.ArpackNoConvergence: ARPACK error -1: No convergence (361 iterations, 0/1 eigenvectors converged)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/pyspark.zip/pyspark/worker.py", line 377, in main
    process()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/pyspark.zip/pyspark/worker.py", line 372, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/pyspark.zip/pyspark/serializers.py", line 400, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/pyspark.zip/pyspark/util.py", line 113, in wrapper
    return f(*args, **kwargs)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000001/py_dependencies.zip/pyspark_scripts/spark_tf_pipeline.py", line 49
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/tf_get_from_smu_records.py"
    datapoints, current_date_str, params)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/tf_get_from_smu_records.py"
    model_output, _ = fit_model(ts_wrapper, params)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/fit_model.py", line 13, in fit_model
    machine_model.fit()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/machine_model.py", line 62, in fit
    self._fit()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/machine_model.py", line 120, in _fit
    self.model.fit()
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/model.py", line 267, in fit
    self._fit(self.solver_params['backup_name'])
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/py_dependencies.zip/cat/tf/tf_model/model.py", line 245, in _fit
    feastol_inacc=tols['feastol_inacc'])
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/problems/problem.py", in solve
    return solve_func(self, *args, **kwargs)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/problems/problem.py", in _solve
    self, data, warm_start, verbose, kwargs)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/reductions/solvers/solving_chain.py"
    solver_opts, problem._solver_cache)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/reductions/solvers/conic_solvers/conic_solver.py"
    if self.remove_redundant_rows(data) == s.INFEASIBLE:
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/reductions/solvers/conic_solvers/conic_solver.py"
    eig = extremal_eig_near_ref(gram, ref=TOL)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/expressions/constants/constant.py"
    ev = SA_eigsh(sigma)
  File "envpath/appcache/application_1618545751422_0044/container_1618545751422_0044_02_000002/miniconda/envs/project/lib/python3.6/site-packages/cvxpy/expressions/constants/constant.py"
    return eigsh(A, k=1, sigma=sigma, return_eigenvectors=False)
scipy.sparse.linalg.eigen.arpack.arpack.ArpackNoConvergence: ARPACK error -1: No convergence (361 iterations, 0/1 eigenvectors converged)

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:592)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:575)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:227)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$3.apply(ShuffleExchangeExec.scala:283)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$3.apply(ShuffleExchangeExec.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
