
Getting workers = -1 to work in SciPy's differential_evolution

I have edited my original post to give a simple example. I am using SciPy's differential evolution (DE) to optimize some parameters, and I want to use all of the PC's processors for this task, so I tried the option workers=-1.

The required condition is that the function called by DE must be picklable.
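
(For illustration only, a minimal check of what "picklable" means here; the function name ros_like is made up for this example:)

import pickle

def ros_like(X):                          # defined at module level: picklable by reference
    return (1. - X[0])**2 + 100.*(X[1] - X[0]**2)**2

pickle.loads(pickle.dumps(ros_like))      # round-trips fine
# pickle.dumps(lambda X: X[0])            # a lambda, by contrast, raises PicklingError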

If I run the example from https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution, the optimization works fine.

from scipy.optimize import rosen,differential_evolution
import pickle
import dill

bounds = [(0,2),(0,2)]
result = differential_evolution(rosen,bounds,updating='deferred',workers=-1)
result.x,result.fun
(array([1.,1.]),0.0)

However, if I define a custom function "Ros_custom", the optimization crashes (no result is given):

def Ros_custom(X):
    x = X[0]
    y = X[1]
    a = 1. - x
    b = y - x*x
    return a*a + b*b*100

result = differential_evolution(Ros_custom, bounds, updating='deferred', workers=-1)

If I try to pickle.dumps and pickle.loads "Ros_custom", I get the same behaviour (the optimization crashes, no answer).
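
(Presumably the round-trip in question looked something like the following; the variable name Ros_p here is only illustrative:)

Ros_p = pickle.loads(pickle.dumps(Ros_custom))   # rebuild the function via pickle
result = differential_evolution(Ros_p, bounds, updating='deferred', workers=-1)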

If I use dill instead:

Ros_pick_1=dill.dumps(Ros_custom)
Ros_pick_2=dill.loads(Ros_pick_1)
result = differential_evolution(Ros_pick_2, bounds, updating='deferred', workers=-1)
result.x, result.fun

I get the following error message:

PicklingError: Can't pickle <function Ros_custom at 0x0000020247F04C10>: it's not the same object as __main__.Ros_custom

My questions are: why does this error occur, and is there a way to make "Ros_custom" picklable so that DE can use all of the PC's processors?

Thanks in advance for any suggestions.

Solution

Two things:

  1. I can't reproduce the error you see unless I first pickle/unpickle the custom function.
  2. There's no need to pickle/unpickle the custom function before passing it to the solver.

(The error itself is a pickling-by-reference issue: after a dill round-trip, the function object being re-serialized is a new object that no longer matches the one bound to the name __main__.Ros_custom, which is exactly what the message says.)

This seems to work for me, on Python 3.6.12 and scipy 1.5.2:

>>> from scipy.optimize import rosen,differential_evolution
>>> bounds = [(0,2),(0,2)]
>>> 
>>> def Ros_custom(X):
...     x = X[0]
...     y = X[1]
...     a = 1. - x
...     b = y - x*x
...     return a*a + b*b*100
... 
>>> result = differential_evolution(Ros_custom,bounds,updating='deferred',workers=-1)
>>> result.x,result.fun
(array([1.,1.]),0.0)
>>> 
>>> result
     fun: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 4953
     nit: 164
 success: True
       x: array([1.,1.])
>>> 

I can even nest a function inside the custom objective:

>>> def foo(a,b):
...   return a*a + b*b*100
... 
>>> def custom(X):
...   x,y = X[0],X[1]
...   return foo(1.-x,y-x*x)
... 
>>> result = differential_evolution(custom, bounds, updating='deferred', workers=-1)
>>> result
     fun: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 4593
     nit: 152
 success: True
       x: array([1.,1.])

So, at least for me, the code works as expected.

You do not need to serialize/deserialize the function before using it with scipy. Yes, the function needs to be picklable, but scipy does that for you. Essentially, what happens under the hood is that your function gets serialized, passed to multiprocessing as a string, distributed to the processors, and then unpickled and used on the target processors.

Like this, for 4 sets of inputs, one run per processor:

>>> import multiprocessing as mp
>>> res = mp.Pool().map(custom, [(0,1),(1,2),(4,9),(3,4)])
>>> list(res)
[101.0,100.0,4909.0,2504.0]
>>> 

Older versions of multiprocessing had trouble serializing functions defined in the interpreter, and often required the code to be executed inside a __main__ block. If you are on Windows this is frequently still the case... and you might also need to call mp.freeze_support(), depending on how the code in scipy is implemented.
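
As a minimal sketch (not taken from the question) of what such a Windows-safe script could look like, with the objective defined at module level and the solver call behind the __main__ guard:

from multiprocessing import freeze_support
from scipy.optimize import differential_evolution

def Ros_custom(X):
    # custom Rosenbrock-style objective; defined at module level so it can be pickled
    x, y = X[0], X[1]
    return (1. - x)**2 + 100.*(y - x*x)**2

if __name__ == '__main__':
    freeze_support()    # only matters for frozen Windows executables; a no-op otherwise
    bounds = [(0, 2), (0, 2)]
    result = differential_evolution(Ros_custom, bounds,
                                    updating='deferred', workers=-1)
    print(result.x, result.fun)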

I tend to like dill (I'm the author) because it can serialize a wider range of objects than pickle. However, since scipy uses multiprocessing, which uses pickle... I often choose to use mystic (I'm the author) instead, which uses multiprocess (I'm the author), which uses dill. Very roughly, it's equivalent code, but it all runs with dill instead of pickle:

>>> from mystic.solvers import diffev2
>>> from pathos.pools import ProcessPool
>>> diffev2(custom, bounds, npop=40, ftol=1e-10, map=ProcessPool().map)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 42
         Function evaluations: 1720
array([1.00000394,1.00000836])

With mystic you also get some nice extra features, like a monitor:

>>> from mystic.monitors import VerboseMonitor
>>> mon = VerboseMonitor(5,5)
>>> diffev2(custom, bounds, npop=40, ftol=1e-10, itermon=mon, map=ProcessPool().map)
Generation 0 has ChiSquare: 0.065448
Generation 0 has fit parameters:
 [0.769543181527466,0.5810893880113548]
Generation 5 has ChiSquare: 0.065448
Generation 5 has fit parameters:
 [0.588156685059123,-0.08325052939774935]
Generation 10 has ChiSquare: 0.060129
Generation 10 has fit parameters:
 [0.8387858177101133,0.6850849855634057]
Generation 15 has ChiSquare: 0.001492
Generation 15 has fit parameters:
 [1.0904350077743412,1.2027007403275813]
Generation 20 has ChiSquare: 0.001469
Generation 20 has fit parameters:
 [0.9716429877952866,0.9466681129902448]
Generation 25 has ChiSquare: 0.000114
Generation 25 has fit parameters:
 [0.9784047411865372,0.9554056558210251]
Generation 30 has ChiSquare: 0.000000
Generation 30 has fit parameters:
 [0.996105436348129,0.9934091068974504]
Generation 35 has ChiSquare: 0.000000
Generation 35 has fit parameters:
 [0.996589586891175,0.9938925277204567]
Generation 40 has ChiSquare: 0.000000
Generation 40 has fit parameters:
 [1.0003791956048833,1.0007133195321427]
Generation 45 has ChiSquare: 0.000000
Generation 45 has fit parameters:
 [1.0000170425596364,1.0000396089375592]
Generation 50 has ChiSquare: 0.000000
Generation 50 has fit parameters:
 [0.9999013984263114,0.9998041148375927]
STOP("VTRChangeOverGeneration with {'ftol': 1e-10,'gtol': 1e-06,'generations': 30,'target': 0.0}")
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 54
         Function evaluations: 2200
array([0.99999186,0.99998338])
>>> 

All of the above runs in parallel.

So, in summary: the code should work as-is (and without pre-pickling), unless you are on Windows, in which case you may need to use freeze_support and to execute the code inside a __main__ block.
