
Using PyMC3 to uncover the properties of a simulated neural system


I want to be clear up front that I'm not interested in having someone do all the work for me; I'm only looking for guidance.

I'm reading a paper (https://doi.org/10.1371/journal.pcbi.1007481) in which the authors wrote C code that uses the Metropolis-Hastings method to generate a binary matrix of neuronal states over time. They then analyze that data with home-grown Bayesian inference, Metropolis-Hastings, and a Dirichlet process prior to discover how many distinct neuronal assemblies exist in the system, along with some other properties.

I want to take the code, which is written entirely in C, and rewrite it in Python so that it's easier to modify than the C version. I've already written the data generator, but I'm struggling to write the analysis because each hyperparameter depends on between one and three other hyperparameters, one of which is the number of neuronal assemblies.

I have a bachelor's degree in physics and mathematics, and I've spent months on this with little progress. I've had to learn a great deal about Bayesian inference, Python, C, and now PyMC3.

PyMC3 has many relevant features, including mixture distributions and even the Dirichlet process, which could solve my hyperparameter puzzle. However, that part of the PyMC3 documentation is written for a higher level of understanding than mine.

Could someone help me break down how these are used, or point me to some useful references, so that I can understand and implement these methods sometime this decade?

Here is an example of what I've tried; it may illustrate what I'm failing to grasp.

import arviz as az
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt

"""
NOTATION
-N neurons in A assemblies over M time frames.
-Each neuron belongs to one assembly. The vector t is the assembly for each neuron.
-w is an M x A matrix which is the state of all assemblies over time [w_ma is either on (1) or off (0)]
-mu_vec is the set of neurons assigned to a specific assembly mu
-Vectors of parameters are denoted by dropping the index (like x[1] is an element of x)
-Neuronal memberships and the assembly activity matrix are unobserved variables we want to estimate from observed
neuronal data.

GENERATION
- With a probability unique to each assembly, choose mu[i] from a categorical distribution.
- At time k, from a Bernoulli distribution, choose whether assembly mu is on or off (w_mu,k) with probability p[mu], the
probability that assembly mu is turned on at a specific time k.
- The conditional probability lambda is defined uniquely for each assembly, and is the probability that the ith member of
assembly mu, at time k, is on, given that the entire assembly is either on (one column of the matrix) or off (the other
column of the matrix) at that same time. Draw the activity of all neurons at all times from this conditional probability
lambda. lambda[mu][1] is the probability that at least one neuron in assembly mu will fire when the assembly is on; this
is the level of synchrony in the system.

LIKELIHOOD
- We can calculate the joint probability of neuronal membership t, assembly activity matrix w, and neuronal activity
matrix s conditional on model parameters theta. So this is the likelihood of having neuronal membership t AND having
specific activity matrices w and s with a given formula. This looks like a product of priors, giving something like a
net prior, so I'll work under that assumption.
- Okay, now they give the distributions used as priors on the different model parameters:

assembly activation probability, p[mu]:            Beta(alpha_mu^(p), beta_mu^(p))
(a)synchronous firing probability, lambda[mu][z]:  Beta(alpha_(z,mu)^(lambda), beta_(z,mu)^(lambda))
    So alpha/beta are dependent on whether the neuron in assembly mu is on or off?
Size of each assembly, n = {n1, n2, ..., nA}:      Dirichlet(alpha_1^(n), ..., alpha_A^(n))

- The alphas and betas above are hyperparameters describing prior knowledge of the model parameters, like the expected
temporal sparsity of synchronous activation of neuronal populations
"""
# Values for data generation
NCELLS = 400
TIMES = 1000
SEED = 1
outfile = 'out/membership_orig.dat'
lastIsFree = False
# True values used for data generation; these are to be estimated by PyMC3
NUM_ASSEMBLIES = 4
ACTIVITY = 0.3
LAMBDA0 = 0.04
LAMBDA1 = 0.7


with pm.Model() as unknown_model:
    # Todo: read through the paper and try to make as many assumptions as possible. Like only one time step, only one
    #  assembly, only one cell, etc. Then slowly incorporate one more quality at a time.
    # PRIOR: Number of assemblies
    """
    Not sure what the distribution should be for the number of assemblies, so I'm going to say there could be anything
    between no assemblies and every cell being its own assembly. No reason to prefer one answer over another yet.
    """
    num_assemblies_prior = pm.DiscreteUniform('num_assemblies_prior', lower=0, upper=NCELLS)

    # PRIOR: alpha/beta
    """
    Not sure what the distributions for alpha and beta are, so I'll guess half normal. I think they should be different
    for each parameter and assembly, so I'll randomly choose a standard deviation from a half normal distribution.
    """
    # Todo: Currently I assume that each alpha and beta are from a half normal distribution whose standard deviation is
    #  the same for each assembly corresponding to that parameter.
    alpha_p, alpha_lambda, alpha_n = np.abs(np.random.normal(0, 1, 3))
    beta_p, beta_lambda, beta_n = np.abs(np.random.normal(0, 1, 3))

    # PRIOR: p
    """
    So there are supposed to be mu different priors for p, but we don't know mu, so how do we integrate that into
    a single prior? Also, alpha and beta themselves should be distributions unique to each assembly. To deal with a
    case like this, with a variable number of assemblies, the Dirichlet process was implemented. So try to see if we
    can make the number of parameters more dynamic with PyMC3. Maybe fix the number of assemblies first instead of
    trying to make it dynamic. Also, consider switching to (Num)Pyro instead of PyMC3, since Theano halted development.
    https://docs.pymc.io/notebooks/data_container.html is a doc maybe about PyMC3 dynamics. Nesting priors might also
    work. Taking baby steps is totally fine - it's better to get somewhere than nowhere!
    """
    p_prior = pm.Beta('p_prior', alpha_p, beta_p)

    x = [num_assemblies_prior, p_prior]
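For reference, the generative process described in the NOTATION/GENERATION comments can be sketched end-to-end in plain NumPy. This is a minimal sketch under stated assumptions: assembly membership is drawn uniformly here (the paper places a Dirichlet prior over assembly sizes), and the parameter names mirror the NCELLS, NUM_ASSEMBLIES, ACTIVITY, LAMBDA0, LAMBDA1 constants above rather than anything from the paper's C code.

```python
import numpy as np

def generate_data(n_cells=400, n_times=1000, n_assemblies=4,
                  activity=0.3, lambda0=0.04, lambda1=0.7, seed=1):
    """Sketch of the generative model: membership t, assembly activity w,
    neuronal activity s. Uniform membership is an assumption; the paper
    uses a Dirichlet prior over assembly sizes."""
    rng = np.random.default_rng(seed)
    # t[i]: assembly membership of neuron i, drawn from a categorical
    t = rng.integers(0, n_assemblies, size=n_cells)
    # w[mu, k]: assembly mu is on/off at time k ~ Bernoulli(activity)
    w = rng.random((n_assemblies, n_times)) < activity
    # lam[z]: P(neuron fires | its assembly is off (z=0) or on (z=1))
    lam = np.array([lambda0, lambda1])
    # s[i, k]: neuron i fires at time k with probability lam[w[t[i], k]]
    s = rng.random((n_cells, n_times)) < lam[w[t].astype(int)]
    return t, w.astype(int), s.astype(int)
```

Having this in NumPy makes it easy to sanity-check any inference code: run the sampler on data whose true t, w, and lambda values you know.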
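One standard way around the "unknown number of assemblies" problem, and the idea behind the Dirichlet process machinery in the PyMC3 docs, is a truncated stick-breaking construction: instead of putting a prior directly on the number of assemblies, fix a truncation level K comfortably above the number you expect, draw K mixture weights by breaking a unit stick, and let unneeded components receive negligible weight. A minimal NumPy sketch of the construction (K and the concentration alpha here are illustrative choices, not values from the paper):

```python
import numpy as np

def stick_breaking_weights(alpha=1.0, K=20, rng=None):
    """Truncated stick-breaking: draw beta_k ~ Beta(1, alpha) and set
    w_k = beta_k * prod_{j<k} (1 - beta_j). For large enough K the
    weights approximate a draw from a Dirichlet process; small alpha
    concentrates mass on few components (few assemblies)."""
    rng = rng if rng is not None else np.random.default_rng()
    betas = rng.beta(1.0, alpha, size=K)
    # remaining[k] = fraction of the stick left before break k
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining
```

In PyMC3 the same construction is written inside the model with `pm.Beta('beta', 1., alpha, shape=K)` and a deterministic cumulative product, and each neuron's membership is then a `pm.Categorical` over those K weights; the per-assembly parameters (your p and lambda) become Beta priors with `shape=K` instead of one prior per assembly. Components whose weight collapses toward zero effectively vanish, so the number of assemblies never has to be fixed in advance.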
