
How to parallelize a rule by input in a Snakefile when each input has multiple outputs

I'm quite confused about how Snakemake parallelizes jobs within a rule. I want to use one core per input (processing the inputs separately rather than splitting cores between them), with each input producing multiple outputs.

Here is a simplified example of my code:

# Globals ---------------------------------------------------------------------

datasets = ["dataset_S1","dataset_S2"]
methods = ["pbs","pbs_windowed","ihs","xpehh"]

# Rules ----------------------------------------------------------------------- 

rule all:
    input:
        # Binary files
        expand("{dataset}/results_bin/persnp/{method}.feather",dataset=datasets,method=methods),expand("{dataset}/results_bin/pergene/{method}.feather",method=methods)


rule bin:
    input:
        "{dataset}/results_bin/convert2feather.R"
    output:
        "{dataset}/results_bin/persnp/{method}.feather","{dataset}/results_bin/pergene/{method}.feather"
    threads:
        2
    shell:
        "Rscript {input}

If I run the code above with snakemake -j2, I end up re-running the script once for every output method, which is not what I want. If instead I use the expand() function for both the input and the output of the bin rule, I end up with:

shell:
    """
    Rscript {input[0]}
    Rscript {input[1]}
    """

which, I think, cannot be parallelized.

What should I do so that each input is handled separately, allowing me to use one core per input?
Any help would be much appreciated. Thanks!

EDIT

Let me try to explain better what my scripts do and what I expect from Snakemake. Here is my example folder structure:

.
├── dataset_S1
│   ├── data
│   │   └── data.vcf
│   ├── results_bin
│   │   └── convert2feather.R
│   ├── task2
│   │   └── script.py
│   └── task3
│       └── script.sh
└── dataset_S2
    ├── data
    │   └── data.vcf
    ├── results_bin
    │   └── convert2feather.R
    ├── task2
    │   └── script.py
    └── task3
        └── script.sh

As you can see, for each dataset I have identically structured folders with identically named scripts (although the contents of the scripts may differ). In this example, the script reads the "data.vcf" file, manipulates it, and creates new folders and files inside the corresponding dataset folder. The whole task is repeated for both datasets. I figured I could somehow do the same for the scripts in the task2, task3, etc. folders (see the sketch after the example output below)...

For example, in this case the output of my pipeline would be:

.
├── dataset_S1
│   ├── data
│   │   └── data.vcf
│   └── results_bin
│       ├── convert2feather.R
│       ├── pergene
│       │   ├── ihs.feather
│       │   ├── pbs.feather
│       │   ├── pbs_windowed.feather
│       │   └── xpehh.feather
│       └── persnp
│           ├── ihs.feather
│           ├── pbs.feather
│           ├── pbs_windowed.feather
│           └── xpehh.feather
└── dataset_S2
    ├── data
    │   └── data.vcf
    └── results_bin
        ├── convert2feather.R
        ├── pergene
        │   ├── ihs.feather
        │   ├── pbs.feather
        │   ├── pbs_windowed.feather
        │   └── xpehh.feather
        └── persnp
            ├── ihs.feather
            ├── pbs.feather
            ├── pbs_windowed.feather
            └── xpehh.feather
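
Just to illustrate the kind of generalization I have in mind for the other task folders, here is a rough sketch (the rule name and the output path are only placeholders, not something I actually have working):

# Hypothetical rule for the task2 folder, mirroring the layout of rule bin.
# "{dataset}/task2/output.txt" is a placeholder for whatever script.py produces.
rule task2:
    input:
        "{dataset}/task2/script.py"
    output:
        "{dataset}/task2/output.txt"
    shell:
        "python {input}"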

EDIT2

Files and commands used:

(snakemake) cmcouto-silva@datascience-IB:~/cmcouto.silva@usp.br/lab_files/phd_data$ snakemake -j2 -p
# Globals ---------------------------------------------------------------------

datasets = ["dataset_S1",method=methods)

rule bin:
    input:
        "{dataset}/results_bin/convert2feather.R"
    output:
        expand("{{dataset}}/results_bin/persnp/{method}.feather",expand("{{dataset}}/results_bin/pergene/{method}.feather",method=methods)
    threads:
        2
    shell:
        "Rscript {input}"

Output log:

(snakemake) cmcouto-silva@datascience-IB:~/cmcouto.silva@usp.br/lab_files/phd_data$ snakemake -j2 -p
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 2
Rules claiming more threads will be scaled down.
Job counts:
    count   jobs
    1   all
    2   bin
    3

[Wed Sep 30 23:47:55 2020]
rule bin:
    input: dataset_S1/results_bin/convert2feather.R
    output: dataset_S1/results_bin/persnp/pbs.feather, dataset_S1/results_bin/persnp/pbs_windowed.feather, dataset_S1/results_bin/persnp/ihs.feather, dataset_S1/results_bin/persnp/xpehh.feather, dataset_S1/results_bin/pergene/pbs.feather, dataset_S1/results_bin/pergene/pbs_windowed.feather, dataset_S1/results_bin/pergene/ihs.feather, dataset_S1/results_bin/pergene/xpehh.feather
    jobid: 1
    wildcards: dataset=dataset_S1
    threads: 2

Rscript dataset_S1/results_bin/convert2feather.R
  Package "data.table" successfully loaded!
  Package "magrittr" successfully loaded!
  Package "snpsel" successfully loaded!
[Wed Sep 30 23:48:43 2020]
Finished job 1.
1 of 3 steps (33%) done

[Wed Sep 30 23:48:43 2020]
rule bin:
    input: dataset_S2/results_bin/convert2feather.R
    output: dataset_S2/results_bin/persnp/pbs.feather, dataset_S2/results_bin/persnp/pbs_windowed.feather, dataset_S2/results_bin/persnp/ihs.feather, dataset_S2/results_bin/persnp/xpehh.feather, dataset_S2/results_bin/pergene/pbs.feather, dataset_S2/results_bin/pergene/pbs_windowed.feather, dataset_S2/results_bin/pergene/ihs.feather, dataset_S2/results_bin/pergene/xpehh.feather
    jobid: 2
    wildcards: dataset=dataset_S2
    threads: 2

Rscript dataset_S2/results_bin/convert2feather.R
  Package "data.table" successfully loaded!
  Package "magrittr" successfully loaded!
  Package "snpsel" successfully loaded!
[Wed Sep 30 23:49:41 2020]
Finished job 2.
2 of 3 steps (67%) done

[Wed Sep 30 23:49:41 2020]
localrule all:
    input: dataset_S1/results_bin/persnp/pbs.feather, dataset_S2/results_bin/persnp/pbs.feather, dataset_S1/results_bin/pergene/xpehh.feather, dataset_S2/results_bin/pergene/xpehh.feather
    jobid: 0

[Wed Sep 30 23:49:41 2020]
Finished job 0.
3 of 3 steps (100%) done
Complete log: /home/cmcouto-silva/cmcouto.silva@usp.br/lab_files/phd_data/.snakemake/log/2020-09-30T234755.741940.snakemake.log

(snakemake) cmcouto-silva@datascience-IB:~/cmcouto.silva@usp.br/lab_files/phd_data$ cat /home/cmcouto-silva/cmcouto.silva@usp.br/lab_files/phd_data/.snakemake/log/2020-09-30T234755.741940.snakemake.log
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 2
Rules claiming more threads will be scaled down.
Job counts:
    count   jobs
    1   all
    2   bin
    3

[Wed Sep 30 23:47:55 2020]
rule bin:
    input: dataset_S1/results_bin/convert2feather.R
    output: dataset_S1/results_bin/persnp/pbs.feather, dataset_S1/results_bin/pergene/xpehh.feather
    jobid: 1
    wildcards: dataset=dataset_S1
    threads: 2

Rscript dataset_S1/results_bin/convert2feather.R
[Wed Sep 30 23:48:43 2020]
Finished job 1.
1 of 3 steps (33%) done

[Wed Sep 30 23:48:43 2020]
rule bin:
    input: dataset_S2/results_bin/convert2feather.R
    output: dataset_S2/results_bin/persnp/pbs.feather, dataset_S2/results_bin/pergene/xpehh.feather
    jobid: 2
    wildcards: dataset=dataset_S2
    threads: 2

Rscript dataset_S2/results_bin/convert2feather.R
[Wed Sep 30 23:49:41 2020]
Finished job 2.
2 of 3 steps (67%) done

[Wed Sep 30 23:49:41 2020]
localrule all:
    input: dataset_S1/results_bin/persnp/pbs.feather, dataset_S2/results_bin/pergene/xpehh.feather
    jobid: 0

[Wed Sep 30 23:49:41 2020]
Finished job 0.
3 of 3 steps (100%) done
Complete log: /home/cmcouto-silva/cmcouto.silva@usp.br/lab_files/phd_data/.snakemake/log/2020-09-30T234755.741940.snakemake.log

Solution

I'm not sure I understand you correctly, but it looks to me like you will have several "method" output files for each "dataset" input file. If so, this should work:

rule bin:
    input:
        "{dataset}/results_bin/convert2feather.R"
    output:
        expand("{{dataset}}/results_bin/persnp/{method}.feather",method=methods),expand("{{dataset}}/results_bin/pergene/{method}.feather",method=methods)
,
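
Note the double braces in {{dataset}}: they keep expand() from trying to fill in the dataset wildcard, so it remains a wildcard that Snakemake resolves once per dataset job. Assembled with the threads and shell parts already shown in your question, the full rule would look like this (just putting the pieces together, nothing new):

rule bin:
    input:
        "{dataset}/results_bin/convert2feather.R"
    output:
        expand("{{dataset}}/results_bin/persnp/{method}.feather", method=methods),
        expand("{{dataset}}/results_bin/pergene/{method}.feather", method=methods)
    threads:
        2
    shell:
        "Rscript {input}"

With {dataset} left as a wildcard, Snakemake creates one bin job per dataset, and each job writes all of its per-method files at once.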

Am I right that you have two input files (your scripts, one per dataset) and that you want them to run in parallel? If so, you need to give the snakemake call twice the number of cores defined in the rule.

The threads field in a rule sets the number of cores you want to use for this rule per input/iteration. So the first dataset will use 2 cores, and the second dataset will use another 2 cores. To run them in parallel, you need to call snakemake -j4.
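
For example (a small sketch, assuming the two datasets from your question):

# both dataset jobs run in parallel, 2 cores each (threads: 2 in the rule)
snakemake -j4

# or, if you really want one core per input, set threads: 1 in rule bin
# and give Snakemake one core per dataset job
snakemake -j2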

I hope I've understood your question correctly; if not, feel free to correct me.
