
Pandas: a Pythonic way to find rows with matching values across multiple columns, with hierarchical conditions


Sorry the title is a bit unclear; words fail me in describing this problem concisely. Hopefully the description below clarifies it. Any clarifying edits to the title are welcome.

I'm trying to create a networkx flow diagram from a Pandas dataframe. The dataframe records how orders flow through multiple companies. Most rows in the dataframe are connected, and the connections show up across multiple columns. Sample data:

df = pd.DataFrame({'Company': ['A','A','B','B','B','C','C'],
                   'event_type': ['new','route','receive','execute','route','receive','execute'],
                   'event_id': ['110','120','200','210','220','300','310'],
                   'prior_event_id': [np.nan,'110',np.nan,'120','210',np.nan,'300'],
                   'route_id': [np.nan,'foo','foo',np.nan,'bar','bar',np.nan]})

The dataframe looks like this:

  Company event_type event_id prior_event_id route_id
0       A        new      110            NaN      NaN
1       A      route      120            110      foo
2       B    receive      200            NaN      foo
3       B    execute      210            120      NaN
4       B      route      220            210      bar
5       C    receive      300            NaN      bar
6       C    execute      310            300      NaN

The order passes through three companies: A, B, and C. Within each company, a later event can be linked to its source event via event_id - prior_event_id. But that approach doesn't work for records belonging to different companies. For example, rows 1 and 2 can only be matched through a single column, route_id. So the linking mechanism I'm trying to recreate is hierarchical: if the prior_event_id - event_id column pair yields nothing, I fall back to matching on the route_id column.

The diagram below may help illustrate the linking mechanism:

[sample illustration]

My solution is clunky:

# Make every event unique so as to not confound the linking
df['event_sub'] = df.groupby(df.event_type).cumcount() + 1
df['event'] = df.event_type + ' ' + df.event_sub.astype(str)

# Find the match based on the first matching criterion
replace_dict_event = dict(df[['event_id','event']].values)
df['source'] = df['prior_event_id'].apply(lambda x: replace_dict_event.get(x, np.nan))
df['target'] = df['event_id'].apply(lambda x: replace_dict_event.get(x, np.nan))

# From the last step, find the match based on the second matching criterion for the unmatched rows
replace_dict_rtd = dict(df[df.event_type == 'route'][['route_id','event']].values)
df.loc[df.event_type == 'receive','source'] = df[df.event_type == 'receive']['route_id'].apply(lambda x: replace_dict_rtd.get(x))
df

I basically used apply twice to get the matches step by step. I'm wondering whether there's a cleaner, more Pythonic way to do this.
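For reference, the same hierarchical fallback can be collapsed into one vectorized step with Series.map and fillna. This is only a sketch reusing the event column and the two replace_dict_* dictionaries built above; it should produce the same source/target columns without apply:

df['target'] = df['event_id'].map(replace_dict_event)
# fall back to the route_id lookup only where prior_event_id found no match
df['source'] = (df['prior_event_id'].map(replace_dict_event)
                  .fillna(df['route_id'].map(replace_dict_rtd)))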

My result looks like this:

[result table screenshot]

And the networkx graph I created from it:

[flow diagram screenshot]

Solution

You have two different types of links: a) links defined by matching prior_event_id with event_id, and b) links defined by route_id. Using two separate sets of commands to extract the two different types of relationships is Pythonic (or just plain good coding practice).

That being said, since you are working with tabular data, it's better to extract the links with merges (specifically inner joins) rather than dictionary lookups with apply. Tabular data stores are optimized for exactly this kind of query, while the lookup approach becomes much slower on large datasets.

#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

if __name__ == '__main__':

    df = pd.DataFrame({'Company': ['A','A','B','B','B','C','C'],
                       'event_type': ['new','route','receive','execute','route','receive','execute'],
                       'event_id': ['110','120','200','210','220','300','310'],
                       'prior_event_id': [np.nan,'110',np.nan,'120','210',np.nan,'300'],
                       'route_id': [np.nan,'foo','foo',np.nan,'bar','bar',np.nan]})

    # --------------------------------------------------------------------------------
    # a) links established by matching event_id with prior_event_id
    df2 = pd.merge(df,df,left_on='event_id',right_on='prior_event_id',how='inner')

    #       Company_x event_type_x event_id_x prior_event_id_x route_id_x Company_y event_type_y event_id_y prior_event_id_y route_id_y
    # 0         A          new        110              NaN        NaN         A        route        120              110        foo
    # 1         A        route        120              110        foo         B      execute        210              120        NaN
    # 2         B      execute        210              120        NaN         B        route        220              210        bar
    # 3         C      receive        300              NaN        bar         C      execute        310              300        NaN

    # --------------------------------------------------------------------------------
    # b) links established by matching route_id

    # remove events without route ids
    valid = df['route_id'].notna()
    df3 = df[valid]

    #   Company event_type event_id prior_event_id route_id
    # 1       A      route      120            110      foo
    # 2       B    receive      200            NaN      foo
    # 4       B      route      220            210      bar
    # 5       C    receive      300            NaN      bar

    # join on route_id
    df4 = pd.merge(df3,df3,on='route_id',how='inner')

    #   Company_x event_type_x event_id_x prior_event_id_x route_id Company_y event_type_y event_id_y prior_event_id_y
    # 0         A        route        120              110      foo         A        route        120              110
    # 1         A        route        120              110      foo         B      receive        200              NaN
    # 2         B      receive        200              NaN      foo         A        route        120              110
    # 3         B      receive        200              NaN      foo         B      receive        200              NaN
    # 4         B        route        220              210      bar         B        route        220              210
    # 5         B        route        220              210      bar         C      receive        300              NaN
    # 6         C      receive        300              NaN      bar         B        route        220              210
    # 7         C      receive        300              NaN      bar         C      receive        300              NaN

    # remove cases where a company was matched to itself
    valid = df4['Company_x'] != df4['Company_y']
    df5 = df4[valid]

    #       Company_x event_type_x event_id_x prior_event_id_x route_id Company_y event_type_y event_id_y prior_event_id_y
    # 1         A        route        120              110      foo         B      receive        200              NaN
    # 2         B      receive        200              NaN      foo         A        route        120              110
    # 5         B        route        220              210      bar         C      receive        300              NaN
    # 6         C      receive        300              NaN      bar         B        route        220              210
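From here, the two link tables can feed straight into networkx. Below is a minimal sketch of assembling the directed flow graph; the event_type_x == 'route' filter is one assumption for keeping a single direction, since df5 lists every cross-company link twice, once in each direction:

    # --------------------------------------------------------------------------------
    # c) assemble the flow graph from both link types
    import networkx as nx

    # df5 contains each cross-company link in both directions;
    # keep only route -> receive so edges point downstream
    cross = df5[df5['event_type_x'] == 'route']

    G = nx.DiGraph()
    G.add_edges_from(zip(df2['event_id_x'],df2['event_id_y']))
    G.add_edges_from(zip(cross['event_id_x'],cross['event_id_y']))

    # nx.draw(G,with_labels=True); plt.show()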
