Pandas resample unevenly spaced hourly data into 1D / 24h bins


I have a week's worth of hourly FX data which I need to resample into "1D" or "24hr" bins, running Monday through Thursday to 12:00 pm and Friday to 21:00, totalling 5 days per week:

Date                 rate
2020-01-02 00:00:00 0.673355
2020-01-02 01:00:00 0.67311
2020-01-02 02:00:00 0.672925
2020-01-02 03:00:00 0.67224
2020-01-02 04:00:00 0.67198
2020-01-02 05:00:00 0.67223
2020-01-02 06:00:00 0.671895
2020-01-02 07:00:00 0.672175
2020-01-02 08:00:00 0.672085
2020-01-02 09:00:00 0.67087
2020-01-02 10:00:00 0.6705800000000001
2020-01-02 11:00:00 0.66884
2020-01-02 12:00:00 0.66946
2020-01-02 13:00:00 0.6701600000000001
2020-01-02 14:00:00 0.67056
2020-01-02 15:00:00 0.67124
2020-01-02 16:00:00 0.6691699999999999
2020-01-02 17:00:00 0.66883
2020-01-02 18:00:00 0.66892
2020-01-02 19:00:00 0.669345
2020-01-02 20:00:00 0.66959
2020-01-02 21:00:00 0.670175
2020-01-02 22:00:00 0.6696300000000001
2020-01-02 23:00:00 0.6698350000000001
2020-01-03 00:00:00 0.66957

So the number of hours per "day" of the week is uneven, i.e. "Monday" = Monday 00:00:00 to Monday 12:00:00, "Tuesday" (likewise Wednesday and Thursday) = Monday 13:00:00 to Tuesday 12:00:00, and "Friday" = 13:00:00 to 21:00:00.

While trying to find a solution I discovered that `base` is now deprecated, and the offset/origin methods don't behave as expected, presumably because of the uneven number of rows per day:

df.rate.resample('24h',offset=12).ohlc() 

I have spent hours trying to find a workaround.

How can I simply bin all the rows of data between each 12:00:00 timestamp into ohlc() columns?

The desired output would look like this:

Out[69]: 
                                   open      high       low     close
2020-01-02 00:00:00.0000000  0.673355  0.673355  0.673355  0.673355
2020-01-03 00:00:00.0000000  0.673110  0.673110  0.668830  0.669570
2020-01-04 00:00:00.0000000  0.668280  0.668280  0.664950  0.666395
2020-01-05 00:00:00.0000000  0.666425  0.666425  0.666425  0.666425

Answers

Using `origin` and `offset` as parameters is what you are looking for:

df.resample('24h', origin='start_day', offset='13h').ohlc()

With your example:

                    open        high        low     close
datetime                
2020-01-01 13:00:00 0.673355    0.673355    0.66884 0.66946
2020-01-02 13:00:00 0.670160    0.671240    0.66883 0.66957
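The one-liner can be verified on a toy series (hypothetical data spanning the same Thursday as the question's sample; the ramp values are stand-ins for the rate column):

```python
import pandas as pd
import numpy as np

# Toy stand-in for the hourly rate column: Thu 2020-01-02 00:00 .. Fri 00:00
idx = pd.date_range("2020-01-02 00:00", "2020-01-03 00:00", freq="h")
s = pd.Series(np.arange(len(idx), dtype=float), index=idx, name="rate")

# origin='start_day' anchors bins at midnight of the first day; offset='13h'
# shifts every bin edge to 13:00, so each bar runs 13:00 -> next day 12:00.
bars = s.resample("24h", origin="start_day", offset="13h").ohlc()
print(bars)
```

Note that the first bar is labelled 2020-01-01 13:00 even though the data starts at 2020-01-02 00:00, because pandas extends the anchored grid backwards to cover the first observation.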

Since the period lengths are unequal, IMO you have to roll your own mapping. To be precise, Monday's 1.5-day length means freq='D' cannot get the mapping right in a single pass.

Hand-rolled code also guards against records that fall outside the well-defined sessions.

Data

Slightly different timestamps are used to demonstrate that the code is correct. The days run Monday to Friday.

import pandas as pd
import numpy as np
from datetime import datetime
import io
from pandas import Timestamp, Timedelta

df = pd.read_csv(io.StringIO("""
                         rate
Date                         
2020-01-06 00:00:00  0.673355
2020-01-06 23:00:00  0.673110
2020-01-07 00:00:00  0.672925
2020-01-07 12:00:00  0.672240
2020-01-07 13:00:00  0.671980
2020-01-07 23:00:00  0.672230
2020-01-08 00:00:00  0.671895
2020-01-08 12:00:00  0.672175
2020-01-08 23:00:00  0.672085
2020-01-09 00:00:00  0.670870
2020-01-09 12:00:00  0.670580
2020-01-09 23:00:00  0.668840
2020-01-10 00:00:00  0.669460
2020-01-10 12:00:00  0.670160
2020-01-10 21:00:00  0.670560
2020-01-10 22:00:00  0.671240
2020-01-10 23:00:00  0.669170
"""), sep=r"\s{2,}", engine="python")

df.set_index(pd.to_datetime(df.index), inplace=True)

Code

def find_day(ts: Timestamp):
    """Find the trading day with irregular length"""

    wd = ts.isoweekday()
    if wd == 1:
        return ts.date()
    elif wd in (2,3,4):
        return ts.date() - Timedelta("1D") if ts.hour <= 12 else ts.date()
    elif wd == 5:
        if ts.hour <= 12:
            return ts.date() - Timedelta("1D")
        elif 13 <= ts.hour <= 21:
            return ts.date()

    # out of range or nulls
    return None

# map the timestamps, and set as new index
df.set_index(pd.DatetimeIndex(df.index.map(find_day)), inplace=True)

# drop invalid values and collect ohlc
ans = df["rate"][df.index.notnull()].resample("D").ohlc()

Result

print(ans)

                open      high       low     close
Date                                              
2020-01-06  0.673355  0.673355  0.672240  0.672240
2020-01-07  0.671980  0.672230  0.671895  0.672175
2020-01-08  0.672085  0.672085  0.670580  0.670580
2020-01-09  0.668840  0.670160  0.668840  0.670160
2020-01-10  0.670560  0.670560  0.670560  0.670560
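To see exactly where the irregular sessions begin and end, the mapper can be exercised at the boundary timestamps. A minimal self-contained check (find_day is repeated here so the snippet runs on its own):

```python
from pandas import Timestamp, Timedelta
from datetime import date

def find_day(ts: Timestamp):
    """Same mapper as above, repeated so this check is self-contained."""
    wd = ts.isoweekday()
    if wd == 1:
        return ts.date()
    elif wd in (2, 3, 4):
        return ts.date() - Timedelta("1D") if ts.hour <= 12 else ts.date()
    elif wd == 5:
        if ts.hour <= 12:
            return ts.date() - Timedelta("1D")
        elif 13 <= ts.hour <= 21:
            return ts.date()
    return None  # weekend or after Friday's close

# Monday 2020-01-06 .. Friday 2020-01-10
assert find_day(Timestamp("2020-01-06 13:00")) == date(2020, 1, 6)   # Monday's 1.5-day bar
assert find_day(Timestamp("2020-01-07 12:00")) == date(2020, 1, 6)   # Tue morning -> Monday's bar
assert find_day(Timestamp("2020-01-07 13:00")) == date(2020, 1, 7)   # Tue 13:00 opens Tuesday's bar
assert find_day(Timestamp("2020-01-10 21:00")) == date(2020, 1, 10)  # Friday close still counts
assert find_day(Timestamp("2020-01-10 22:00")) is None               # after Friday 21:00 -> dropped
```

Rows mapped to None are excluded by the `df.index.notnull()` filter before the resample.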

I ended up using a combination of groupby and the datetime day of the week to work out my specific solution:

# get idxs of time to rebal (12:00:00) ------------------------------------
df['idx'] = range(len(df))                      # row position
df['day'] = [ts.weekday() for ts in df.index]   # day of week per row

dtChgIdx = []                                   # stores "12:00:00" rows
justDates = list(dict.fromkeys(df.index.date))  # unique dates, order preserved
grouped_dates = df.groupby(df.index.date)       # group entire df by date

for d in justDates:
    tempDf = grouped_dates.get_group(d).copy()  # look at each date's group
    if tempDf['day'].iloc[0] == 6:
        continue                                # skip Sundays
    tempDf['time'] = [str(ts)[-8:] for ts in tempDf.index]  # time portion of index
    tempDf['dayCls'] = np.where(tempDf['time'] == '12:00:00', 1, 0)  # flag "12:00:00" rows
    hits = tempDf.loc[tempDf['dayCls'] == 1, 'idx']
    if len(hits):
        dtChgIdx.append(hits.iloc[0])           # row position of the 12:00:00 bar
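The positions collected in dtChgIdx can then be used to cut the frame into bars and aggregate each chunk to OHLC by hand. A minimal sketch with toy data (the two-day index and the use of np.split are assumptions for illustration, not part of the original code):

```python
import pandas as pd
import numpy as np

# Toy frame: Mon 2020-01-06 00:00 .. Tue 23:00, one row per hour
idx = pd.date_range("2020-01-06 00:00", periods=48, freq="h")
df = pd.DataFrame({"rate": np.linspace(0.670, 0.675, 48)}, index=idx)
df["idx"] = range(len(df))

# stand-in for the loop above: positions of the 12:00:00 rows
dtChgIdx = df.loc[df.index.hour == 12, "idx"].tolist()

# cut the series just after each 12:00:00 row, then aggregate each chunk
chunks = np.split(df["rate"].to_numpy(), [i + 1 for i in dtChgIdx])
bars = pd.DataFrame(
    [{"open": c[0], "high": c.max(), "low": c.min(), "close": c[-1]}
     for c in chunks if len(c)]
)
print(bars)
```

Each bar therefore runs from the row after one 12:00:00 timestamp through the next 12:00:00 row, which is the binning the question asks for.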
