
NetCDF4 file will not grow beyond 2 GB

I have a NetCDF4 file that will not grow beyond 2 GB.

I am using the following sample data, and I am converting more than 200 txt files into a single netCDF4 file.

STATIONS_ID;MESS_DATUM;  QN;FF_10;DD_10;eor
       3660;201912150000;    3;   4.6; 170;eor
       3660;201912150010;    3;   4.2; 180;eor
       3660;201912150020;    3;   4.3; 190;eor
       3660;201912150030;    3;   5.2; 190;eor
       3660;201912150040;    3;   5.1; 190;eor
       3660;201912150050;    3;   4.8; 190;eor
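
For reference, a single file in this format can be loaded with pandas roughly as follows (a minimal sketch; the file name is only a placeholder). Note that read_csv keeps the two leading spaces from the header, so the quality-flag column comes back named '  QN':

import pandas as pd

# placeholder name for one downloaded station file
sample = pd.read_csv('station_3660.txt', delimiter=';', encoding='ISO-8859-1')
print(sample.columns.tolist())
# ['STATIONS_ID', 'MESS_DATUM', '  QN', 'FF_10', 'DD_10', 'eor']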

The code is as follows:

import os

import netCDF4
import numpy as np
import pandas as pd

files = [f for f in os.listdir('.') if os.path.isfile(f)]
count = 0 
for f in files:

    filecp = open(f,"r",encoding="ISO-8859-1")
    
    
    # NC file setup
    mydata = netCDF4.Dataset('v5.nc','w',format='NETCDF4')
    
    mydata.description = 'Measurement Data'
    
    mydata.createDimension('STATION_ID',None)
    mydata.createDimension('MESS_DATUM',None)
    mydata.createDimension('QN',None)
    mydata.createDimension('FF_10',None)
    mydata.createDimension('DD_10',None)
    
    STATION_ID = mydata.createVariable('STATION_ID',np.short,('STATION_ID'))
    MESS_DATUM = mydata.createVariable('MESS_DATUM',np.int64,('MESS_DATUM'))
    QN = mydata.createVariable('QN',np.byte,('QN'))
    FF_10 = mydata.createVariable('FF_10',np.float64,('FF_10'))
    DD_10 = mydata.createVariable('DD_10',np.float64,('DD_10'))
    
    STATION_ID.units = ''
    MESS_DATUM.units = 'Central European Time yyyymmddhhmi'
    QN.units = ''
    FF_10.units = 'meters per second'
    DD_10.units = "degree"
    
    txtdata = pd.read_csv(filecp,delimiter=';').values
    
    #txtdata = np.genfromtxt(filecp,dtype=None,delimiter=';',names=True,encoding=None)
    if len(txtdata) > 0:
        
        df = pd.DataFrame(txtdata)

        sh = txtdata.shape
        print("txtdata shape is ",sh)
    
        mydata['STATION_ID'][:] = df[0]
        mydata['MESS_DATUM'][:] = df[1]
        mydata['QN'][:] = df[2]
        mydata['FF_10'][:] = df[3]
        mydata['DD_10'][:] = df[4]
    
        
    mydata.close()
    filecp.close()
    count +=1

Solution

The problem is that you create the same file anew on every pass through the loop, so the file size is limited to that of the largest single input file.

Open the file once, and append each batch of new data to the end of the netCDF data arrays.

If the first file gives you 124 values, you write:

mydata['STATION_ID'][0:124] = df[0]

and if the second file gives you 224, you write:

mydata['STATION_ID'][124:124+224] = df[0]

So, assuming the data files from https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/10_minutes/wind/recent/ have been downloaded to <text file path>:

import netCDF4
import codecs
import pandas as pd
import os
import numpy as np


mydata = netCDF4.Dataset('v5.nc','w',format='NETCDF4')
mydata.description = 'Wind Measurement Data'
mydata.createDimension('STATION_ID',None)
mydata.createDimension('MESS_DATUM',None)
mydata.createDimension('QN',None)
mydata.createDimension('FF_10',None)
mydata.createDimension('DD_10',None)

STATION_ID = mydata.createVariable('STATION_ID',np.short,('STATION_ID'))
MESS_DATUM = mydata.createVariable('MESS_DATUM',np.int64,('MESS_DATUM'))
QN = mydata.createVariable('QN',np.byte,('QN'))
FF_10 = mydata.createVariable('FF_10',np.float64,('FF_10'))
DD_10 = mydata.createVariable('DD_10',np.float64,('DD_10'))

STATION_ID.units = ''
MESS_DATUM.units = 'Central European Time yyyymmddhhmi'
QN.units = ''
FF_10.units = 'meters per second'
DD_10.units = "degree"    
fpath = <text file path>
files = [f for f in os.listdir(fpath)]
count = 0 
mydata_startindex=0
for f in files:
    filecp = open(fpath+f,"r",encoding="ISO-8859-1")
    txtdata = pd.read_csv(filecp,delimiter=';')
    chunksize = len(txtdata)
    if len(txtdata) > 0:          
        mydata['STATION_ID'][mydata_startindex:mydata_startindex+chunksize] = txtdata['STATIONS_ID']
        mydata['MESS_DATUM'][mydata_startindex:mydata_startindex+chunksize] = txtdata['MESS_DATUM']
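        # the header has two leading spaces before QN, so the pandas column is named '  QN'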
        mydata['QN'][mydata_startindex:mydata_startindex+chunksize] = txtdata['  QN']
        mydata['FF_10'][mydata_startindex:mydata_startindex+chunksize] = txtdata['FF_10']
        mydata['DD_10'][mydata_startindex:mydata_startindex+chunksize] = txtdata['DD_10']
        mydata_startindex += chunksize
    filecp.close()

mydata.close()
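
As a quick sanity check (a minimal sketch, assuming the script above has produced v5.nc in the current directory), you can reopen the file and confirm that the unlimited dimensions have grown to the total number of records across all input files:

check = netCDF4.Dataset('v5.nc', 'r')
print(len(check.dimensions['MESS_DATUM']))   # total number of rows written
print(check['FF_10'][:5])                    # first few wind speed values
check.close()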
