Why does a KeyError occur when enumerating through a DataLoader?

I am building a binary classification model that takes audio files from 40 participants and classifies them according to whether or not they have a speech disorder. The audio files have been split into 5-second segments, and to avoid subject bias I divided the train/test/validation sets so that each subject appears in only one set (i.e., participant ID02 does not appear in both the training and test sets). When I try to enumerate the DataLoader valid_loader in the code below, I get the following error, but I am not entirely sure why it happens. Does anyone have any suggestions?

KeyError                                  Traceback (most recent call last)
<ipython-input-69-55be99283cf7> in <module>()
----> 1 for i, data in enumerate(valid_loader, 0):
      2   images, labels = data
      3   print("Batch", i, "size:", len(images))

3 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    361 
    362     def __next__(self):
--> 363         data = self._next_data()
    364         self._num_yielded += 1
    365         if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
    987             else:
    988                 del self._task_info[idx]
--> 989                 return self._process_data(data)
    990 
    991     def _try_put_index(self):

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
   1012         self._try_put_index()
   1013         if isinstance(data, ExceptionWrapper):
-> 1014             data.reraise()
   1015         return data
   1016 

/usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self)
    393             # (https://bugs.python.org/issue2651), so we work around it.
    394             msg = KeyErrorMessage(msg)
--> 395         raise self.exc_type(msg)

KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "<ipython-input-44-245be0a1e978>", line 19, in __getitem__
    x = Image.open(self.df['path'][index])
  File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 871, in __getitem__
    result = self.index.get_value(self, key)
  File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py", line 4405, in get_value
    return self._engine.get_value(s, k, tz=getattr(series.dtype, "tz", None))
  File "pandas/_libs/index.pyx", line 80, in pandas._libs.index.IndexEngine.get_value
  File "pandas/_libs/index.pyx", line 90, in pandas._libs.index.IndexEngine.get_value
  File "pandas/_libs/index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 998, in pandas._libs.hashtable.Int64HashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1005, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 36

Can anyone explain why this is happening?
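For context on what the traceback is doing: the `KeyError: 36` comes out of a pandas *label-based* lookup (`self.df['path'][index]`), which fails whenever the label is absent from the DataFrame's index, e.g. after filtering rows without resetting the index. A minimal, self-contained sketch (hypothetical data, not the actual dataset) of one common way this error arises:

```python
import pandas as pd

# Hypothetical DataFrame; boolean filtering keeps the original
# index labels (here only 0 and 2 survive).
df = pd.DataFrame({"path": ["a.png", "b.png", "c.png", "d.png"]})
subset = df[df.index % 2 == 0]

# Label-based lookup: label 1 no longer exists, so pandas raises KeyError
try:
    subset["path"][1]
except KeyError as e:
    print("KeyError:", e)

# Position-based lookup succeeds regardless of which labels survived
print(subset["path"].iloc[1])  # prints "c.png"
```

Whether this is the cause here depends on what the index of `valid_data_df` looks like after the splits below.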

from google.colab import drive
drive.mount('/content/drive')

import torch
import torchvision
import torch.optim as optim
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision import utils
from torch.utils.data import Dataset

from sklearn.metrics import confusion_matrix
from skimage import io, transform, data
from skimage.color import rgb2gray

import matplotlib.pyplot as plt
from tqdm import tqdm
from PIL import Image
import pandas as pd
import numpy as np
import csv
import os
import math
import cv2

root_dir = "/content/drive/My Drive/Read_Text/5_Second_Segments/"
class_names = [
  "Parkinsons_Disease", "Healthy_Control"
]

def get_meta(root_dir, dirs):
    """ Fetches the meta data for all the images and assigns labels.
    """
    paths, classes = [], []
    for i, dir_ in enumerate(dirs):
        for entry in os.scandir(root_dir + dir_):
            if entry.is_file():
                paths.append(entry.path)
                classes.append(i)

    return paths, classes


paths, classes = get_meta(root_dir, class_names)

data = {
    'path': paths,
    'class': classes
}

data_df = pd.DataFrame(data, columns=['path', 'class'])
data_df = data_df.sample(frac=1).reset_index(drop=True) # Shuffles the data

from pandas import option_context

print("Found", len(data_df), "images.")

with option_context('display.max_colwidth', 400):
    display(data_df.head(100))

class Audio(Dataset):

    def __init__(self, df, transform=None):
        """
        Args:
            image_dir (string): Directory with all the images
            df (DataFrame object): Dataframe containing the images, paths and classes
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.df = df
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        # Load image from path and get label
        x = Image.open(self.df['path'][index])
        try:
          x = x.convert('RGB') # To deal with some grayscale images in the data
        except:
          pass
        y = torch.tensor(int(self.df['class'][index]))

        if self.transform:
            x = self.transform(x)

        return x, y

def compute_img_mean_std(image_paths):
    """
        Author: @xinruizhuang. Computing the mean and std of the three channels
        on the whole dataset; first we should normalize the image from 0-255 to 0-1
    """

    img_h, img_w = 224, 224
    imgs = []
    means, stdevs = [], []

    for i in tqdm(range(len(image_paths))):
        img = cv2.imread(image_paths[i])
        img = cv2.resize(img, (img_h, img_w))
        imgs.append(img)

    imgs = np.stack(imgs, axis=3)
    print(imgs.shape)

    imgs = imgs.astype(np.float32) / 255.

    for i in range(3):
        pixels = imgs[:, :, i, :].ravel()  # flatten channel i across all images
        means.append(np.mean(pixels))
        stdevs.append(np.std(pixels))

    means.reverse()  # BGR --> RGB
    stdevs.reverse()

    print("normMean = {}".format(means))
    print("normStd = {}".format(stdevs))
    return means, stdevs

norm_mean, norm_std = compute_img_mean_std(paths)
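The per-channel statistics logic above can be sanity-checked without any file I/O by substituting synthetic arrays for the images read with `cv2.imread` (a sketch with hypothetical constant images, stacked the same way on axis 3):

```python
import numpy as np

# Two fake 4x4 BGR "images" with constant pixel values 0 and 255
imgs = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (0, 255)]
stack = np.stack(imgs, axis=3).astype(np.float32) / 255.0  # shape (H, W, C, N)

means = [float(stack[:, :, c, :].mean()) for c in range(3)]
stdevs = [float(stack[:, :, c, :].std()) for c in range(3)]
means.reverse()   # BGR -> RGB, as in compute_img_mean_std
stdevs.reverse()

print(means)   # [0.5, 0.5, 0.5]
print(stdevs)  # [0.5, 0.5, 0.5]
```

Each channel is half zeros and half ones, so the expected mean and standard deviation are both 0.5.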

data_transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(256),
        transforms.ToTensor(),
        transforms.Normalize(norm_mean, norm_std),
])

unique_users = data_df['path'].str[-20:-16].unique()
train_users, test_users = np.split(np.random.permutation(unique_users), [int(0.8*len(unique_users))])
df_train = data_df[data_df['path'].str[-20:-16].isin(train_users)]
test_data_df = data_df[data_df['path'].str[-20:-16].isin(test_users)]

train_unique_users = df_train['path'].str[-20:-16].unique()
train_users, validate_users = np.split(np.random.permutation(train_unique_users), [int(0.875*len(train_unique_users))])
train_data_df = df_train[df_train['path'].str[-20:-16].isin(train_users)]
valid_data_df = df_train[df_train['path'].str[-20:-16].isin(validate_users)]
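The subject-wise split pattern above (permute unique IDs, `np.split`, then filter with `isin`) can be checked on synthetic data to confirm no participant lands in more than one set. A self-contained sketch with hypothetical subject IDs in place of the `str[-20:-16]` path slice:

```python
import numpy as np
import pandas as pd

# Hypothetical data: 10 subjects, 3 audio segments each
df = pd.DataFrame({
    "subject": np.repeat([f"ID{i:02d}" for i in range(10)], 3),
    "segment": list(range(3)) * 10,
})

rng = np.random.default_rng(0)
subjects = rng.permutation(df["subject"].unique())
train_s, test_s = np.split(subjects, [int(0.8 * len(subjects))])

train_df = df[df["subject"].isin(train_s)]
test_df = df[df["subject"].isin(test_s)]

# No participant appears in both splits
assert set(train_df["subject"]).isdisjoint(set(test_df["subject"]))
print(len(train_df), len(test_df))  # 24 6
```

Note that boolean filtering like this preserves the parent DataFrame's original index labels on `train_df` and `test_df`; the resulting indices are non-contiguous.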

ins_dataset_train = Audio(
    df=train_data_df,
    transform=data_transform,
)

ins_dataset_valid = Audio(
    df=valid_data_df,
)

ins_dataset_test = Audio(
    df=test_data_df,
)

train_loader = torch.utils.data.DataLoader(
    ins_dataset_train, batch_size=8, shuffle=True, num_workers=2
)

test_loader = torch.utils.data.DataLoader(
    ins_dataset_test, batch_size=16, num_workers=2
)

valid_loader = torch.utils.data.DataLoader(
    ins_dataset_valid, num_workers=2
)

# (This is where the error is occurring.)
for i, data in enumerate(valid_loader, 0):
  images, labels = data
  print("Batch", i, "size:", len(images))
