How to fix "list index out of range" when writing image files to tfrecords
100%|########################################################################################################################################| 116/116 [02:40<00:00,1.39s/it]
69%|##############################################################################################1 | 79/115 [01:58<00:53,1.50s/it]
Traceback (most recent call last):
  File "write_tfrecords_for_inference.py", line 54, in <module>
    video_id = int(seq[0].split('_')[0])
IndexError: list index out of range
if not os.path.isdir(tfrecord_folder):
    os.makedirs(tfrecord_folder)

data_point_names = dpc.get_data_point_names(args.data_dir, in_sequences=True, longest_seq=args.longest_seq)
splits = [[] for _ in range(args.n_divides)]
for i in range(len(data_point_names)):
    splits[i % args.n_divides].append(data_point_names[i])

for i in range(len(splits)):
    with tf.python_io.TFRecordWriter(
            os.path.join(tfrecord_folder, "cameras_%d.tfrecords" % i)) as writer:
        for seq in tqdm(splits[i]):
            camera_features = list()
            feature_map_features = list()
            gazemap_features = list()
            gaze_ps_features = list()
            video_id = int(seq[0].split('_')[0])
            predicted_time_point_features = list()
            weight_features = list()
            for j in range(len(seq) - args.n_future_steps):
                # write camera images
                camera = cv2.imread(os.path.join(camera_folder, seq[j] + '.jpg'))
                camera = cv2.resize(
                    camera, tuple(args.image_size[::-1]), interpolation=cv2.INTER_LINEAR
                )
                camera = camera[:, :, ::-1]  # flip BGR to RGB
                camera_features.append(_bytes_feature(camera.tostring()))
                # write frame names
                time_point = int(seq[j + args.n_future_steps].split('_')[1])
                predicted_time_point_features.append(_int64_feature(time_point))
            feature_lists = {
                'cameras': tf.train.FeatureList(feature=camera_features),
                'gaze_ps': tf.train.FeatureList(feature=gaze_ps_features),
                'predicted_time_points': tf.train.FeatureList(feature=predicted_time_point_features),
                'weights': tf.train.FeatureList(feature=weight_features),
            }
            features = {'video_id': tf.train.Feature(int64_list=tf.train.Int64List(value=[video_id]))}
            example = tf.train.SequenceExample(
                context=tf.train.Features(feature=features),
                feature_lists=tf.train.FeatureLists(feature_list=feature_lists))
            writer.write(example.SerializeToString())
This code works on small datasets and creates the tfrecord files, but it raises the error above on a large dataset. It might be a memory issue with numpy, but I cannot find the exact cause. Please let me know where I need to improve the code.
I am using numpy 1.18.7 and tensorflow-gpu 1.5.
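For what it's worth, the traceback points at `seq[0]`, which only fails when `seq` itself is an empty list, so the likely culprit is `get_data_point_names` returning an empty sequence for some video on the larger dataset rather than a numpy memory problem. A minimal sketch (with made-up sequence names, not the real data) of the failure and one possible guard that skips empty sequences:

```python
# Hypothetical splits: the second one contains an empty sequence,
# which is what makes seq[0] raise IndexError in the original loop.
splits = [[['1_000', '1_001']], [[]]]

processed, skipped = [], []
for split in splits:
    for seq in split:
        if not seq:
            # guard: without this, seq[0] below raises
            # "IndexError: list index out of range"
            skipped.append(seq)
            continue
        video_id = int(seq[0].split('_')[0])
        processed.append(video_id)

print(processed, skipped)  # [1] [[]]
```

Logging which video produced an empty sequence (instead of silently skipping) would help confirm whether the data loader or the source data is at fault.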