AssertionError: Image Not Found ../dataset/images/train/4501.jpeg #195

Closed
Al-Razi-KR opened this issue Jun 24, 2020 · 26 comments · Fixed by #2042
Labels
bug Something isn't working

Comments

@Al-Razi-KR

Input:

python3 train.py --img 640 --batch 16 --epochs 5 --data ./data/dataset.yaml --cfg ./models/yolov5s.yaml --weights weights/yolov5s.pt

Output:

Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused

(train.py:19670): Gdk-CRITICAL **: 18:33:23.890: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
Apex recommended for faster mixed precision training: https://github.com/NVIDIA/apex
{'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 0.0005, 'giou': 0.05, 'cls': 0.58, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.014, 'hsv_s': 0.68, 'hsv_v': 0.36, 'degrees': 0.0, 'translate': 0.0, 'scale': 0.5, 'shear': 0.0}
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)

Namespace(adam=False, batch_size=16, bucket='', cache_images=False, cfg='./models/yolov5s.yaml', data='./data/dataset.yaml', device='', epochs=5, evolve=False, img_size=[640], multi_scale=False, name='', noautoanchor=False, nosave=False, notest=False, rect=False, resume=False, single_cls=False, weights='weights/yolov5s.pt')
Using CUDA device0 _CudaDeviceProperties(name='GeForce RTX 2080 Ti', total_memory=11019MB)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/

              from  n    params  module                                  arguments
  0             -1  1      3520  models.common.Focus                     [3, 32, 3]
  1             -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2             -1  1     19904  models.common.BottleneckCSP             [64, 64, 1]
  3             -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4             -1  1    161152  models.common.BottleneckCSP             [128, 128, 3]
  5             -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6             -1  1    641792  models.common.BottleneckCSP             [256, 256, 3]
  7             -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8             -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9             -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 10             -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11             -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12        [-1, 6]  1         0  models.common.Concat                    [1]
 13             -1  1    378624  models.common.BottleneckCSP             [512, 256, 1, False]
 14             -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15             -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16        [-1, 4]  1         0  models.common.Concat                    [1]
 17             -1  1     95104  models.common.BottleneckCSP             [256, 128, 1, False]
 18             -1  1      3483  torch.nn.modules.conv.Conv2d            [128, 27, 1, 1]
 19             -2  1    147712  models.common.Conv                      [128, 128, 3, 2]
 20       [-1, 14]  1         0  models.common.Concat                    [1]
 21             -1  1    313088  models.common.BottleneckCSP             [256, 256, 1, False]
 22             -1  1      6939  torch.nn.modules.conv.Conv2d            [256, 27, 1, 1]
 23             -2  1    590336  models.common.Conv                      [256, 256, 3, 2]
 24       [-1, 10]  1         0  models.common.Concat                    [1]
 25             -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 26             -1  1     13851  torch.nn.modules.conv.Conv2d            [512, 27, 1, 1]
 27   [-1, 22, 18]  1         0  models.yolo.Detect                      [4, [[116, 90, 156, 198, 373, 326], [30, 61, 62, 45, 59, 119], [10, 13, 16, 30, 33, 23]]]
Model Summary: 191 layers, 7.26318e+06 parameters, 7.26318e+06 gradients

Optimizer groups: 62 .bias, 70 conv.weight, 59 other
Caching labels ../dataset/labels/train.npy (8582 found, 0 missing, 0 empty, 674 duplicate, for 8582 images): 100%|████| 8582/8582 [00:00<00:00, 24748.73it/s]
Caching labels ../dataset/labels/val.npy (1958 found, 0 missing, 0 empty, 135 duplicate, for 1958 images): 100%|██████| 1958/1958 [00:00<00:00, 25395.42it/s]

Analyzing anchors... Best Possible Recall (BPR) = 0.9977
Image sizes 640 train, 640 test
Using 1 dataloader workers
Starting training for 5 epochs...

     Epoch   gpu_mem      GIoU       obj       cls     total   targets  img_size
       0/4     4.72G    0.1325   0.03671   0.05608    0.2252        60       640:   2%|▊                                    | 12/537 [00:07<03:35,  2.44it/s]Traceback (most recent call last):
  File "train.py", line 407, in <module>
    train(hyp)
  File "train.py", line 237, in train
    for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1081, in __iter__
    for obj in iterable:
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/fyp2020s1/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/fyp2020s1/YoLo_v5/yolov5/utils/datasets.py", line 446, in __getitem__
    img, labels = load_mosaic(self, index)
  File "/home/fyp2020s1/YoLo_v5/yolov5/utils/datasets.py", line 573, in load_mosaic
    img, _, (h, w) = load_image(self, index)
  File "/home/fyp2020s1/YoLo_v5/yolov5/utils/datasets.py", line 534, in load_image
    assert img is not None, 'Image Not Found ' + path
AssertionError: Image Not Found ../dataset/images/train/4501.jpeg

       0/4     4.72G    0.1325   0.03671   0.05608    0.2252        60       640:   2%|▊ 
@Al-Razi-KR added the bug label on Jun 24, 2020
@glenn-jocher
Member

@Al-Razi-KR your custom dataset is not correct; you have missing images.

@Al-Razi-KR
Author

No, I checked all the images are inside.
[image]

@fuddyduddy

fuddyduddy commented Oct 14, 2020

I just encountered the same problem. The image is in the train folder as well. It turns out the original file was a .gif. I converted it with an online converter and everything is fine again.
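
For reference, a minimal sketch of that kind of conversion with PIL; the path is a placeholder for the offending file, not a path from this thread's dataset:

```python
# Minimal sketch: check the real format of a suspect file and, if it is a GIF,
# re-save it as a true JPEG. The path is a placeholder.
from PIL import Image

path = '../dataset/images/train/4501.jpeg'  # placeholder path
img = Image.open(path)
print(img.format)             # prints 'GIF' if the file is really a GIF
if img.format == 'GIF':
    rgb = img.convert('RGB')  # take the first frame and drop the palette
    img.close()
    rgb.save(path, 'JPEG')    # overwrite with a real JPEG
```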

@zhongxu-Sun

> No, I checked all the images are inside.
> [image]

Hello, did you solve it? I encountered the same problem as you. The picture does exist!

@Al-Razi-KR
Author

> No, I checked all the images are inside.
> [image]
>
> Hello, did you solve it? I encountered the same problem as you. The picture does exist!

Yes, one of my images was in GIF format. YOLOv5 only supports a few formats, which are listed in the documentation. Check your image format using PIL. The cause of my issue was that I had renamed all the images to .jpeg without actually converting them.

@zhongxu-Sun

zhongxu-Sun commented Jan 25, 2021 via email

@Al-Razi-KR
Author

> What you mean is that there are pictures in other invalid formats in the dataset, not the one that was reported in the error, because the format of the picture that could not be found is correct and it also exists.


Try:

```
from PIL import Image

filename = r"../dataset/images/train/4501.jpeg"
img = Image.open(filename)
print(img.format)
```

@glenn-jocher
Member

Supported data formats are:

yolov5/utils/datasets.py

Lines 26 to 29 in a41d910

# Parameters
help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes
vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes
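
A quick way to find files whose real contents do not match these suffixes is to open each file with PIL and compare; a rough sketch (the helper name and dataset path are placeholders, not part of yolov5):

```python
# Hypothetical helper (not part of yolov5): flag files whose extension is in
# img_formats but whose real format, as reported by PIL, is something else
# (e.g. a GIF renamed to .jpeg). DNG is skipped because PIL cannot read it.
from pathlib import Path
from PIL import Image

img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng']

def find_mislabeled_images(folder):
    bad = []
    for f in Path(folder).rglob('*'):
        suffix = f.suffix.lower().lstrip('.')
        if suffix not in img_formats or suffix == 'dng':
            continue
        try:
            with Image.open(f) as im:
                fmt = (im.format or '').lower()
        except Exception:
            fmt = 'unreadable'
        if fmt not in ('bmp', 'jpeg', 'png', 'tiff'):
            bad.append((str(f), fmt))
    return bad

print(find_mislabeled_images('../dataset/images/train'))  # placeholder path
```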

@zhongxu-Sun

> Try:
>
> filename = r"../dataset/images/train/4501.jpeg"
> img = Image.open(filename)
> print(img.format)

Thanks, indeed, it has been resolved!

@zhongxu-Sun

> Supported data formats are:
>
> yolov5/utils/datasets.py
>
> Lines 26 to 29 in a41d910
>
> # Parameters
> help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
> img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes
> vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes

Thanks, I got it, it's solved.

@glenn-jocher
Member

glenn-jocher commented Jan 26, 2021

@zhongxu-Sun @Al-Razi-KR I was reviewing this. Our intention is to catch corrupted images/labels before training ever starts as part of the label caching process, so I think we may want to add additional checks for this failure mode.

The intended workflow is that the dataloader checks flag the corrupted image/label and exclude it from the training image list, so there are no errors and training proceeds smoothly (the corrupted image is simply ignored).
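
A minimal sketch of that kind of pre-training filter, assuming a plain Python list of image paths (this is not the actual yolov5 caching code):

```python
# Illustrative pre-training filter, not the actual yolov5 caching code:
# drop any image that PIL cannot open and verify before building the image list.
from PIL import Image

def filter_corrupted(image_paths):
    good = []
    for p in image_paths:
        try:
            with Image.open(p) as im:
                im.verify()  # raises if the file is truncated or corrupted
            good.append(p)
        except Exception as e:
            print(f'Skipping corrupted image {p}: {e}')
    return good
```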

@glenn-jocher
Member

@zhongxu-Sun @Al-Razi-KR I've opened PR #2042 to fix this issue. This runs an additional check on the actual image format using PIL img.format. For this check to actually run you'd need to delete your existing *.cache files in your dataset directories, which will trigger a new caching.

I verified the update works correctly: I added a GIF to COCO128 and renamed it with a .jpg extension. The new check caught the file, removed it from the training images, and training then proceeded correctly.

[Screenshot: 2021-01-25, 8:53 PM]
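
To trigger the new caching pass mentioned above, the existing cache files need to be removed first; a small sketch (the dataset root is a placeholder):

```python
# Delete existing *.cache files so the next training run rebuilds the cache.
# '../dataset' is a placeholder for your dataset root.
from pathlib import Path

for cache in Path('../dataset').rglob('*.cache'):
    print('Removing', cache)
    cache.unlink()
```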

@julian-douglas

Hi, I am having trouble with this! It works in Google Colab, but when I try to run it on Ubuntu using my university's servers it doesn't work. I am getting:

AssertionError: Image Not Found ../train/images/0006476c7a10ac38_jpg.rf.d0903a65b762f4a64848d7f12956628f.jpg

But when I go into a Jupyter notebook (using the university's servers) and do:

import cv2
img = cv2.imread('../train/images/0006476c7a10ac38_jpg.rf.d0903a65b762f4a64848d7f12956628f.jpg')
print(img)

The output is an array of colour values, not None. How can I resolve this? I have opencv-contrib-python version 4.5.2.54.

@glenn-jocher
Member

glenn-jocher commented Jul 1, 2021

@julian-douglas

Hi Glenn,

Thanks for the swift reply. I followed the Roboflow tutorial and exported my dataset (70,000 images) into the YOLOv5 format, e.g. 40 0.5036057692307693 0.5048076923076923 0.47596153846153844 0.4411057692307692. The thing is, they sorted it into three directories: train, valid and test. In the 'Train Custom Data' section you just posted I see only dataset/images/im0.jpg. How does it know the train/valid/test ratio?

I am also a bit confused by section 3 of 'Train Custom Data'. It says that /coco128 is assumed to be next to /yolov5; is that a requirement? I am using an external server and I don't think I can move files around so that the directories sit next to each other on Ubuntu. Is there a way to specify the directory of the training images?

@glenn-jocher
Member

@julian-douglas you can put your dataset anywhere you want and split it any way you want. See other datasets for example splits, e.g. VOC:

yolov5/data/VOC.yaml

Lines 9 to 20 in c6c88dc

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/VOC
train: # train images (relative to 'path') 16551 images
- images/train2012
- images/train2007
- images/val2012
- images/val2007
val: # val images (relative to 'path') 4952 images
- images/test2007
test: # test images (optional)
- images/test2007

@julian-douglas

So why is it able to find the images at first? It scans all of the images correctly (the final output was train: Scanning ' /homes/jd2334/trainYOLO/labels' for images and labels... 58069 found) and then fails the assert img is not None check.

The beginning of my data.yaml file looks like this:
train: /homes/jd2334/trainYOLO/images
val: /homes/jd2334/validYOLO/images
test: /homes/jd2334/testYOLO/images

And the command that I am running is:
python3 train.py --img 416 --batch 132 --epochs 10 --data ../data.yaml --cfg models/yolov5s.yaml --weights '' --name yolov5s_results --cache.
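
One way to narrow this down is to walk the train directory from data.yaml and report any file that cv2.imread cannot decode; a rough sketch (the yaml path and the 'train' key are taken from the comment above, everything else is illustrative):

```python
# Rough diagnostic, not part of yolov5: check every file in the train
# directory from data.yaml and report anything cv2.imread cannot decode.
import glob
import os

import cv2
import yaml

with open('../data.yaml') as f:
    data = yaml.safe_load(f)

for p in glob.glob(os.path.join(data['train'], '*')):
    if cv2.imread(p) is None:
        print('Unreadable:', p)
```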

@glenn-jocher
Member

@julian-douglas these two lines load an image and verify the result. If the result is None this typically indicates that cv2.imread() did not find the path successfully.

yolov5/utils/datasets.py

Lines 604 to 605 in c6c88dc

img = cv2.imread(path) # BGR
assert img is not None, 'Image Not Found ' + path
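
A quick way to reproduce that exact check outside the dataloader is to run cv2.imread on the path printed in the error from the same working directory used for training (the path below is the one reported above):

```python
# Reproduce the failing check by hand: relative paths resolve against the
# current working directory, so run this from the directory used for training.
import os

import cv2

path = '../train/images/0006476c7a10ac38_jpg.rf.d0903a65b762f4a64848d7f12956628f.jpg'
print('cwd:', os.getcwd())
print('exists:', os.path.exists(path))
print('readable:', cv2.imread(path) is not None)
```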

@zhongxu-Sun

zhongxu-Sun commented Jul 2, 2021 via email

@shamstuqa

@glenn-jocher
Hello, I have the same problem. Do you know how I can solve it?
assert img is not None, 'Image Not Found ' + path
AssertionError: Image Not Found data\train\images\1_mp4-0_jpg.rf.315eb49cc4f90c9a3bf3f7cd38c7a13d.jpg

@Viper7-adking

Error. Image not found with id: train_32
This error is reported when I generate the mAP plots at the end, and no result images are generated in the results folder either. What could be the cause? Any help would be appreciated.

@glenn-jocher
Member

@Viper7-adking Hello! This error, Error. Image not found with id: train_32, usually means some images are missing from the training data, so they cannot be found when generating the mAP plots. You need to check your training data to see whether an image file corresponding to train_32 exists. If it really does not exist, you usually need to delete the corresponding line from the annotation file and restart training.
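
As an illustration only (the directory layout and annotation format here are assumptions, not from any particular repository), such a cross-check could look like this:

```python
# Hypothetical cross-check: report annotation ids that have no matching image
# file on disk. Paths and the id-first-on-each-line format are assumptions.
import os

image_dir = 'dataset/images/train'            # placeholder
annotation_file = 'dataset/annotations.txt'   # placeholder, image id first on each line

with open(annotation_file) as f:
    ids = [line.split()[0] for line in f if line.strip()]

for image_id in ids:
    if not any(os.path.exists(os.path.join(image_dir, image_id + ext))
               for ext in ('.jpg', '.jpeg', '.png')):
        print('Missing image for id:', image_id)
```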

@Viper7-adking

Thank you! But why does the mAP output for my trained model include an extra class, face, when the corresponding txt label files only contain the two labels nomask and mask?

@glenn-jocher
Member

@Viper7-adking Hello, if the generated mAP output contains a class (face) that is not in your labels, your label files may be wrong, or the wrong weights file may have been used during training, causing the network to produce incorrect predictions. I suggest checking your label files and the weights file you used to make sure there are no errors. If that still does not solve the problem, consider re-collecting the data and training a new model.

@Viper7-adking

Thank you very much!

@glenn-jocher
Member

@Viper7-adking You're welcome! If you have any other questions, feel free to ask. Good luck!
