
CoreML conversion/export and usage (non-max suppression) #5157

Closed
pytholic opened this issue Oct 13, 2021 · 40 comments · Fixed by #6195
Labels
question Further information is requested Stale

Comments

@pytholic

Hi and thank you for the repository.

I trained my model on a custom dataset and am now trying to export it as a CoreML model to use in iOS. However, I am facing some difficulties.

  1. When I convert my best.pt model using the provided script, I see a lot of "Adding op" messages like this:
scikit-learn version 0.20.0 is not supported. Minimum required version: 0.17. Maximum required version: 0.19.2. Disabling scikit-learn conversion API.
TensorFlow version 2.6.0 detected. Last version known to be fully compatible is 2.3.1 .
Keras version 2.6.0 detected. Last version known to be fully compatible of Keras is 2.2.4 .
Adding op 'pow' of type pow
Adding op 'pow_y_0' of type const
Adding op 'mul_1' of type mul
Adding op 'mul_1_x_0' of type const
Adding op 'add' of type add
Adding op 'mul_2' of type mul
Adding op 'mul_2_x_0' of type const
Adding op 'tanh' of type tanh
Adding op 'add_1' of type add
Adding op 'add_1_x_0' of type const
Adding op 'mul' of type mul
Adding op 'mul_x_0' of type const
Adding op 'mul_3' of type mul
Adding op 'pow' of type pow
Adding op 'pow_y_1' of type const
Adding op 'mul_1' of type mul
Adding op 'mul_1_x_1' of type const
.
.
.

Is this normal?

  2. In the converted model, I lose the concatenation part (combining the outputs of the three detection levels) and also NMS. I then implemented NMS manually in Swift, but I really want it integrated into the model itself. Can you help with this? I think I may be missing something during the conversion step.

  3. When I test my code on iOS, I end up with the 10 boxes with the highest scores. However, some of these boxes have exactly the same score (even up to 7 decimal places), which I think should not be possible because the boxes are different. When I then apply NMS to keep only the best box, it doesn't really work because some boxes share the same score. Can you give me some idea regarding this?

BoundingBox(classIndex: 0, score: 0.91258967, rect: (133.61363220214844, 171.43484497070312, 181.71127319335938, 233.848876953125))
BoundingBox(classIndex: 0, score: 0.91258967, rect: (218.47067260742188, 217.3438262939453, 75.99720764160156, 142.03091430664062))
BoundingBox(classIndex: 0, score: 0.91258967, rect: (260.2138977050781, 256.07952880859375, 56.5107421875, 64.55950164794922))
BoundingBox(classIndex: 0, score: 0.9090115, rect: (259.60150146484375, 224.28857421875, 57.72898483276367, 65.09613800048828))
BoundingBox(classIndex: 0, score: 0.8489735, rect: (188.3462371826172, 214.0371551513672, 73.1979751586914, 148.64938354492188))
BoundingBox(classIndex: 0, score: 0.8489735, rect: (229.7305908203125, 254.57789611816406, 54.429264068603516, 67.56790161132812))
BoundingBox(classIndex: 0, score: 0.81404513, rect: (218.50527954101562, 264.7089538574219, 107.66510009765625, 47.08483123779297))
BoundingBox(classIndex: 0, score: 0.7050861, rect: (290.6436462402344, 100.86282348632812, 106.90132141113281, 38.55246353149414))
BoundingBox(classIndex: 0, score: 0.7050861, rect: (326.1788330078125, 94.99613952636719, 51.8309440612793, 50.285823822021484))
BoundingBox(classIndex: 0, score: 0.7050861, rect: (343.8971252441406, 109.24378967285156, 32.39434051513672, 21.790523529052734))
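For reference, greedy NMS is driven by box overlap (IoU), not by the score values themselves, so tied scores only affect the visit order; suppression still works as long as IoU is computed correctly. A minimal Python sketch (illustrative, not the Swift implementation discussed in this thread; boxes assumed to be [x, y, w, h] with a top-left corner, as printed above):

```python
def iou(a, b):
    """IoU of two boxes given as [x, y, w, h] (top-left corner + size)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS; returns indices of kept boxes. Score ties are broken by index."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Suppress only boxes that overlap the kept box above the threshold
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Even with identical scores, non-overlapping boxes are all kept; only boxes that overlap a chosen box above the IoU threshold are suppressed.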

I have also attached an image of the converted mlmodel with its properties. Thank you in advance!!

Screenshot from 2021-10-13 10-37-59

@pytholic pytholic added the question Further information is requested label Oct 13, 2021
@pytholic pytholic changed the title Non-maximum suppression in CoreML model CoreMl conversion/export and usage (non-max suppression) Oct 13, 2021
@github-actions
Contributor

github-actions bot commented Oct 13, 2021

👋 Hello @rajahaseeb147, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@pytholic pytholic changed the title CoreMl conversion/export and usage (non-max suppression) CoreML conversion/export and usage (non-max suppression) Oct 13, 2021
@softmatic

softmatic commented Oct 13, 2021

I found that the CoreML export worked when using coremltools 5.0b3 and setting the model to eval in export.py. This gives four outputs: three for the detectors and one for the concatenated output (with 640px that's a 1 x 25200 x 85 MLMultiArray). No NMS though; I wrote that myself. With this setup I get 15 FPS inference on an A13 CPU (SE II), and 45-50 FPS with 320px.

@pytholic
Author

@softmatic Hi and thank you for your reply. I used 5.0b3 and now I have the concat layer. Did you upload your code to any repo? I would appreciate a look at your NMS part and at how you process the concatenated output array. Thanks again!

@pytholic
Author

@softmatic On a side note, I worked with the https://github.com/dbsystel/yolov5-coreml-tools repo and my final model now includes NMS!

@softmatic

softmatic commented Oct 14, 2021

No repo, but here's a snippet that I use for testing (775 is the concat layer in my model with yolov5s; yours will be different, use Netron to find it):

// Accumulate detections per class: key = class index, value = [x, y, w, h, confidence, classIndex] rows
var detections = [String: [[Float]]]()

if let output = try? cml.prediction(image: pixelBuffer) {
  let confidenceThreshold: Float = 0.3
  // Flatten the 1 x 25200 x 85 MLMultiArray into a plain [Float]
  let multiArray = output._775
  let output1D = Array(UnsafeBufferPointer(start: multiArray.dataPointer.assumingMemoryBound(to: Float.self),
                                           count: multiArray.count))
  let rows = multiArray.shape[1].intValue // 25200 @ 640x640

  for i in 0..<rows {
    let confidence = output1D[i * 85 + 4]
    if confidence > confidenceThreshold {
      let row = Array(output1D[(i * 85)..<((i + 1) * 85)])
      let classes = Array(row.dropFirst(5))
      let classIndex = classes.firstIndex(of: classes.max() ?? 0) ?? 0
      // Convert from center x/y to top-left x/y
      let detection: [Float] = [row[0] - row[2] / 2, row[1] - row[3] / 2, row[2], row[3], confidence, Float(classIndex)]
      detections[String(classIndex), default: []].append(detection)
    }
  }
}

I put the detections in a dict with the class number as the key; that makes it easier to do NMS per class later. Obviously, going through the rows one by one like this is suboptimal. It should be possible to speed this up with either vDSP or a compute shader.
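The same decode-and-group logic can be sketched in Python (illustrative names; assumes the same flattened layout of 85 floats per row: center x/y, width, height, objectness, then class scores):

```python
def group_detections(output1d, conf_thresh=0.3, stride=85):
    """Group rows above the confidence threshold by class index.

    Each row is [cx, cy, w, h, objectness, class scores...]; boxes are
    converted from center coordinates to top-left, mirroring the Swift
    snippet in this thread.
    """
    detections = {}
    rows = len(output1d) // stride
    for i in range(rows):
        row = output1d[i * stride:(i + 1) * stride]
        conf = row[4]
        if conf > conf_thresh:
            classes = row[5:]
            class_index = classes.index(max(classes))
            det = [row[0] - row[2] / 2, row[1] - row[3] / 2,
                   row[2], row[3], conf, class_index]
            detections.setdefault(class_index, []).append(det)
    return detections
```

Keying the dict by class index makes per-class NMS a simple loop over `detections.items()`.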

@pytholic
Author

@softmatic Thanks a lot for sharing this and explaining. Appreciate it!!

@softmatic

softmatic commented Oct 14, 2021

As a follow-up, some experimenting shows that the loop is considerably faster than using compactMap or map to filter the rows with a confidence above the threshold, e.g.:

let multiArray = output._775
let output1D = Array(UnsafeBufferPointer(start: multiArray.dataPointer.assumingMemoryBound(to: Float.self),
                                         count: multiArray.count))

// Row indices whose objectness (every 85th value, at offset 4) exceeds the threshold
let candidates: [Int] = output1D.enumerated().compactMap { index, element in
  (index % 85) != 4 ? nil : element > confidenceThreshold ? (index - 4) / 85 : nil
}
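The same filter written in Python makes the index arithmetic easy to check: every row is 85 floats, the objectness score sits at offset 4 within the row, so element i is an objectness score exactly when i % 85 == 4, and its row index is (i - 4) / 85:

```python
def candidate_rows(output1d, conf_thresh=0.3, stride=85):
    """Indices of rows whose objectness (offset 4 within each 85-float row)
    exceeds the threshold."""
    return [(i - 4) // stride
            for i, v in enumerate(output1d)
            if i % stride == 4 and v > conf_thresh]
```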

@pytholic
Author

@softmatic Thanks a lot, I will check this out as well. I made my model work with the repo that I mentioned above; my model now has NMS integrated into it. I will also use the concatenated model with your NMS script and then compare the results from both models. Appreciate it :))

@softmatic

Got it up to 60 FPS (320x320) on an SE II with some optimization. Video.

@abhimanyuchadha96

@softmatic Great work, can you elaborate on your optimization?

@pytholic
Author

@softmatic That looks pretty good! I still haven't tested my model on video feed, just images for now. May I ask what was the key optimization step?

@softmatic

softmatic commented Oct 16, 2021

@rajahaseeb147 Thanks! I had forked the TF Lite iOS sample as a starting point but found that their way of resizing the pixel buffer before inference took almost 20 ms. Swapping this part for a Core Image-based solution reduced it to 2 ms. I also draw the overlays with a CALayer rather than an overlay UIView, which is another speedup. Finally, I made the size of the video capture depend on the model size; there is no point in capturing Full HD if you scale it down to 320x320 anyway.

BTW, I also tried CoreML INT8 quantization to see if inference is faster, but to my surprise it didn't make any difference (just a smaller model). This is very different behaviour from TF Lite, where you get a 2-3x speedup.
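For intuition on why INT8 can shrink the file without speeding up inference: linear weight quantization stores an int8 value per weight plus a per-tensor scale and zero point, and the runtime may dequantize back to float before computing, leaving latency unchanged. A toy sketch of the arithmetic (pure Python; this is not the coremltools API):

```python
def quantize_int8(weights):
    """Affine-quantize a list of floats to int8 with a per-tensor scale/zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float weights; error is bounded by the scale."""
    return [(qi - zero_point) * scale for qi in q]
```

The stored weights are 4x smaller than float32, but if the runtime dequantizes before the matmuls, compute cost (and hence FPS) stays the same, which would be consistent with the behaviour described above.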

@tcollins590

@softmatic Thanks a lot, I will check this out as well. I made my model work with the repo that I mentioned above. Now my model has NMS integrated into it. I will also use concatenated model with your NMS script and then compare results from both models. Appreciated it :))

Were you able to get the export repo working with the current version of YOLOv5, or are you using an older version?

I've been struggling to get the export to work.

@tcollins590

@rajahaseeb147

Were you able to get NMS working from the export repo above on the current YOLOv5 version, or a previous version?

I've been struggling to get the export to work.

@pytholic
Author

pytholic commented Oct 22, 2021

@tylercollins590 Hi mate. Have you seen the linked repo?
My model was trained using the latest release of YOLOv5, i.e. 6.0, but while exporting I was facing an error because the conversion repo does not support release 6.0.

So during the conversion I used the source code from yolov5 4.0 instead of 6.0, and it worked. I will also leave a link to my repo just in case.

https://github.com/rajahaseeb147/Yolov5Export/tree/main/poetry_yolov5

@softmatic

softmatic commented Oct 22, 2021

@tylercollins590 I'm using the latest version (checked out on Wednesday).

Here are my requirements (Python 3.8.10):

absl-py==0.15.0
anyio==3.3.4
argon2-cffi==21.1.0
astunparse==1.6.3
attrs==21.2.0
Babel==2.9.1
backcall==0.2.0
bleach==4.1.0
cachetools==4.2.4
certifi==2021.10.8
cffi==1.15.0
charset-normalizer==2.0.7
clang==5.0
coremltools==5.0b3
cycler==0.10.0
debugpy==1.5.1
decorator==5.1.0
defusedxml==0.7.1
entrypoints==0.3
flatbuffers==1.12
gast==0.4.0
google-auth==2.3.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.41.0
h5py==3.1.0
idna==3.3
ipykernel==6.4.2
ipython==7.28.0
ipython-genutils==0.2.0
jedi==0.18.0
Jinja2==3.0.2
json5==0.9.6
jsonschema==4.1.1
jupyter-client==7.0.6
jupyter-core==4.8.1
jupyter-server==1.11.1
jupyterlab==3.2.0
jupyterlab-pygments==0.1.2
jupyterlab-server==2.8.2
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.2
Markdown==3.3.4
MarkupSafe==2.0.1
matplotlib==3.4.3
matplotlib-inline==0.1.3
mistune==0.8.4
mpmath==1.2.1
nbclassic==0.3.2
nbclient==0.5.4
nbconvert==6.2.0
nbformat==5.1.3
nest-asyncio==1.5.1
notebook==6.4.5
numpy==1.19.5
oauthlib==3.1.1
onnx==1.10.1
opencv-python==4.5.3.56
opt-einsum==3.3.0
packaging==21.0
pandas==1.3.4
pandocfilters==1.5.0
parso==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.4.0
prometheus-client==0.11.0
prompt-toolkit==3.0.20
protobuf==3.18.1
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.10.0
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
pytz==2021.3
PyYAML==6.0
pyzmq==22.3.0
requests==2.26.0
requests-oauthlib==1.3.0
requests-unixsocket==0.2.0
rsa==4.7.2
scipy==1.7.1
seaborn==0.11.2
Send2Trash==1.8.0
six==1.15.0
sniffio==1.2.0
sympy==1.9
tensorboard==2.7.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.6.0
tensorflow-estimator==2.6.0
termcolor==1.1.0
terminado==0.12.1
testpath==0.5.0
thop==0.0.31.post2005241907
torch==1.9.1+cu111
torchaudio==0.9.1
torchvision==0.10.1+cu111
tornado==6.1
tqdm==4.62.3
traitlets==5.1.0
typing-extensions==3.7.4.3
urllib3==1.26.7
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.2.1
Werkzeug==2.0.2
wrapt==1.12.1

Export command:

python export.py --weights yolov5s.pt --imgsz 320 320 --include coreml

As written above, I had to set the model to eval for export (line 118 in export.py, as per the coreml export docs) or I wouldn't get the concat layer. Output of the export command is here; the converted model is here. Note that this model does not do NMS; you'll have to implement that yourself.

@tcollins590

@softmatic
This is very helpful, thank you

@pytholic
Author

@rajahaseeb147 Thanks! I had forked the TF Lite iOS sample as a starting point but found that their way of resizing the pixel buffer before inference took almost 20ms. Swapping this part for a CI-based solution reduced this to 2ms. I also do the overlays with a CALayer rather than a overlay UIView, that's another speedup. Finally, I made the size of the video capture depend on the model size; no point in capturing Full HD if you scale it down to 320x320 anyway.

BTW, I also tried CoreML INT8 quantization to see if inference is faster but to my surprise it didn't make any difference (just a smaller model). This is very different behaviour from TF Lite where you get a 2-3x speedup.

@softmatic @tylercollins590 I found out that if you use Apple Vision, it automatically resizes the input buffer to the dimensions required by the model, so there is no need to resize it manually!

I tested it and it detects fine without any resizing. However, my inference speed is slow at the moment.

@pytholic
Author


I am getting around 27 FPS now on the new M1 iPad Pro 11.

@MBichurin

@softmatic Hi! Thank you for your guidelines on how to export CoreML in evaluation mode. However, it didn't work for me :(
I've tried it with both Python 3.7 and 3.8, and installed the 5.0b3 version of coremltools as you said, but it keeps failing with a "No module named 'coremltools.libmodelpackage'" message.
I see @glenn-jocher writing everywhere that the CoreML model needs to be run in train mode to avoid the Detect() layer. So how did you solve it?
Thanks in advance

@MBichurin

In case someone still struggles with this, I solved it by simply switching to Linux 😅
I found out that some users have had problems using coremltools on Windows in the past. It seems the package's authors still don't provide full Windows support, even though I was able to run the code with no changes on Windows too.

@softmatic

@MBichurin, my setup was WSL 2 (standard Ubuntu distro) on Windows 11. This works for training and export, with the restriction that only one of my two GPUs is used for training (PyTorch uses an NCCL version that is not compatible with WSL 2 at the moment; see NVIDIA/nccl#442).

@github-actions
Contributor

github-actions bot commented Dec 18, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@glenn-jocher glenn-jocher linked a pull request Jan 5, 2022 that will close this issue
@glenn-jocher
Member

@pytholic @softmatic @tylercollins590 @abhimanyuchadha96 @MBichurin good news 😃! Your original issue may now be fixed ✅ in PR #6195. This PR adds support for YOLOv5 CoreML inference.

!python export.py --weights yolov5s.pt --include coreml  # CoreML export
!python detect.py --weights yolov5s.mlmodel  # CoreML inference (MacOS-only)
!python val.py --weights yolov5s.mlmodel  # CoreML validation (MacOS-only)

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.mlmodel')  # CoreML PyTorch Hub model

Screen Shot 2022-01-04 at 5 41 07 PM

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@pytholic
Author

pytholic commented Jan 5, 2022

@glenn-jocher Thank you for the update. Cheers!

@yuanning6

Hello, I tried various methods to use the tool you used (https://github.com/dbsystel/yolov5-coreml-tools), but I encountered the issues mentioned in the repo. I tried all the solutions they proposed but wasn't able to resolve them. Converting YOLOv5 to CoreML has been bothering me for three days; do you have other methods? Thank you so so much! 🙏🙏🙏

@pytholic
Author

@tylercollins590 Hi mate. Have you seen the link repo?

So basically my model was trained using the latest release of yolov5 i.e. 6.0, but while exporting I was facing an error because this conversion repo does not support release 6.0.

So what I did was that during conversion, I used source code from yolov5 4.0 instead of 6.0 and it worked. i will also leave a link to my repo just in case.

https://github.com/rajahaseeb147/Yolov5Export/tree/main/poetry_yolov5

@evelynTen hi, have you seen this comment? Also, what kind of errors are you facing?

@yuanning6

yuanning6 commented Mar 16, 2022


I tried to switch to yolov5 4.0, but I ran into issue 1; I think maybe I did something wrong. The good news is that I used your repo and it worked!!! One question I have: did you retrain your model with yolov5 v4.0? When I experimented with yolov5s.pt from yolov5 v6.0, it reported the error "can not find SPPF", but v4.0's yolov5s.pt file is fine. Thank you~ :)

@pytholic
Author


Well, that is a blessing in disguise then xD
Glad that it worked though. Not so sure about the 'NoneType' error that you got.

In my case I did not retrain my model with the 4.0 version. The model was 6.0, but the source code used during conversion was 4.0.

@pytholic
Author


One more thing: when you download the source code of version 4.0 for the conversion, do you change the number of classes (nc) to your number of classes in yolov*.yaml?

@yuanning6


Thanks for the reminder! I changed nc in this file: https://github.com/pytholic/Yolov5Export/blob/main/poetry_yolov5/yolov5/models/yolov5s.yaml, and also the range in this file: https://github.com/pytholic/Yolov5Export/blob/main/poetry_yolov5/yolov5-coreml-tools/src/coreml_export/main.py, both to 2, which is my number of classes. But I still got the error:

Traceback (most recent call last):
  File "", line 1, in
  File "/Users/mac/Desktop/Yolov5Export/poetry_yolov5/yolov5-coreml-tools/src/coreml_export/main.py", line 312, in main
    model = torch.load(opt.model_input_path, map_location=torch.device('cpu'))[
  File "/Users/mac/opt/anaconda3/envs/python38/lib/python3.8/site-packages/torch/serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/Users/mac/opt/anaconda3/envs/python38/lib/python3.8/site-packages/torch/serialization.py", line 853, in _load
    result = unpickler.load()
AttributeError: Can't get attribute 'SPPF' on <module 'models.common' from '/Users/mac/Desktop/Yolov5Export/poetry_yolov5/yolov5/models/common.py'>

When I ran yolov5s.pt from v4.0, everything was fine.

@pytholic
Author


What happens when you run yolov5s.pt from version 6.0?

@yuanning6


It got the same error. Sad...

@yuanning6

I just tried retraining my model with v4.0, but the system automatically downloaded v6.1's yolov5s.pt... OMG, the error came up again...
May I ask you to run my .pt file on your computer? This is part of my graduation project and I have no other way to solve this problem. 😭😭😭

@pytholic
Author


@evelynTen somehow I missed your comment. Sorry!!
Were you able to resolve it? I don't have access to my old PC, so I might have to set everything up from scratch.

@mshamash

mshamash commented Apr 4, 2022

Submitted pull request #7263, which has an updated export.py script so that the exported CoreML model has an NMS layer.

@tcollins590

@mshamash Really appreciate you putting together this PR. Does this work on the current version of YOLOv5 as well as the v6 models?

@mshamash

mshamash commented Apr 4, 2022

@tylercollins590 I tested it on the current version of the YOLOv5 models, "v6.1", as well as "v5" models. I haven't tested it on "v6" models, but I am fairly confident it would work on those too.

@tcollins590

@mshamash That's great news! I'm going to give your PR a try for myself. Thanks again for the effort.

@wmcnally

@mshamash thank you for this code. It works well. One thing I noticed is that the CoreML NMS layer outputs confidences that are much higher than the raw confidences output from CoreMLExportModel. Do you have any idea why that is?
